greyfox

joined 2 years ago
[–] [email protected] 1 points 3 weeks ago

Since the ER-X is Linux under the hood, the easiest thing to do would be to just SSH in and run tcpdump.

Since you suspect this is from the UDR itself, you should be able to filter for the IP of the UDR's management interface. That should get you destination IPs, which will hopefully help track it down.
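Something like this should do it (eth0 and 192.168.1.2 are just placeholders for your WAN interface and the UDR's management IP):

    tcpdump -i eth0 -n host 192.168.1.2

The -n flag skips DNS lookups so you see the raw destination IPs as the packets go by.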

Not sure what would cause that sort of traffic, but I know there used to be a WAN speed test on the UniFi main page which could chew up a good amount of traffic. I wouldn't think it would be constant, though.

Do you have other UniFi devices that might have been adopted with layer 3 adoption? Depending on how you set up layer 3 adoption, even devices local to your network might be using hairpin NAT on the ER-X, which could look like internet activity destined for the UDR even though it is all local.

[–] [email protected] 29 points 3 weeks ago

Named volumes are often the default because there is no chance of them conflicting with other services or containers running on the system.

Say you deployed two different docker compose apps each with their own MariaDB. With named volumes there is zero chance of those conflicting (at least from the filesystem perspective).
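For illustration, a compose file using a named volume looks roughly like this (image tag and volume name are just examples):

    services:
      db:
        image: mariadb:11
        volumes:
          - db_data:/var/lib/mysql

    volumes:
      db_data:

Docker keeps db_data under /var/lib/docker/volumes/ and prefixes it with the compose project name, so two apps declaring the same volume name won't collide.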

This also facilitates easier cleanup. The app's documentation can just say "docker compose down -v" and you are done, instead of listing a bunch of directories that need to be cleaned up.

Those lingering directories can also cause problems for users who wanted a clean start after their app broke; with a bind mount, that broken database schema won't have been deleted for them when they start the services back up.

All that said, I very much agree that when you go to deploy a Docker service you should consider changing the named volumes to standard bind mounts, for a few reasons:

  • When running production applications I don't want the volumes to be so easy to clean up. A little extra protection from accidental deletion is handy.

  • The default location for named volumes doesn't work well with any advanced partitioning strategy, e.g. if you want your database volume on a different partition than your static web content.

  • This one is older and maybe more personal preference at this point, but back before the Docker overlay2 storage driver had matured we used the btrfs driver instead, and occasionally Docker would break and we would need to wipe out the entire /var/lib/docker btrfs filesystem. So I just personally want to keep anything persistent out of that directory.

So basically application writers should use named volumes to simplify the documentation/installation/maintenance/cleanup of their applications.

Systems administrators running those applications should know and understand the docker compose file well enough to change those settings and make them production ready for their environment. Reading through it and making those changes ends up being part of learning how the containers are structured in the first place.
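Converting is usually a one-line change per volume, something like this (the host path is made up; use whatever fits your partition layout):

    services:
      db:
        image: mariadb:11
        volumes:
          - /srv/myapp/db:/var/lib/mysql

With a bind mount like that, "docker compose down -v" no longer touches the data, and the directory can live on whatever partition you want.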

[–] [email protected] 3 points 1 month ago* (last edited 1 month ago)

For shared lines like cable and wireless it is often asymmetrical so that everyone gets better speeds, not so they can hold you back.

For wireless service providers, for instance, let's say you have 20 customers on a single access point. Like a walkie-talkie, the radio can't transmit and receive at the same time, and no two customers can be transmitting at the same time either.

So to get around this problem TDMA (time division multiple access) is used. Basically time is split into slices and each user is given a certain percentage of those slices.

Since the AP is transmitting to everyone, it usually gets the bulk of the slices, like 60+%. That is the shared download capacity for everyone on the network.

Most users don't really upload much, so giving the user radios slices equal to the AP's would be a massive waste of airtime. And since there are 20 customers on this theoretical AP, every 1mbit cut off of each user's upload speed is 20mbit added to the total download capacity available to anyone downloading on that AP.

So let's say we have APs/clients capable of 1000mbit. With 20 users and one AP, if we wanted symmetrical speeds we would need 40 equal slots: 20 slots for the AP (one for each user's download) and one slot per user for upload. Every user gets 25mbit download and 25mbit upload.

Contrast that with asymmetrical. Let's say we do an 80/20 AP/client airtime split. We end up with 800mbit of download shared amongst everyone and 10mbit of upload per user.

In the worst case scenario every user is downloading at the same time, meaning you get about 40mbit of that 800. That's still quite the improvement over 25mbit, and if some of those people aren't home or aren't active at the time, that means that much more for those who are.
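A rough back-of-the-envelope version of that math, using only the hypothetical numbers from the example above:

    link=1000   # mbit of total airtime capacity
    users=20

    # symmetrical: 40 equal slots
    echo "$((link / (2 * users)))mbit down and up per user"        # 25mbit

    # 80/20 AP/client airtime split
    echo "$((link * 80 / 100))mbit shared download"                # 800mbit
    echo "$((link * 80 / 100 / users))mbit worst-case download"    # 40mbit
    echo "$((link * 20 / 100 / users))mbit upload per user"        # 10mbit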

I think the size of the slices is a little more dynamic on more modern systems, where the AP adjusts the user radios' slices on the fly so that idle clients don't get a bunch of dead air, but they still need a little time allocated to them for when data does start to flow.

A quick Google seems to show that DOCSIS cable modems use TDMA as well, so this all likely applies to cable users too.

[–] [email protected] 2 points 1 month ago (1 children)

They are from the Lemmynsfw instance. Probably automatically applied to any post coming from that instance.

[–] [email protected] 1 points 1 month ago

I am assuming this is the LVM volume that Ubuntu creates if you selected the LVM option when installing.

Think of LVM as a simpler, more flexible version of RAID0. It isn't there to offer redundancy, but it can make multiple disks aggregate their storage/performance into a single block device. It doesn't have all of the performance benefits of RAID0, particularly with sequential reads, but in the case of file servers with multiple active users it can probably perform even better than a RAID0 volume would.

The first thing to do is look at what volume groups you have. A volume group is one or more drives that create a pool of storage we can allocate space from to create logical volumes. Run vgdisplay and you will get a summary of all of the volume groups. If you see a lot of storage available on the 'Free PE / Size' line (PE means physical extents), that means you have storage in the pool that hasn't been allocated to a logical volume yet.

If you have a set of OS disks and a separate set of storage disks, it is probably a good idea to create a separate volume group for your storage disks instead of combining them with the OS disks. This keeps the OS and your storage separate so it is easier to do things like rebuild the OS or migrate to new hardware. If you have enough storage to keep your data volumes separate, you should consider ZFS or btrfs for those volumes instead of LVM; they have a lot of extra features that can protect your data.

If you don't have free space then you might be missing additional drives that you want added to the pool. You can list all of the physical volumes that have been formatted for use with LVM by running the pvs command, which shows each formatted drive and whether it is associated with a volume group. If you have additional drives that you want to add to your volume group, you can run pvcreate /dev/yourvolume to format them.

Once the new drives have been formatted they need to be added to the volume group. Run vgextend volumegroupname /dev/yourvolume to add the new physical device to your volume group. You should re-run vgdisplay afterwards and verify the new physical extents have been added.
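Assuming the default Ubuntu volume group name (ubuntu-vg) and a hypothetical new disk at /dev/sdb, the sequence looks something like:

    pvcreate /dev/sdb
    vgextend ubuntu-vg /dev/sdb
    vgdisplay ubuntu-vg    # Free PE / Size should have grown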

If you are looking to have redundancy in this storage, you would usually build an mdadm array and then run pvcreate on the volume created by mdadm. LVM is usually not used to give you redundancy; other tools are better for that. Typically LVM is used for pooling storage, snapshots, carving multiple volumes out of a large device, etc.

So one way or another your additional space should be in the volume group now, but that doesn't make it usable by the OS yet. On top of the volume group we create logical volumes, which are virtual block devices made up of physical extents on the physical disks. If you run lvdisplay you will see a list of the logical volumes that were created by the Ubuntu installer, which is probably just one by default.

You can create new logical volumes with the lvcreate command, or resize the volume that is already there with lvresize. I see other posts already explained those commands in more detail.

Once you have extended the logical volume (the virtual block device) you have to extend the filesystem on top of it. That procedure depends on what filesystem you are using on your logical volume: likely resize2fs for ext4 (the Ubuntu default), or xfs_growfs if you are on XFS.
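As a sketch, again assuming the default Ubuntu names (ubuntu-vg/ubuntu-lv) and an ext4 filesystem:

    lvresize -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
    resize2fs /dev/ubuntu-vg/ubuntu-lv
    # or for XFS, which resizes via the mountpoint:
    # xfs_growfs /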

[–] [email protected] 4 points 1 month ago

FYI the latest SteamOS release (just a couple of days ago) added an option to only charge to 80%.

[–] [email protected] 4 points 1 month ago

The problem is that on top of the pins occasionally not making good contact on these new connectors, Nvidia has been cheaping out on how power is delivered to the card.

They used to have three shunt resistors that the card could use to measure voltage drop. That meant the six power pins were split into pairs, and if any pair wasn't making contact the card could detect it and refuse to power up.

There could still be a single pin in each of those pairs not making contact, meaning the remaining pins are forced to handle double their rated current. Losing one pin in every pair is an unlikely worst case, but a single pin in a single pair failing could be fairly common.

But on the 40 series they dropped to two shunt resistors. So instead of three pairs, they can only monitor two bundles of three wires, meaning the card can only detect that the plug isn't seated correctly if all three wires in the same bundle are disconnected.

You could theoretically have only two out of six power pins plugged in and the card would think everything is fine, with each of those two remaining pins forced to handle three times its normal current.

And on the 5090 FE they dropped down to one shunt resistor... So five of the six pins can be disconnected and the card thinks everything is fine, forcing six times the current down a single wire.
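To put rough numbers on that, assume a hypothetical 600W draw at 12V (roughly what the connector is rated for):

    600W / 12V = 50A total
    50A / 6 pins ≈ 8.3A per pin with all pins sharing the load
    50A / 1 pin = 50A through a single wire in the worst case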

https://www.youtube.com/watch?v=kb5YzMoVQyw

So the point of these fused cables is to work around a lack of power monitoring on the card itself with cables that destroy themselves instead of melting the connector on your $2000 GPU.

[–] [email protected] 15 points 1 month ago (2 children)

Cisco c3850-12x48u is about $150 on eBay.

  • 802.3bt (60watt) PoE on all ports
  • 36x 1gig rj45 ports
  • 12x 1/2.5/5/10gig rj45 ports
  • Has a module slot where you can add 4x or 8x 10gig SFP+ (the 8x module is rare, so expensive)

The main problem is the idle power consumption. About 150w with nothing plugged in.

[–] [email protected] 2 points 1 month ago

Can you run more cat6? There are plenty of HDMI over cat6 adapters that work well over some fairly long distances.

There are also plenty of extended-length HDMI cables that are 50+ feet, if you can fish the HDMI end through. They get a bit expensive at that length because they are hybrid fiber optic, but there are no noise concerns.

USB also has adapters to run over cat6. They are usually limited to USB2.0 but that should be plenty to plug a small hub in for mouse and keyboard.

[–] [email protected] 4 points 1 month ago

In the US it's just like getting your regular license. A written test first which gets you a permit to ride (restrictions on that depending on the state you are in, like no riding after dark, no highways, no passengers, etc).

Then you take the road test (or take a class) which gets the full endorsement added to your license.

But yeah I would think on private property you should have been safe.

[–] [email protected] 1 points 1 month ago (1 children)

The remote was awesome; when you had a hundred MP3s on a CD it was so easy to navigate. I upgraded to the Rio Volt and then the Rio Karma after that, and always wished they would use those expansion ports for another remote.

Karma is still one of the best mp3 players ever made. Flac, gapless playback, parametric equalizer, dock, etc. Made the iPhone look like junk except for the control wheel being a bit too easy to break.

I still have the Volt and a couple of Karmas. 128gb compact flash cards are drop in replacements for the HDD so you get even more space and better battery life. Unfortunately the phone is just too convenient to use so they collect dust now.

[–] [email protected] 6 points 1 month ago

Just a guess, but similar KDE dialogs usually use KWallet.

I think I have seen something similar in the past when I didn't have a wallet set up (remember-password dialogs would be ignored). I might have even explicitly deleted the default wallet when I ran into that.

So maybe check that you have a default KWallet and that it is open.
