  • I am assuming this is the LVM volume that Ubuntu creates if you selected the LVM option when installing.

    Think of LVM like a simpler, more flexible version of RAID0. It isn’t there to offer redundancy, but it can make multiple disks aggregate their storage/performance into a single block device. It doesn’t have all of the performance benefits of RAID0, particularly with sequential reads, but in the case of fileservers with multiple active users it can probably perform even better than a RAID0 volume would.

    The first thing to do would be to look at what volume groups you have. A volume group is one or more drives that create a pool of storage from which we can allocate space to create logical volumes. Run vgdisplay and you will get a summary of all of your volume groups. If you see a lot of storage available in the ‘Free PE / Size’ line (PE means physical extents), that means you have storage in the pool that hasn’t been allocated to a logical volume yet.
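
    A quick check looks something like this (the output values here are just illustrative):

    ```bash
    # Summarize all volume groups; the 'Free  PE / Size' line shows unallocated space
    sudo vgdisplay
    #   ...
    #   Free  PE / Size       238467 / 931.32 GiB   <- extents not yet given to any logical volume
    ```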

    If you have a set of OS disks and a separate set of storage disks, it is probably a good idea to create a separate volume group for your storage disks instead of combining them with the OS disks. This keeps the OS and your storage separate so that it is easier to do things like rebuilding the OS or migrating to new hardware. If you have enough storage to keep your data volumes separate, you should consider ZFS or btrfs for those volumes instead of LVM. ZFS/btrfs have a lot of extra features that can protect your data.

    If you don’t have free space then you might be missing additional drives that you want to have added to the pool. You can list all of the physical volumes which have been formatted to be used with LVM by running the pvs command. The pvs command shows you each formatted drive and whether it is associated with a volume group. If you have additional drives that you want to add to your volume group you can run pvcreate /dev/yourvolume to format them.

    Once the new drives have been formatted they need to be added to the volume group. Run vgextend volumegroupname /dev/yourvolume to add the new physical device to your volume group. You should re-run vgdisplay afterwards and verify the new physical extents have been added.
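
    For example, the whole add-a-drive sequence might look like this (device and volume group names are placeholders; check yours with pvs/vgs first):

    ```bash
    sudo pvs                            # list drives already formatted for LVM and their volume groups
    sudo pvcreate /dev/sdb              # format the new drive as an LVM physical volume
    sudo vgextend ubuntu-vg /dev/sdb    # add it to the existing volume group
    sudo vgdisplay ubuntu-vg            # confirm 'Free PE / Size' went up
    ```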

    If you are looking to have redundancy in this storage you would usually build an mdadm array and then do the pvcreate on the volume created by mdadm. LVM is usually not used to give you redundancy; other tools are better for that. Typically LVM is used for pooling storage, snapshots, multiple volumes from a large device, etc.
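
    If you did want redundancy underneath LVM, a rough sketch of that layering (device names are placeholders):

    ```bash
    # Build a RAID1 mirror from two disks, then hand the md device to LVM
    sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    sudo pvcreate /dev/md0
    sudo vgextend yourvolumegroup /dev/md0   # or vgcreate a new group just for storage
    ```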

    So one way or another your additional space should be in the volume group now, but that doesn’t make it usable by the OS yet. On top of the volume group we create logical volumes. These are virtual block devices made up of physical extents on the physical disks. If you run lvdisplay you will see a list of the logical volumes that were created by the Ubuntu installer, which is probably just one by default.

    You can create new logical volumes with the lvcreate command, or extend/resize the volume that is already there with lvextend/lvresize. I see other posts already explained those commands in more detail.

    Once you have extended the logical volume (the virtual block device) you have to extend the filesystem on top of it. That procedure depends on what filesystem you are using on your logical volume. Likely resize2fs for ext4 by default in Ubuntu, or xfs_growfs if you are on XFS.
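
    Putting it all together, growing the root volume on a default Ubuntu install is roughly this (the ubuntu-vg/ubuntu-lv names are the installer defaults, verify yours with lvdisplay):

    ```bash
    sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv   # give the LV all remaining free extents
    sudo resize2fs /dev/ubuntu-vg/ubuntu-lv               # grow an ext4 filesystem to match
    # for XFS the filesystem is grown via its mount point instead:
    # sudo xfs_growfs /
    ```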



  • The problem is that on top of the pins occasionally not making good contact on these new connectors, Nvidia has been cheaping out on how power is delivered to the card.

    They used to have three shunt resistors that the card could use to measure voltage drop. That meant that the six power pins were split into pairs, and if any pair didn’t make contact the card could detect it and prevent itself from powering up.

    There could be a single pin in each of those pairs not making contact, meaning the remaining pins are forced to handle double their rated power. Losing one pin in every pair is an unlikely worst case, but a single pin in a single pair failing could be fairly common.

    But on the 40 series they dropped to two shunt resistors. So instead of three pairs, they can only monitor two bundles of three wires, meaning the card can only detect that the plug isn’t plugged in correctly if all three wires in the same bundle are disconnected.

    You could theoretically have only two out of six power pins plugged in and the card would think everything is fine, with each of those two remaining pins forced to handle three times its normal current.

    And on the 5090 FE they dropped down to one shunt resistor… So five of the six pins can be disconnected and the card thinks everything is fine, forcing six times the current down a single wire.

    https://www.youtube.com/watch?v=kb5YzMoVQyw

    So the point of these fused cables is to work around a lack of power monitoring on the card itself with cables that destroy themselves instead of melting the connector on your $2000 GPU.



  • Can you run more cat6? There are plenty of HDMI over cat6 adapters that work well over some fairly long distances.

    There are also plenty of extended length HDMI cables that are 50+ feet if you can fish through the HDMI end. They get a bit expensive at that length because they are hybrid fiber optic, but there are no noise concerns.

    USB also has adapters to run over cat6. They are usually limited to USB2.0 but that should be plenty to plug a small hub in for mouse and keyboard.


  • In the US it’s just like getting your regular license. A written test first which gets you a permit to ride (restrictions on that depending on the state you are in, like no riding after dark, no highways, no passengers, etc).

    Then you take the road test (or take a class) which gets the full endorsement added to your license.

    But yeah I would think on private property you should have been safe.



  • Good catch on the redundancy, at the time posting this I didn’t realize I needed the physical space/drives to set up that safety net. 8 should be plenty for the time being. Say if I wanted to add another drive or two down the road, what sort of complications would that introduce here?

    With TrueNAS your underlying filesystem is ZFS. When you add drives to a pool you can add them:

    • individually (RAID0 - no redundancy, bad idea)
    • in a mirror (RAID1 - usually two drives, a single drive failure is fine)
    • raidz1 (RAID5 - any single drive in the set can fail, one drive’s worth of data goes to parity). Generally a max of about 5 drives in a raidz1; if you make the stripe too wide, then when a drive fails and you start a rebuild to replace it, the chances of one of the remaining drives you are reading from failing, or at least failing to read some data, increase quickly.
    • raidz2 (RAID6 - any two drives can fail, two drives’ worth of data goes to parity). I’ve run raidz2 vdevs up to about 12 drives with no problems. The extra parity drive means the chances of data corruption, or of another drive failing while you are rebuilding, are much lower.
    • raidz3 (triple parity - any three drives can fail, three drives worth of data goes to parity). I’ve run raidz3 with 24 drive wide stripes without issues. Though this was usually for backup purposes.
    • draid (any parity level and stripe width you want). This is generally for really large arrays, like 60+ disks in a pool.

    Each of these sets is called a vdev. Each pool can have multiple vdevs and there is essentially a RAID0 across all of the vdevs in the pool. ZFS tends to scale performance per vdev, so if you want it to be really fast, more smaller vdevs are better than fewer larger vdevs.

    If you created a mirror vdev with two drives, you could add a second mirror vdev later. Vdevs can be of different sizes so it is okay if the second pair of drives is a different size. So if you buy two 10TB drives later they can be added to your original pool for 18TB usable.

    What you can’t do is change a vdev from one type to another. So if you start with a mirror you can’t change to a raidz1 later.

    You can mix different vdev types in a pool though. So you could have two drives in a mirror today, and add an additional 5 drives in a raidz1 later.
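
    In command form that scenario might look like this (pool and device names are just examples; TrueNAS drives this through the UI but it maps to the same zpool operations):

    ```bash
    sudo zpool create tank mirror /dev/sda /dev/sdb                            # two-drive mirror vdev today
    sudo zpool add tank raidz1 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg    # add a 5-drive raidz1 vdev later
    sudo zpool status tank                                                     # shows both vdevs striped in one pool
    ```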

    Drives in a vdev can be different sizes, but the vdev gets sized based on the smallest drive. Any drives that are larger will be wasting space until you replace that smaller drive with a similarly sized one.

    A rather recent feature lets you expand raidz1/2/3 vdevs. So you could start with two drives today in a raidz1 (8TB usable), and add additional 8TB or higher drives later adding 8TB of usable space each time.
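
    If your OpenZFS version is new enough to have raidz expansion (2.3+), my understanding is that it is driven through zpool attach, something like this (names are examples; get the vdev name from zpool status):

    ```bash
    # Grow an existing raidz1 vdev by one disk
    sudo zpool attach tank raidz1-0 /dev/sdh
    ```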

    If you have a bunch of mismatched drives of different sizes you might want to look at UnRAID. It isn’t free but it is reasonably priced. Performance isn’t nearly as good but it has its own parity system that allows for mixing drives of many sizes and only your single largest drive needs to be used for parity. It also has options to add additional parity drives later so you can start at RAID5 and move to RAID6 or higher later when you get enough drives to warrant the extra parity.


  • My server itself is a little HP mini PC. i7, 2 TB SSD, solid little machine so far. Running Proxmox with a single debian VM which houses all my docker containers - I know I’m not using proxmox to its full advantage, but whatever it works for me. I mostly just use it for its backup system.

    Not sure how mini you mean but if it has spots for your two drives this should be plenty of hardware for both NAS and your VMs. TrueNAS can run VMs as well, but it might be a pain migrating from Proxmox.

    Think of Proxmox as a VM host that can do some NAS functions, and TrueNAS as a NAS that can do some VM functions. Play with them both, they will have their own strengths and weaknesses.

    I’ve been reading about external drive shucking, since apparently that’s a thing? Seems like my best bet here would be to crack both of these external drives open and slap them into a NAS. 16TB would be plenty for my use.

    It’s been a couple of years since I have shucked drives but occasionally the drives are slightly different than normal internal drives. There were some western digital drives that had one pin that was different from normal and worked in most computers, but some power supplies which had that pin wired required you to mask the pin before the drive would fire up.

    I wouldn’t expect any major issues; just saying you should research your particular model.

    You say 16TB with two 8TB drives so I assume you aren’t expecting any redundancy here? Make sure you have some sort of backup plan because those drives will fail eventually, it’s just a matter of time.

    You can build those as some sort of RAID0 to get you 16TB or you can just keep them as separate drives. Putting them in a RAID0 gives you some read and write performance boost, but in the event of a single drive failure you lose everything.

    If 8TB is enough you want to put them in a mirror, which gives you 8TB of storage and allows a drive to fail without losing any data. There is still a read performance boost, but maybe a slight loss on write performance.

    Hardware: while I like the form factor of Synology/Terramaster/etc, seems like the better choice would be to just slap together my own mini-ITX build and throw TrueNAS on it. Easy enough, but what sort of specs should I look for? Since I already have 2 drives to slap in, I’d be looking to spend no more than $200. Alternatively, if I did want the convenience and form factor of a “traditional” NAS, is that reasonable within the budget? From what I’ve seen it’s mostly older models in that price range.

    If you are planning on running Plex/Jellyfin, an Intel CPU with UHD 600 series or newer integrated graphics is the simplest and cheapest option. The UHD 600 series iGPU was the first Intel generation with hardware decode for h265, so if you need to transcode, Plex/Jellyfin will be able to read almost any source content and re-encode it to h264 to stream. It won’t handle everything (e.g. AV1), but at that price range it is the best option.
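
    One quick sanity check on a candidate box is whether the iGPU exposes HEVC decode to VAAPI (package names here are the Debian/Ubuntu ones):

    ```bash
    sudo apt install vainfo      # VAAPI info tool
    vainfo | grep -i hevc        # look for VAProfileHEVCMain / Main10 decode entrypoints
    ```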

    I assume I can essentially just mount the NAS like an external drive on both the server and my desktop, is that how it works? For example, Jellyfin on my server is pointed to /mnt/external, could I just mount a NAS to that same directory instead of the USB drive and not have to change a thing on the configuration side?

    Correct. Usually a NAS offers a couple of protocols. For Linux, NFS is the typical protocol used for that. For Windows it would be a Samba share. NFS isn’t the easiest to secure, so you will either end up with some IP ACLs or just allowing access to any machine on your internal network.
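
    For example, assuming the NAS exports /mnt/tank/media over NFS (hostname and paths are placeholders), the Jellyfin VM could mount it over the same directory it already uses:

    ```bash
    sudo apt install nfs-common
    sudo mount -t nfs nas.local:/mnt/tank/media /mnt/external
    # or make it permanent in /etc/fstab:
    # nas.local:/mnt/tank/media  /mnt/external  nfs  defaults,_netdev  0  0
    ```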

    If you are keeping Proxmox in the mix you can also mount your NFS share as storage for Proxmox to create the virtual hard drives on. There are occasionally reasons to do this like if you want your NAS to be making snapshots of the VMs, or for security reasons, but generally adding the extra layers is going to cut down performance so mounting inside of the VM is better.

    Will adding a NAS into the mix introduce any buffering/latency issues with Jellyfin and Navidrome?

    Streaming apps will be reading ahead and so you shouldn’t notice any changes here. Library scans might take longer just because of the extra network latency and NAS filesystem layers, but that shouldn’t have any real effect on the end user experience.

    What about emulation? I’m going to set up RomM pretty soon along with the web interface for older games, easy enough. But is streaming roms over a NAS even an option I should consider for anything past the Gamecube era?

    Anything past the GameCube era is probably large ISO files. Any game from a disc is going to be designed to load data from disc with loading screens, and an 8TB drive over 1Gb Ethernet is faster than most discs can be read. The PS4 for example only reads discs at 24MB/s. Nintendo Switch cards aren’t exactly fast either, so I don’t think they should be a concern.

    It wouldn’t be enough for current gen consoles that expect NVMe storage, but it should be plenty fast for running roms right from your NAS.


    The point is that you can still treat it like a physical game. So there are upsides in that you can lend it to your friends or resell it.

    If it is a game that gets updated often or requires updates to even play it (multiplayer games) then having the game data on the card is next to worthless anyways and just makes publishing the game more difficult because they can’t start manufacturing the cards until the game is 100% ready.

    Nintendo’s audience goes for physical much more than the other consoles, much easier swapping cards than dealing with family sharing, a lot of their adult users collect games, and generally Nintendo games hold their value much more so being able to resell is important. So this is a compromise between what their users want and what they need for modern game development.

    Slippery slope for sure if they start doing the same with single player games but there are valid reasons for them to do this, and the alternative is they just start forcing everyone to download all of their games which is even worse. MIG switch would never have been an issue for them if there just weren’t game card slots to begin with.

    Of course end users should assume the store is going to get shut down someday and their games will be inaccessible at that time. Nintendo needs to shut down those stores so that a couple of generations later they can sell everyone the same games for the second/third/fourth time.


  • Sales taxes are state/city level taxes, there are no federal sales taxes (yet). But he is essentially using the tariffs as a way to enact sales taxes without really adding a sales tax.

    With the tariffs he can add a massive tax on the people which Republicans would normally be very much against, but he can say it is about being pro American and most of them forget about all of the extra money they will be paying.

    This shifts the tax burden further onto middle/lower income homes and lets him give more income tax cuts to higher earners without increasing the deficit so much that congress would turn on him.

    The Republicans have actually been talking about this for a long time; they called it the “fair tax”. Their fair tax plan was basically a flat ~23% federal sales tax that would replace income tax, but they could never get their base behind it.

    Someone on Trump’s team realized that we buy so much from other countries that he could accomplish the same thing the fair tax aimed to do via tariffs while selling them to his party as “buy American”. His lower/middle income base eats that up, and his campaign donors see it as killing their overseas competition.

    If it weren’t for the other countries reciprocating it would have been a good plan for them.


  • Depending on how you set up your reverse proxy, it can reduce random scanning/login attempts to basically zero. The point of a reverse proxy is to act as a proxy, a sort of web router, and to validate that the HTTP requests are correctly formatted.

    For the routing: depending on what DNS name/path the request comes in with, it can route to different backends. So you can say that app1.yourdomain.com is routed to the internal IP address of your app1, and app2.yourdomain.com goes to app2. You can also do this with paths if the applications can handle it, like yourdomain.com/app1.

    When your client makes a request the reverse proxy uses the “Host” header or the SNI string that is part of the TLS connection to determine what certificate to use and what application to route to.

    There is usually a “default” backend for any request that doesn’t match any of the names for your backend services (like a scanner blindly trying to access your IP). If you disable the default backend or redirect default requests to something that you know is secure any attacker scanning your IP for vulnerabilities would get their requests rejected. The only way they can even try to hit your service is to know the correct DNS name of your service.
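
    You can see the effect from the outside with something like this (the IP and hostname are placeholders):

    ```bash
    curl -vk https://203.0.113.10/           # bare-IP scan: hits the default backend, or just gets dropped
    curl -v https://app1.yourdomain.com/     # correct name: routed through to the app1 backend
    ```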

    Some reverse proxies (Traefik, HAProxy) have options to reject the requests before the TLS negotiation has even completed. If the SNI string doesn’t match, the connection is just dropped; it doesn’t even bother to send a 404/5xx error. This can prevent an attacker from doing information gathering about the reverse proxy itself that might be helpful in attacking it.

    This is security by obscurity which isn’t really security, but it does reduce your risk because it significantly reduces the chances of an attacker being able to find your applications.

    Reverse proxies also have a much narrower scope than most applications. Your services are running a web server with your application, but is Jellyfin’s built-in webserver secure? Could an attacker send invalid data in headers/requests to trigger a buffer overflow? A reverse proxy often does a much better job of preventing those kinds of attacks, rejecting invalid requests before they ever get to your application.


  • Btrfs is a copy on write (COW) filesystem. Which means that whenever you modify a file it can’t be modified in place. Instead a new block is written and then a single atomic operation is done to flip that new block to be the location of that data.

    This is a really good thing for protecting your data from things like power outages or system crashes because the data is always in a good state on disk. Either the update happened or it didn’t there is never any in-between.

    While COW is good for data integrity it isn’t always good for speed. If you are doing lots of updates that are smaller than a block, you first have to read the rest of the block, then seek to the new location and write out the new block. On SSDs this isn’t an issue, but on HDDs it can slow things down and fragment your filesystem considerably.

    Btrfs has a defragmentation utility though so fragmentation is a fixable problem. If you were using ZFS there would be no way to reverse that fragmentation.
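
    The defrag itself is a one-liner (the path is an example; -r recurses, -v prints what it touches):

    ```bash
    sudo btrfs filesystem defragment -r -v /mnt/data
    ```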

    Other filesystems like ext4/xfs are “journaling” filesystems. Instead of writing new blocks or updating each block immediately they keep the changes in memory and write them to a “journal” on the disk. When there is time those changes from the journal are flushed to the disk to make the actual changes happen. Writing the journal to disk is a sequential operation making it more efficient on HDDs. In the event that the system crashes the filesystem replays the journal to get back to the latest state.

    ZFS has a journal equivalent called the ZFS Intent Log (ZIL). You can put the ZIL on a fast SSD (a separate log device) while the data itself is on your HDDs. This also helps with the fragmentation issues for ZFS because ZFS will write incoming writes to the ZIL and then flush them to disk every few seconds. This means fewer, larger writes to the HDDs.
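
    Adding a dedicated log device to an existing pool is also a one-liner (pool and device names are examples):

    ```bash
    sudo zpool add tank log /dev/nvme0n1   # separate SSD log device for the ZIL
    ```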

    Another downside of COW is that because the filesystem is assumed to be so good at preventing corruption, in some extremely rare cases if corruption gets written to disk you might lose the entire filesystem. There are lots of checks in software to prevent that from happening but occasionally hardware issues may let the corruption past.

    This is why anyone running ZFS/btrfs for their NAS is recommended to run ECC memory. A random bit flipping in RAM might mean the wrong data gets written out, and if that data is part of the metadata of the filesystem itself the entire filesystem may be unrecoverable. This is exceedingly rare, but a risk.

    Most traditional filesystems on the other hand were built assuming that they had to cleanup corruption from system crashes, etc. So they have fsck tools that can go through and recover as much as possible when that happens.

    Lots of other posts here talking about other features that make btrfs a great choice. If you were running a high performance database a journaling filesystem would likely be faster, but maybe not by much, especially on SSDs. But for an end user system the snapshots/file checksumming/etc are far more important than a tiny bit of performance. For the potential corruption issues, if you are lacking ECC, backups are the proper mitigation (as of DDR5 there is at least on-die ECC in all RAM sticks).


  • Agreed. The nonstandard port helps too. Most script kiddies aren’t going to know your service even exists.

    Take it another step further and remove the default backend on your reverse proxy so that requests to anything but the correct DNS name are dropped (bots are just probing IPs) and you basically don’t have to worry at all. Just make sure to keep your reverse proxy up to date.

    The reverse proxy ends up enabling security through obscurity, which shouldn’t be your only line of defence, but it is an effective first line of defence especially for anyone who isn’t a target of foreign government level of attacks.

    Adding basic auth to your reverse proxy endpoints extends that a whole lot further. Form based logins on your apps might be a lot prettier, but it’s a lot harder to probe for what’s running behind your proxy when every single URI just returns 401. I trust my reverse proxy doing basic auth a lot more than I trust some PHP login form.
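
    Setting that up is usually just creating the credential file and pointing the proxy’s basic auth config at it, for example with the htpasswd tool (paths are illustrative):

    ```bash
    sudo apt install apache2-utils                       # provides htpasswd
    sudo htpasswd -c /etc/nginx/.htpasswd yourusername   # -c creates the file; prompts for a password
    ```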

    I always see posters on Lemmy setting up elaborate VPN setups as the only way to access internal services, but it seems like awful overkill to me.

    A VPN is still needed for some things that are inherently insecure or just should never be exposed to the outside, but if it is a web service with authentication required, a reverse proxy is plenty of security for a home lab.


  • You are paying for reasonably well polished software, which for non technical people makes them a very good choice.

    They have one click module installs for a lot of the things that self hosted people would want to run. If you want Plex, a OneDrive clone, photo sync on your phone, etc, just click a button and they handle installing and most of the maintenance of running that software for you. Obviously these are available on other open source NAS appliances now too, so this isn’t much of a differentiator for them anymore, but they were one of the first to do this.

    I use them for their NVR which there are open source alternatives for but they aren’t nearly as polished, user friendly, or feature rich.

    Their backup solution is also reasonably good for some home labs and small business use cases. If you have a VMware lab at home for instance it can connect to your vCenter and do incremental backups of your VMs. There is an agent for Windows machines as well so you can keep laptops/desktops backed up.

    For businesses there are backup options for Office365/Google Workspace where it can keep backups of your email/calendar/onedrive/SharePoint/etc. So there are a lot of capabilities there that aren’t really well covered with open source tools right now.

    I run my own built NAS for mass storage because anything over two drives is way too expensive from Synology and I specifically wanted ZFS, but the two drive units were priced low enough to buy just for the software. If you want a set and forget NAS they were a pretty good solution.

    If their drives are reasonably priced maybe they will still be an okay choice for some people, but we all know the point of this is for them to make more money so that is unlikely. There are alternatives like Qnap, but unless you specifically need one of their software components either build it yourself or grab one of the open source NAS distros.


  • The biggest question is, are you looking for Dolby Vision support?

    There is no open source implementation for Dolby Vision or HDR10+ so if you want to use those formats you are limited to Android/Apple/Amazon streaming boxes.

    If you want to avoid the ads from those devices, apart from sideloading APKs to replace home screens or something, the only way to get Dolby Vision with Kodi/standard Linux is to buy a CoreELEC-supported streaming device and flash it with CoreELEC.

    List of supported devices here

    CoreELEC is Kodi based so it limits your player choice, but there are plugins for Plex/Jellyfin if you want to pull from those as back ends.

    Personally it is a lot easier to just grab the latest gen Onn 4K Pro from Walmart for $50 and deal with the Google TV ads (I never leave my streaming app anyways). The only downside with the Onn is the lack of Dolby TrueHD/DTS-HD Master Audio output, but it handles AV1, and more Dolby Vision profiles than the Shield does, at a much cheaper price. It also handles HDR10+ which the Shield doesn’t, but that format isn’t nearly as common and many of the big TV brands don’t support it anyways.