Firestarter321

joined 1 year ago
[–] [email protected] 1 points 11 months ago

I don’t know what to say other than I’ve tried it and it doesn’t work. I thought it would but it doesn’t.

[–] [email protected] 1 points 11 months ago (2 children)

Not in my experience over the last couple of winters. The office just stays at 80F or more while the rest of the basement is 70F even with a fan blowing from the office out into the main room in the basement.

[–] [email protected] 1 points 11 months ago

Some of it would leave; however, most of it stayed in the office, which is why it was 80F+ in there.

[–] [email protected] 1 points 11 months ago

I have a constant 1000W load, which is ~3,400 BTU/h.
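For anyone wanting to check that math, here's a minimal sketch (the 1000W figure is from the post; 1 W ≈ 3.412 BTU/h is the standard conversion):

```python
# Sanity check: 1 W of electrical load dissipates ~3.412 BTU/h of heat.
WATTS = 1000              # constant rack load from the post
BTU_PER_WATT_HR = 3.412   # standard W -> BTU/h conversion factor

btu_per_hour = WATTS * BTU_PER_WATT_HR
kwh_per_day = WATTS * 24 / 1000

print(f"{btu_per_hour:,.0f} BTU/h")                      # ~3,412 BTU/h
print(f"{kwh_per_day:.0f} kWh/day dumped into the room")  # 24 kWh/day
```

That 24 kWh/day of waste heat is also the "supplemental heat" being pushed upstairs in the winter scenario described below.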

 

I have my office and rack in the basement, where I work during the week. In the early spring and late fall, when there isn't much cooling running, it would get too warm to be comfortable (80F-82F).

A few weeks ago I realized that I have a cold air return duct in the ceiling so I cut an 8"x10" hole in it and left the furnace fan on 24/7 hoping that would help...it didn't really.

Last week I decided to hang an ~8" fan 3" below the hole I cut into the cold air return to see what would happen if I forced air into the duct...it didn't do much.

Last Thursday I remembered something from my volunteer firefighter days about how to set up a fan to ventilate a room through a window/door, and how it was important for the cone of air to cover the entire opening. This led me to put a 12" fan in place of the 8" fan at 9PM.

Fast forward about an hour and my office was now 76F. The next morning it was 72F and it has stayed at 72F-73F ever since then.

The side benefit is that I'm able to provide a bunch of supplemental heat to the upstairs. During the <15F weather we've been having, the heat pump had been running 16hr+ per day with the electric strips kicking on periodically overnight; now the heat pump runs about 8hrs per day and the electric backup strips haven't needed to kick on at all.

I'm curious how it will work for cooling next summer, when I won't be able to run the furnace fan 24/7 since that'd just dump humidity back into the house. We'll see how that goes.

I'm still pretty happy with the results at the moment.

[–] [email protected] 1 points 11 months ago (1 children)

Sellers can send offers to potential buyers. It happens to me all of the time.

[–] [email protected] 1 points 11 months ago

No concerns here.

I buy used server chassis and power supplies for my NASes.

The only new items in my primary, backup, and offsite NASes are the 500TB of HDDs and a couple of fan splitters.

The only new items in my 3 Proxmox nodes are…nothing actually. All SSDs, HDDs, and everything else is used enterprise hardware.

[–] [email protected] 2 points 11 months ago (3 children)

The only things I buy new are the HDDs for the bulk storage.

I’ll buy used enterprise SSDs for flash storage.

[–] [email protected] 1 points 11 months ago

Not necessarily. I’ve had enterprise SSDs die that were under 1yr old with less than 100TB written.

I also have HDDs in my surveillance system that are still going strong after several petabytes written over the last 6yrs.

I just moved the HDD from my first NAS (an 8TB WD Red) to its 4th home, and it recently turned 7 y/o.

[–] [email protected] 1 points 11 months ago (2 children)

All drives die eventually, whether they are HDDs or SSDs.

8 years is a good run for any type of drive.

Backups are key for keeping your data safe over the decades.

[–] [email protected] 1 points 11 months ago

I don’t care what others do; I just have no interest in storing documents with my SSN in their software.

Could I go through the 50K+ lines of code (even though I’m not proficient with Go) looking for something nefarious…sure. Will I, given that trusted software such as Nextcloud exists and is also free…nope.

[–] [email protected] 1 points 11 months ago (2 children)

I may just have my tinfoil hat on too tight, but there’s not a chance that I’ll trust a Chinese application with any of my personal information.

 

I'm running 2 Proxmox nodes in an HA cluster with dual E5-2697A v4s, and while it works great, it's making my office rather warm.

The 9124 isn't cheap, but it has a 200W TDP and a single one benchmarks higher than my dual Xeon setup. It's $1150, which is kind of sucky; however, it should kick out much less heat, I'd think?
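Rough back-of-envelope comparison of the heat (a sketch, not measured numbers; the 200W TDP for the 9124 is from above, but the 145W TDP I'm using for the E5-2697A v4 is an assumption based on Intel's spec sheet, and TDP is a ceiling rather than actual draw):

```python
# Compare worst-case CPU heat output: dual Xeon node vs. single EPYC.
# ASSUMPTION: 145W TDP per E5-2697A v4 (Intel spec); 200W TDP for the
# EPYC 9124 comes from the post. Real draw under load will differ.
XEON_TDP_W = 145
EPYC_TDP_W = 200
BTU_PER_WATT_HR = 3.412

dual_xeon_w = 2 * XEON_TDP_W  # two sockets per node
print(f"dual Xeon:  {dual_xeon_w}W (~{dual_xeon_w * BTU_PER_WATT_HR:,.0f} BTU/h)")
print(f"EPYC 9124:  {EPYC_TDP_W}W (~{EPYC_TDP_W * BTU_PER_WATT_HR:,.0f} BTU/h)")
```

So on paper it's roughly 290W vs. 200W of CPU heat per node, before counting the rest of the platform.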

Is anyone using this CPU or the other 2 I listed below? They are $1600-$2000 though.

Supermicro H13SSL-N Motherboard - https://www.supermicro.com/en/products/motherboard/h13ssl-n

AMD EPYC 9124 - https://www.amd.com/en/products/cpu/amd-epyc-9124

or

AMD EPYC 9224 - https://www.amd.com/en/products/cpu/amd-epyc-9224

or

AMD EPYC 9254 - https://www.amd.com/en/products/cpu/amd-epyc-9254

ETA: The Supermicro H13SSL-NT is the same as the H13SSL-N except that it has 10GbE; however, it's $120 more expensive. I don't need it as I use SFP+ with fiber everywhere at the moment.

[–] [email protected] 6 points 11 months ago

I guess you could do that…or you could switch to software that hasn’t gone to $hit, like Jellyfin or Emby.

 

I've been dreading doing it; however, it wasn't too bad. I had 4 LXCs to convert to VMs in total.

The biggest difference is that the LXC that hosted CodeProject.AI for my Blue Iris server went from using 120GB down to 19GB for the same containers. I'm guessing it's due to being able to change from the vfs storage driver in the LXC to overlay2 in the VM.
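If anyone wants to check which storage driver their own Docker host is on before/after a move like this, a minimal sketch (assumes the `docker` CLI is installed and on the PATH):

```python
import subprocess

# Print Docker's active storage driver. vfs stores every image layer as a
# full copy, while overlay2 shares layers between images, which is the
# likely reason for the 120GB -> 19GB drop described above.
driver = subprocess.run(
    ["docker", "info", "--format", "{{.Driver}}"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(f"storage driver: {driver}")
```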

Having docker-compose YAML files to recreate all of the containers on the VM helped a TON, as did using rsync to move everything the containers needed over to the new VM.
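The data move was something along these lines (a sketch only; the paths and hostname are hypothetical placeholders, not the ones I actually used):

```python
import subprocess

# Copy a container's bind-mounted data from the old LXC to the new VM,
# preserving permissions, ownership, and symlinks (-a), compressing in
# transit (-z), and showing progress. Paths/host are placeholders.
SRC = "/opt/docker-data/"             # on the old LXC
DEST = "root@new-vm:/opt/docker-data/"  # hypothetical new VM

subprocess.run(["rsync", "-avz", "--progress", SRC, DEST], check=True)
```

After that, `docker compose up -d` on the VM recreates the containers against the copied data.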

Has anyone else made the move?

I got the kick in the pants to do it after trying to restore the 120GB LXC from PBS, giving up after 2 hours, and restoring it in full from a standard Proxmox backup instead, which only took 15 minutes.

 
