This is an automated archive.

The original was posted on /r/sysadmin by /u/Verukins on 2024-01-23 06:23:29+00:00.


Hi all,

As per the title.

I'm working in a mid-size org with a couple of hundred TB of data across many file servers, which have been set up horrendously badly.

I'm currently writing a re-design document that will be the basis for standardizing servers, back-end storage, DFS-N namespaces, DFS-R replication groups, classifying data, etc.

One thing I'm a bit stuck on is data tiering.

Azure Files with Azure File Sync takes the approach of storing everything in the cloud and using the file servers as a local cache, leaving behind pointers (which is good), but with our amount of data the cost is just too high.
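
For anyone curious what those pointers look like on disk, here's a minimal sketch (my own illustration, not official Azure File Sync tooling) that checks the offline/reparse-point attributes a cloud-tiered file typically carries. It assumes Python on the Windows file server; the path is hypothetical.

```python
import os
import stat

# Hypothetical file on a server with Azure File Sync cloud tiering enabled
path = r"D:\Shares\Finance\report.xlsx"

st = os.stat(path)

# On Windows, st_file_attributes exposes the raw NTFS attribute flags.
# A tiered ("pointer") file is typically marked offline and is a reparse
# point, while st_size still reports the full logical size of the file.
is_offline = bool(st.st_file_attributes & stat.FILE_ATTRIBUTE_OFFLINE)
is_reparse = bool(st.st_file_attributes & stat.FILE_ATTRIBUTE_REPARSE_POINT)

print(f"offline: {is_offline}, reparse point: {is_reparse}, logical size: {st.st_size} bytes")
```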

Azure Blob Storage can't automatically tier and doesn't allow the use of NTFS ACLs, so while the cost is much better, it isn't going to be a fit for us.

We can buy cheap storage for on-prem and use scripts to move data into an archive tier, but that could get messy: data age doesn't necessarily correspond to how often it's used, and with no pointer left behind it will cause grief.
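
To show what I mean about age vs. actual use, here's a rough sketch of the script approach, assuming Python and a hypothetical archive share. It selects files by last-access time rather than modification time and drops a small text breadcrumb behind, which is only a crude stand-in for a real transparent pointer.

```python
import shutil
import time
from pathlib import Path

# Hypothetical locations - adjust to your own shares
SOURCE_ROOT = Path(r"D:\Shares\Projects")
ARCHIVE_ROOT = Path(r"\\archive-srv\cold\Projects")

# Tier files not *accessed* (not merely not modified) in the last 18 months
CUTOFF_SECONDS = 18 * 30 * 24 * 3600
now = time.time()

for path in SOURCE_ROOT.rglob("*"):
    if not path.is_file():
        continue

    st = path.stat()
    # st_atime reflects last access; st_mtime only reflects last write,
    # which is why an "old" file can still be in daily use.
    if now - st.st_atime < CUTOFF_SECONDS:
        continue

    dest = ARCHIVE_ROOT / path.relative_to(SOURCE_ROOT)
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.move(str(path), str(dest))

    # Crude breadcrumb so users can find the archived copy.
    # This is NOT a transparent pointer - opening it won't recall the file.
    path.with_suffix(path.suffix + ".archived.txt").write_text(
        f"Moved to {dest} on {time.strftime('%Y-%m-%d')}\n"
    )
```

Even this sketch shows the pain points: shutil.move won't preserve NTFS ACLs across volumes, and last-access timestamps are often disabled on NTFS, which is exactly why a scripted approach tends to get messy.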

So: has anyone out there in reddit land found a file data tiering solution that works really well for them?

no comments (yet)