File 148842970293.jpg - (95.73KB , 1000x1000 , diversey-raid-17-5-oz-aerosol-ant-and-roach-killer.jpg )
No. 108923 ID: 6057a8
So I may be looking for another NAS box. Currently, I have an 8 bay Synology ds1815+. It's really good for performance, ease of use and features, but shit starts to really fall apart when failures show up.

Keep in mind that my 8 drives are in RAID6; the following outlines my experience with Synology shenanigans.
1. When a drive dies, the box shuts down. There's no way to tell which drive has died; you have to unplug one drive at a time and hope the box boots. The box won't boot with a dead drive attached, so you pretty much have to go through them one by one (on a DIY Linux box you could just poll SMART yourself, see the sketch after this list).
2. If you have 2 dead drives, this gets very tricky, not to mention dangerous, as more than 2 drives dying = yer fucked bud.
3. Once you've replaced the bad drives, a rebuild takes 23 hours. I guess that's not bad for rebuilding a 16TB striped parity volume. Woohoo performance?
4. If the rebuild fails, your stuff may still be OK, but there's no way to access it. Maybe some ganoo lenux wizards could space magic in there, but from the Synology forums and from my experience, you HAVE to email Synology, and ONLY they are capable of remote-accessing the box to give you a chance to back-up your info.
5. Scrambling to find 11TB of spare HDD space to back up all the things isn't fun.
6. The feeling of being dependent on remote-access from a company to recover important files is not a good one.
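
For comparison, on a plain Linux box you can at least ask each drive directly instead of doing the unplug dance. A quick python sketch of the idea, assuming smartmontools is installed and you run it as root (not anything Synology exposes, just generic Linux):

#!/usr/bin/env python3
# Poll every drive's SMART health so you know exactly which one died,
# instead of the unplug-and-pray dance. Assumes smartmontools is
# installed and this runs as root on a generic Linux box.
import subprocess

def scan_drives():
    # `smartctl --scan` lists devices like "/dev/sda -d ata ..."
    out = subprocess.run(["smartctl", "--scan"],
                         capture_output=True, text=True).stdout
    return [line.split()[0] for line in out.splitlines() if line.strip()]

for dev in scan_drives():
    result = subprocess.run(["smartctl", "-H", dev],
                            capture_output=True, text=True)
    status = "OK" if "PASSED" in result.stdout else "CHECK ME"
    print(f"{dev}: {status}")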

At this point, I almost feel like buying a second ds1815+ so I can use it as a mirror. The odds of a simultaneous failure of two separate boxes, or of more than 4 drives at once, are pretty damn low even for someone as unlucky as me... But I really don't feel like giving Synology another 1250 syrup dongs.

tl;dr are there other options for an 8ish bay NAS box that doesn't lock you out or shut itself down at the drop of a hat? What are my options for building a hardware RAID computer that could give me comparable performance to the Synology box? I've built quite a few computers, but I'm still pretty new to RAID stuff. If it's about the same deal as building a computer, just with a hip new flavor of motherboard or something, I'd be down to give it a shot.

picture somewhat unrelated
>> No. 108926 ID: 6d8591
>>108923
Sounds like something serv can answer. He has a pretty decent setup, and I didn't hear him bitching too loudly when his 3TB Seagates did the 3TB Seagate thing and let the magic smoke out.

Hell, I'm curious about his setup as well, because I wanna get a NASbox going too.
>> No. 108928 ID: 4ce9b6
Yeah, what's the best way to go about this? I don't really need 16TB, but I would like to build something to back up all the computers to, which would need at least 6TB.
>> No. 108931 ID: a651e7
>>108928
For just 6TB, you could probably mirror UHCP drives. Price per failure might be higher, but you wouldn't need to rebuild anything, just replace and copypasta (quick sketch below). Setup cost would be nothing, as you can literally just stick them into your regular computer and make that drive network accessible.

Unless you want the frills and trimmings of a dedicated NASbox I guess.
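
The "replace and copypasta" part can just be a dumb nightly job, something like this python sketch. The mount points are made up and rsync has to be installed; adjust to taste:

#!/usr/bin/env python3
# Dumb nightly mirror: copy everything from the main drive to the
# second one. No rebuild, no parity; if a drive dies you swap it in
# and re-run this. Mount points below are hypothetical.
import subprocess, sys

SRC = "/mnt/main/"    # trailing slash: copy contents, not the dir itself
DST = "/mnt/mirror/"

# -a preserves perms/times, --delete makes DST an exact mirror of SRC
result = subprocess.run(["rsync", "-a", "--delete", SRC, DST])
sys.exit(result.returncode)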
>> No. 108937 ID: 632b3e
>>108931

Well, that's just it: there are many devices in here, and it'd be nice to have them all back up to another device. I have two mirrored 4TB drives in my desktop for storage already.
>> No. 108943 ID: 813f6b
I have about a hundred Synology boxes (RS814+ & RS815+) out "in the wild" at remote offices as cheapo local backup storage.
I have literally never seen any of them lock up when a drive fails. I actually just bought a simple DS416play for myself. No issues there so far.

Do other people report the same issues as you do? Cuz that seems very weird to me. There would be quite a bit of outrage I'd imagine if this were true.

There is no magic in rebuild speed btw. It's limited by disk speed, CPU and/or RAID controller performance. The "stronger" your machine and the "faster" your disks, the faster it'll rebuild. Synology, QNAP, ReadyNAS etc. all have similar rebuild times for similar devices.
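
To put rough numbers on that: a rebuild has to read each surviving disk end to end (in parallel), so the floor is roughly per-drive capacity divided by sustained throughput. Back-of-the-envelope sketch; the 23h figure is from the OP, the throughput number is my guess:

# OP's box: 8x 3TB in RAID6. Ideal rebuild floor is per-drive
# capacity / sustained throughput, since every drive is read once.
drive_bytes = 3e12   # 3 TB per drive
seq_mb_s = 150       # rough sustained rate for a 3TB spinner (guess)

ideal_hours = drive_bytes / (seq_mb_s * 1e6) / 3600
print(f"ideal floor: {ideal_hours:.1f} h")      # ~5.6 h

observed_hours = 23
effective_mb_s = drive_bytes / (observed_hours * 3600) / 1e6
print(f"observed: {effective_mb_s:.0f} MB/s")   # ~36 MB/s, so the box
                                                # is CPU/parity bound,
                                                # not disk bound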

PS: stay away from Intel Atom based NASes; there's an issue with some of them currently (the Atom C2000 clock bug) which can cause them to die at the drop of a hat due to Intel fucking up the silicon. Celerons etc. are fine.
>> No. 108945 ID: 19518e
File 148856934586.png - (54.98KB , 1454x362 , Capture.png )
>>108943
>I have literally never seen any of them lock up when a drive fails
That's not really what happened. One of my buddies and I both have ds1815+ boxes (both with 8 drives in RAID6); both machines turn off when a drive fails, and both require you to physically disconnect drives one by one until the machine boots: "disconnect drive 1 of 8, try to boot; reconnect 1, disconnect 2, try to boot" and so on.

What happened with my box is that 2 drives failed simultaneously. I managed to find the 2 dead drives and tried to fix the RAID6. The rebuild failed, and the box locked me out. I wasn't locked out of the Synology log-in; I was locked out of the volume, and when the rebuild was over (and had failed), this is what happened. Pic related shows the log entries from the Synology.

Note that "shared folder [Media] was deleted" made me flip a bit of a shit, as [Media] is the shared folder on the volume that contained 11TB of my everything.

Throughout the process of drives being lost, the contents of the volume were not accessible. Basically, if a drive fails, you can't access your stuff until you finish fixing it, whatever the process might be. If something in the rebuild fails, you're dependent on tech support to try and fix it. From the Synology forums, this seems to be the case with other units like mine.

Getting a second ds1815+ might be the answer. If one box has a dead drive, the files will still be accessible on the other box. If the rebuild fails, I can just delete the volume and make a new one, then transfer from the good box to the new volume.

The reliance on remote-access tech support is my main gripe. Even if it was complicated, I would much rather try and fix it myself.
>> No. 108946 ID: bc78c2
I run a white box NAS: eight 3TB drives in RAID6 for 18TB of usable storage. It's almost full and in need of new hardware.


I would check out the wiki here to start https://www.reddit.com/r/DataHoarder/wiki/index and move on from there. Prebuilt consumer NAS boxes like Synology, QNAP, or Drobo are pretty meh. You are paying a big premium for a fancy custom enclosure and software.

You can get better compute performance, better throughput, and more storage for less. The only downside is more power draw and time/effort.
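
Usable capacity per RAID level is easy to sanity-check before you buy. Quick python sketch (generic RAID math, nothing vendor-specific):

# Usable capacity for n identical drives at various RAID levels.
def raid_usable(n, size_tb, level):
    # drives lost to redundancy per level
    overhead = {"raid0": 0, "raid1": n - 1, "raid5": 1, "raid6": 2,
                "raid10": n // 2}
    return (n - overhead[level]) * size_tb

for level in ("raid0", "raid10", "raid5", "raid6"):
    print(level, raid_usable(8, 3, level), "TB")
# raid6 with 8x 3TB -> 18 TB usable, matching my box above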
>> No. 108947 ID: 776b28
>>108946
>Drives with Ok Reviews, Use At Your Own Risk

>Seagate ST3000DM001

LAWL.

So serv, how did this exact model of drive do by you? I don't think I'll be using any info from that guide.
>> No. 108950 ID: 19518e
>>108946
Thanks serv, that's pretty much exactly what I needed.
>> No. 109035 ID: f11f4d
File 148945579916.gif - (4.27KB , 582x282 , type I II error.gif )
I use some QC5000-ITX/PH board for my NAS. Picked it up for about $60 a few years ago. It's passively cooled, and last I checked it sat around 12-15 watts on my kill-a-watt while idle. Only had to get a case and a tiny power supply; already had spare RAM.

I have OpenMediaVault loaded on an old, obsolete 70GB mSATA drive that formerly sat in my laptop. It's hooked up to a USB3 adapter and sits inside the case.
2x 8TB HGST Ultrastar He8's in RAID1. Acquired these drives several months ago, doubling my former capacity.
2x 4TB HGST Ultrastars. These old drives used to be my main RAID1 array and served me well for years. Eventually I needed more room, so these are now set in RAID0 and used to store extra parity information via OMV's SnapRAID plugin. Essentially they now act like a weekly "backup" plus extra error checking, and they're typically powered down (rough sketch of the weekly job after this list). There is no issue if either of these drives fails.
1x 8TB Seagate Archive HDD. This is a cheapo and slow as fuck shingled drive that is perfect for offsite (protected from fire/theft) backups. I don't add much to my data horde anymore, so I only retrieve it to do an actual backup every 2 months or so.
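
Not my exact setup, but a weekly SnapRAID job like the one described above looks roughly like this python sketch. Assumes the snapraid CLI is already configured; the device names are made up:

#!/usr/bin/env python3
# Rough weekly SnapRAID job: update parity, scrub a slice of the
# array, then spin the parity drives back down. Assumes snapraid is
# configured in /etc/snapraid.conf; device names are hypothetical.
import subprocess

PARITY_DEVS = ["/dev/sdc", "/dev/sdd"]  # the old 4TB pair (made-up names)

subprocess.run(["snapraid", "sync"], check=True)              # update parity
subprocess.run(["snapraid", "scrub", "-p", "5"], check=True)  # verify 5% of data

for dev in PARITY_DEVS:
    subprocess.run(["hdparm", "-Y", dev])  # put the drive to sleep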

I will never ever bother with RAID5/6.

Also, here's a protip:
Shitty SATA cables can cause hard-to-identify issues with hard drives in RAID. If you have a hard drive that works fine in an enclosure but starts acting up when stuck in a RAID array, try a _different brand_ of cable.


Open Media Vault is pretty baller, but the whole permissions thing can be a pain in the dick to figure out when setting up SMB shares and the like. If you are new to this and a big dummy like me, expect to have to start from scratch a few times before you finally learn from trial and error. People who actually understand Linux (OMV is Debian-based) will probably have a much easier time.
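
The trial and error usually ends with ownership/mode fixes on the share directory. This is a generic Linux sketch, not anything OMV-specific; the share path and group name are made up:

#!/usr/bin/env python3
# One common fix for SMB permission pain: make the whole share tree
# owned by the group your SMB users are in, with group-writable,
# setgid directories. Path and group below are assumptions. Run as root.
import os, shutil

SHARE, GROUP = "/srv/media", "users"   # hypothetical share and group

for root, dirs, files in os.walk(SHARE):
    for name in dirs + files:
        path = os.path.join(root, name)
        shutil.chown(path, group=GROUP)
        # setgid (2) on dirs so new files inherit the group
        os.chmod(path, 0o2775 if name in dirs else 0o664)
shutil.chown(SHARE, group=GROUP)
os.chmod(SHARE, 0o2775)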
Other than that, OMV is pretty straightforward with everything being based on plugins and the web interface. I've never had to use the command line or any of that bullshit, thank the devs.
If anyone bothers with OMV, I'd be happy to shrug my shoulders at your questions and nod sympathetically, but I may not notice them for a bit.
>> No. 109038 ID: 19518e
File 148947301673.gif - (0.97MB , 290x231 , 133462710380.gif )
>>109035
>mfw reading that

For some reason I never considered 8 to 10TB UHCP drives. I think I was stuck in the "they're so expensive" mentality without considering the total cost of an 8+ drive RAID build and the dollars per GB. In addition, I would be using this kind of thing as a backup to the backup, so it would see less use, making drive replacement cost smaller relative to system use.

I was pricing out a RAID6 build and saving up, but now... Now I really don't think I'll have to bother. Just a cheap build and some UHCPs...
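
The dollars-per-TB math that changed my mind, roughly. All the prices and the extra-hardware figures here are my ballpark guesses, not quotes:

# Ballpark $/TB: big RAID6 build vs. a simple big-drive mirror.
raid6 = {"drives": 8, "size_tb": 3, "price": 150, "extra_hw": 900}   # NAS box
mirror = {"drives": 2, "size_tb": 8, "price": 360, "extra_hw": 200}  # cheap build

for name, cfg in (("8x3TB RAID6", raid6), ("2x8TB mirror", mirror)):
    parity = 2 if "RAID6" in name else cfg["drives"] - 1  # mirror keeps 1 copy
    usable = (cfg["drives"] - parity) * cfg["size_tb"]
    total = cfg["drives"] * cfg["price"] + cfg["extra_hw"]
    print(f"{name}: {usable} TB usable, ${total} total, ${total / usable:.0f}/TB")
# both land around $115-120/TB, but the mirror has way fewer moving parts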
>> No. 109039 ID: 5cad20
>>109038
UHCD*

Derp.
>> No. 109040 ID: f11f4d
File 148947856795.png - (59.38KB , 994x526 , -g- gets into a school district's system 2.png )
>>109038
Glad that was helpful. It was basically what I ran into when deciding how I wanted to set things up years ago. Did I want lots of cheap moving parts, or fewer moving parts for not all that much more cost-wise? After coming across a fuckton of "my RAID5/6 array is all fucked!" threads, the decision was easy: either I care about complete redundancy and use RAID1 or RAID10 (if I actually want to save thousands of movies or something), or I don't and use RAID0/linear.

I've basically just doubled my data storage a few times over the past decade and a half, starting with 2TB, moving to 4TB, and now finally 8TB. This has allowed me to easily re-purpose my previous old drives for extra redundancy.

Realistically, with a proper setup you can easily find 8TB drives that are cheaper than ultrastars. I personally like ultrastars because they've been the closest thing to a gold standard that is accessible to a consumer's wallet, and hey, I have a job and don't drink so that cost premium isn't an issue to me. But as long as you have redundancy and incremental backups (that can't get fucked by power surges, fire or theft), then you can easily get away with "perfectly fine" and even "adequate" when it comes to reliability. Another thing to factor in is the rate at which you acquire data. With my recent upgrade I should be set for another 4-5 years, at which point consumer-class phased holographic hypercube drives should allow me to double my storage again affordably.

The Ultrastar He8's were about $360 for a single drive.
Many same-capacity RAID-compatible drives should be around or under the $300 range. WD Reds were pretty solid last I checked.
Seagate's archive drive was about $220. Don't use this in a RAID array. They are meant for occasional use. Do incremental, compressed backups to these.
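
An incremental, compressed backup to one of those archive drives can be as simple as this python sketch. Paths are made up; assumes GNU tar and the archive drive mounted:

#!/usr/bin/env python3
# Incremental, compressed backup to a slow shingled archive drive:
# GNU tar's listed-incremental mode only stores what changed since
# the last run. Paths below are hypothetical.
import subprocess, time

SRC = "/mnt/main"              # what to back up (made-up path)
DST = "/mnt/archive"           # the shingled drive (made-up path)
SNAP = f"{DST}/backup.snar"    # tar's incremental state file
OUT = f"{DST}/backup-{time.strftime('%Y%m%d')}.tar.gz"

subprocess.run(
    ["tar", "--listed-incremental", SNAP, "-czf", OUT, SRC],
    check=True)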

You can use camelcamelcamel (or its browser extension, The Camelizer) to see historic prices for an item on Amazon. For hard drives, this tells you when to hold off buying. Stay away from oddly low prices and stick to the listings with hundreds of recent positive reviews.
>> No. 109047 ID: 19518e
>>109040
Originally I went with smaller drives in RAID6 because I knew I was going to use it a lot every day, so drive failure was absolutely a matter of "when" and "how many". Replacing a $150 drive and rebuilding, versus replacing a $350+ HDD every couple of months, didn't look too bad to me, even considering the slightly higher setup cost.

Plus I started out needing 10+TB of redundant storage when 6TB drives were a LOT more than what they are now, and 8+TB drives didn't even exist.

At this point, I'll keep using the Synology; I mean, I've already purchased it. I have spare 3TB drives already, and it still makes sense from a daily use/upkeep cost perspective.

But for offsite backups, you're right on all counts, and that's what I'll do. Time to do a bit of homework on drive prices and stay on the lookout for sales.

Thanks.