I look at it differently. Rather than rely on a data recovery service that generally costs $1000 and up, my strategy is to never need to go there.
My own backup strategy relies on simple JBOD disks that could be recovered by a data service if things ever got to that point. The only open questions are the backup plan, the frequency of backups, and so on. And I would not feel comfortable with multiple Drobos as the basis for my recovery plan: the technology itself becomes a potential single point of failure (although arguably a fairly remote one).
I mentioned I like the Synology disk expansion scheme better because it is based on Linux mdadm, but nothing like it is available in direct-attached storage for Windows/Mac, which has always been Drobo's bread-and-butter application and my own requirement.
I suspect the Synology scheme is the "sweet spot" in terms of overall reliability, reasonable simplicity, and potential recoverability. So strictly in terms of NAS storage I agree with you, and for many other reasons as well.
I actually do believe that Drobos are somewhat more susceptible to multiple drive failures, and I believe that is the root cause of most or all of the reported Drobo array failures. My opinion is based on assessing 4 or 5 years of Drobo support forum threads, an admittedly imperfect process, since we rarely have all the information and, by the nature of the beast, there are always uncertainties as to cause.
Here is the reason:
When a conventional RAID array rebuilds, the good drives are read sequentially and the new drive is rebuilt with sequential writes. That process is not only fast but "easier" on the drives.
Drobo's BeyondRAID stores data in 1GB zones. To provide total flexibility of drive expansion, those zones may be striped or mirrored, depending on the sizes and number of drives and the upgrade history.
The result is that array rebuilds are more or less "random access". And as was reported here, depending on the size of the drives and the Drobo model, a single-disk rebuild can take 24 to 48 hours. During that time all the drives are thrashing at up to a 100% duty cycle in a more or less random fashion (as opposed to the sequential rebuild of conventional RAID and the Linux mdadm RAID used by NAS boxes).
(Older Drobos like the V2 rebuild at far less than a 100% disk duty cycle; they are processor-bound, and that is why their rebuild times are so excessive. But newer Drobos like my Drobo S may come very close to a 100% disk duty cycle.)
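To put rough numbers on the sequential-versus-random difference, here is a back-of-the-envelope sketch in Python. The throughput figures are assumptions I chose for illustration, not measured Drobo or drive numbers:

```python
# Back-of-the-envelope rebuild-time estimate.
# The throughput numbers below are illustrative assumptions,
# not measured figures for any particular Drobo or drive.

def rebuild_hours(data_tb, throughput_mb_s):
    """Hours to copy data_tb terabytes at a sustained throughput (MB/s)."""
    total_mb = data_tb * 1_000_000  # 1 TB = 1,000,000 MB (decimal)
    return total_mb / throughput_mb_s / 3600

data_tb = 3  # roughly the 3TB case described below

# Conventional RAID: sequential reads/writes, assume ~100 MB/s sustained.
sequential = rebuild_hours(data_tb, 100)

# Zone-based rebuild: more or less random access, assume ~15 MB/s effective.
random_access = rebuild_hours(data_tb, 15)

print(f"sequential: ~{sequential:.0f} h, random: ~{random_access:.0f} h")
# prints: sequential: ~8 h, random: ~56 h
```

Under those assumed numbers, the random-access pattern alone is enough to turn a single-workday rebuild into a multi-day one, which is in the same ballpark as the 24-48 hour reports above.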
Typically consumers do not run their arrays at or near 100% duty cycle for long periods of time. The exceptions would be the initial load-up of a lot of data or perhaps when mastering new backup copies. But how often is that done?
So now consider a typical situation where a storage array with single disk redundancy is lightly used (as it would be for a typical photographer). You have up to 4 or 5 lightly used drives that are either aging or brand new (and unproven and therefore susceptible to "infant mortality").
One drive fails. You replace the drive. The Drobo launches itself into a 24-48 hour rebuild. During that time a subsequent drive failure will kill the array.
That rebuild is a great stress test for finding weak sisters, but that is not what we want to be doing at that particular moment. It is likely that the rebuild will stress the drives far beyond anything they have ever seen in their lives, and in many cases the drives are quite old, especially since a lot of people use Drobos to conglomerate the capacity of a bunch of old leftover drives.
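The exposure during that window can be roughed out with a simple probability sketch. The annualized failure rate (AFR), drive count, and window length below are assumptions for illustration; real failures during a stressful rebuild are correlated with the stress itself, so this estimate is best treated as a floor:

```python
# Rough odds that a second drive fails during the rebuild window.
# The AFR and window length are illustrative assumptions; failures
# during a stressful rebuild are correlated, so the true risk is higher.

def second_failure_prob(n_remaining, afr, window_hours):
    """Probability that at least one of n_remaining drives fails during
    the window, assuming independent failures at a constant annualized
    failure rate (afr)."""
    hourly = afr / 8760  # hours in a year
    p_survive_one = (1 - hourly) ** window_hours
    return 1 - p_survive_one ** n_remaining

# 4 remaining drives, an assumed 5% AFR, and a 48-hour rebuild:
p = second_failure_prob(4, 0.05, 48)
print(f"~{p:.2%} chance of losing a second drive mid-rebuild")
```

Even under these idealized independence assumptions the chance is on the order of a tenth of a percent per rebuild, and it scales with the window length, which is exactly why the 24-48 hour (or 64-hour) rebuild times matter.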
I have come to the conclusion that there is no "silver bullet", no truly "fail safe" storage array. I am not suggesting Drobo multi-drive failures are common, or necessarily more likely than other forms of failure in the aggregate. I just point out that the above is, at least in principle, a slight negative for the technology, in addition to the practical issues related to doing array upgrades.
My old Drobo V2 required about 64 hours to rebuild a single drive at a time when I had about 3TB of data stored on it. I saw those results 3 times, and they are consistent with other user reports. I quickly figured out that it would be better to just kill the array and restore from a backup rather than upgrade more than one drive in a short period of time. That is why I do not believe the Drobo V2 is "fit for purpose" with modern drive capacities, or even with what was available when it was introduced.
I believe my Drobo S only required about 12 hours to rebuild a similar amount of data the one time I did a disk upgrade. That is one reason I would never own a Drobo V2 again, in addition to the fact that I think 60-some hours is too long to be vulnerable.
Newer Drobos, released within the last 6 months, are marketed as "very fast", "far faster" than older Drobos. I have not studied them in depth, and since they are so new I have not seen any real-world user tests yet. So this whole issue is a moving target that is being resolved as Drobo puts better processors in their boxes.
There is another quirk of the Drobo: Drobos are very particular about drives. They appear to fail drives that other devices (and Windows/Mac OSes, with internally attached drives) will accept.
The official Drobo position on this is that they fail drives before other devices recognize impending failure, thus minimizing the chance of concurrent multiple drive failures or data loss from "flaky" drives.
That is all well and good but does that mean that the Drobo might also fail a 2nd drive during a rebuild, even if the 2nd drive is capable of performing more than long enough to complete the rebuild and get back to a redundant condition?
I do not know the answer, but I believe that is the core question that no one asks, except the few who have delved deeply into at least the user experiences reported in Drobo's support forum. In my mind this is the $64 million Drobo question.
(The best solution to the above is to use a 5+ bay Drobo in Dual Redundancy mode. I don't have any specific knowledge of the failure rate of dual-redundancy Drobo arrays; I just assume them to be significantly safer than single-redundancy arrays.)
This is truly a double-edged sword, but the above is a cynical and arguably worst-case assessment of things.
If you spend some time studying user reviews of all the major consumer RAID devices, you will find they ALL have legions of unhappy users who relied on them as fail-safe devices and then lost their arrays. This is particularly true of direct-attach devices used with consumer drives.
And, of course, not all array failures can be recovered; the data has to be there, and that alone calls into question the idea of relying on data recovery as a selection criterion.
People even manage to screw up Drobos through simple user error, but surely the Drobo is the most fool-proof RAID device ever made for those who are not particularly RAID savvy. And the problem is that no matter how much research you do on the net, most of us simply do not have a lot of real-world experience with all the possible flavors of RAID failure.
A lot of RAID array losses are simply due to operator error of some sort. I think you have to factor in the big picture of all the possible failures before worrying too much about the Drobo issue I mentioned above, or any other single issue viewed in isolation. And in the end, the best solution is redundancy, not reliance on data recovery.
I can buy about 24TB of disk drives for the cost of a single data recovery incident.
And that is more or less what I did, even though my Drobo does not currently hold much, if any, critical "first line" working data. My decision there had nothing to do with Drobo reliability; in fact, I traded off first-line reliability for other features I deemed more important, though I am always re-evaluating that decision. I trust the Drobo a lot more than those JBOD drives.