I have about 335,000 images cataloged in iMatch. If I had to guess, about 50,000 might be edited, but most of those belong to a niche subject shot in what amounts to a product-shooting workflow, where everything shot needs to be edited, output and archived.
We both shoot wildlife, which I think is peculiarly difficult to catalog and manage. For example, cataloging the specks on original NEFs that represent various species of sparrows and other difficult-to-attribute (at least for me) birds. And it requires categorizing each image, image by image. Sports might be similar if one were cataloging images by participant.
I'm a lifelong software developer by trade, so I am very cognizant of issues like data corruption and how to back up and maintain large databases. My personal imaging database and image archive are orders of magnitude larger than those of the commercial app I work with, which is targeted at large enterprises but is mostly text based, so its databases are puny in comparison.
(I find it ironic that the processing horsepower required by amateur photographers just to edit a NEF file and store it surpasses the needs of many enterprise level systems, and certainly any personal software development suite I've ever worked with)
I recently revisited the cataloging issue because I am tired of waiting for iMatch to release an update. On the other hand, Camera Bits has been talking about a catalog for about as long as iMatch's developer has talked about his "Next Generation" version. That is appropriately named because I may not live to see it; maybe my kids will, assuming he outlives me. It's not clear to me who will win the horse race those two are in right now.
I like iMatch and it probably handles large databases as well as any cataloging app could. iMatch has always considered large scales to be its forte. It claims to have customers with 500K images or more in one database. I believe that.
iMatch has two major problems for me...
1. It has no built-in versioning, although it supports (VB) scripting so it is possible to roll your own versioning setup, more or less.
2. It has no support for soft cropping, which is a PhotoMechanic feature I heavily rely on. Just the soft crop feature alone, assuming it is carried over to their cataloging app, might be worth the pain of attempting a migration.
Another potential problem with iMatch is that it has no support for category or keyword synonyms, which could be critical for someone maintaining a stock portfolio.
To solve the versioning problem, I've written my own scripts to run through my database and aggregate images from the same NEF file. I've done this based on either shot time (including sub-seconds to segregate Ch sequences) or a (hopefully) unique image number that should be embedded in the original and derivative images.
Neither method is perfect and this is a work in progress. Success is heavily dependent on one's personal workflow and one's dedication to precisely following it (file naming conventions in particular!).
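For what it's worth, the filename-based half of that approach can be sketched in a few lines of Python. The naming convention here (a four-to-five digit frame number embedded in both the original NEF and every derivative) is purely hypothetical; the regex would have to be adapted to your own conventions:

```python
import re
from collections import defaultdict

# Purely hypothetical naming convention: a 4-5 digit frame number embedded
# in both the original NEF and every derivative, e.g. DSC_1234.NEF and
# DSC_1234-crop.TIF. Adjust the regex to match your own file names.
FRAME_RE = re.compile(r"(\d{4,5})")

def group_versions(filenames):
    """Group filenames that share an embedded frame number.

    Returns a dict mapping frame number -> sorted filenames; files with
    no recognizable number collect under None for manual review.
    """
    groups = defaultdict(list)
    for name in filenames:
        m = FRAME_RE.search(name)
        groups[m.group(1) if m else None].append(name)
    return {k: sorted(v) for k, v in groups.items()}

files = ["DSC_1234.NEF", "DSC_1234-crop.TIF", "DSC_1235.NEF", "scan.jpg"]
print(group_versions(files))
```

The shot-time variant is the same idea, just keyed on capture time (with sub-seconds) instead of the filename, and it inherits all the same workflow-discipline caveats.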
iMatch has a considerable repository of user-written scripts. Some of those are targeted at the versioning problem but none of them worked for me and my workflow, and what I wanted versioning to do for me.
I'm not sure if iMatch's next version (if it ever happens) will support soft crops and that is the major reason I'm watching what CameraBits comes up with, if *they* ever get their product out. But iMatch is here and now, and it is very stable (for me at least, and I do not see anyone complaining about their database integrity).
iMatch also interoperates well with PM and CNX2, although I do not do any metadata work in CNX2. I am not sure if PM, CNX2 and iMatch can all interoperate together at the metadata level because CNX2 does not support XMP sidecars (that is a long story).
I looked at IDImager several times over the years but every time I start reading their support forum I see people complaining about database corruption problems and stability. That scared me off because I know how much blood and sweat I put into my iMatch database and how difficult it can be to even identify corruption in such a large database.
Above all else I don't ever want to have to rebuild my catalog and re-categorize my images. Ever.
I don't recall IDImager's versioning scheme, the flexibility and how it handles images across folder structures, etc. I'd have to download a fresh eval to do that. IDImager also has its own built in user written scripting interface but I do not know the extent or limitations of that scripting environment.
I'm suggesting here that if the ability to version is important to you then the ability to roll your own scripts could be crucial regardless of what the stock app offers. It's worth researching that aspect of an image cataloger. LR offers no scripting interface so if you don't like what it does you have no alternative. This assumes one already has either scripting skills or interest in learning how to script.
My iMatch database is 10GB and growing. The size of the DB is mainly dependent on the user-defined size of the image thumbnails. Mine are 400 pixels but you can make them whatever size you want. To use IDImager I think I would need to use MS SQLServer and I had a problem with that because IDImager needs a specific version of SQLServer. My commercial software work has its own demands for certain SQLServer versions and in the past they were different. That is a peculiar problem for me that is actually going away shortly and would likely not be a problem for other users.
SQLServer licensing is not a problem because SQLServer Express is free, and as far as I know it will mechanically handle and scale to the requirements, and it appears to conform to MS's licensing restrictions (which are surprisingly broad for MS). I assume IDImager can deal with SQLServer Express.
One of the nice things about IDImager is that because it uses a standard SQL database it is possible (and easy if you have the skills) to query the database outside the app. iMatch uses a proprietary database; I'm not even sure what engine he uses.
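As a sketch of what "query the database outside the app" looks like, here is a Python example. The table and column names are invented for illustration, and an in-memory SQLite database stands in for the catalog; IDImager's real schema and SQLServer connection details will differ:

```python
import sqlite3

# Invented schema for illustration only -- IDImager's actual tables and
# columns (and SQLServer connection setup) will differ. An in-memory
# SQLite database stands in for the catalog here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (id INTEGER, filename TEXT, keyword TEXT)")
conn.executemany("INSERT INTO images VALUES (?, ?, ?)",
                 [(1, "DSC_0001.NEF", "sparrow"),
                  (2, "DSC_0002.NEF", "warbler"),
                  (3, "DSC_0003.NEF", "sparrow")])

# The point: any ad hoc question becomes one SQL statement, no app required.
rows = conn.execute(
    "SELECT filename FROM images WHERE keyword = ? ORDER BY filename",
    ("sparrow",)).fetchall()
print([r[0] for r in rows])
```

That kind of ad hoc reporting is simply impossible against a closed proprietary database.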
I looked at Lightroom recently. In past versions of LR I had huge concerns about its scalability. LR 4.1 seems to be better in that regard, although I have not researched real world experiences of people scaling it up to the 100K image level and beyond.
The big problem I saw with LR, and it was a total deal breaker for me, was not how to change my workflow for future shoots but how to deal with my edited CNX2 archives. LR is simply incompatible with what Nikon is doing with these raw files, and I don't see how any catalog app could deal with that and also do what LR does.
The problem, of course, is that a CNX2-centric workflow typically relies on the edited embedded JPG in the Nikon raw file, whereas LR must ignore that embedded JPG because it keeps its own edits in XMP sidecars. As a result LR is totally unaware of the CNX2 edits and does not expose the edited JPG embedded in the raw file.
The solution is to output all the edited NEFs into TIF or PSD files (or maybe a derivative JPG) just to represent the CNX2 edits. And stack the original and derivative together. If that is your normal workflow then you don't have the problem I do. But I am unwilling to go back and do that, nor do I want to consume the disk space required to store 50-100K new TIFs or PSDs. I consider the Nikon approach (edited NEFs) to be a uniquely efficient way to handle large numbers of raw files.
(I only create TIFs or PSD files if I need to do something I can't do in CNX2 and I avoid that wherever possible.)
I was also not satisfied with how LR deals with "versioning". Yes, it has image stacking but it only stacks related images in the same folder. It cannot cross folders to stack (and therefore version) images. That was deal breaker #2 for me, because I am unwilling to collapse my archives into a folder structure where all the original and derivative images reside in one folder. If I were starting from scratch I might view that differently but I am 335K images and 5200 folders away from "starting from scratch".
Lightroom also has a problem with automatically stacking based on shot date/time because it is apparently unaware of shot time sub-seconds which are critical for differentiating high frame rate images. iMatch also has some limitations in that area (within the stock app) although with custom scripting the sub-second tag is easily and reliably accessed. I assume there is some issue with industry-wide standardization of sub-second metadata tags but never researched that point.
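For anyone scripting around this: the EXIF tags involved are DateTimeOriginal and SubSecTimeOriginal, and combining them is straightforward once you have the raw tag strings (how you extract them, e.g. via ExifTool, is up to you). A minimal Python sketch:

```python
from datetime import datetime

def full_capture_time(date_time_original, subsec_original="0"):
    """Combine EXIF DateTimeOriginal ("YYYY:MM:DD HH:MM:SS") with the
    SubSecTimeOriginal fraction into one sortable datetime."""
    base = datetime.strptime(date_time_original, "%Y:%m:%d %H:%M:%S")
    # SubSecTimeOriginal is a decimal fraction of a second, e.g. "12" = 0.12 s
    s = subsec_original.strip()
    frac = float("0." + s) if s else 0.0
    return base.replace(microsecond=int(round(frac * 1_000_000)))

# Two frames from a high-speed burst share the same whole second; only the
# sub-second tag puts them in shooting order.
a = full_capture_time("2012:06:01 09:15:42", "12")
b = full_capture_time("2012:06:01 09:15:42", "25")
print(a < b)  # -> True
```

An app that sorts or auto-stacks on whole seconds alone will scramble exactly these bursts, which is the LR limitation described above.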
One of the great things about iMatch is its optional "Offline Cache". It allows you to maintain a small lower resolution and lower quality copy of all your images in a reasonably compact form. I maintain an offline cache of my 335,000 cataloged images in the form of 1200-1400 pixel wide "medium resolution" JPGs, which consumes about 38GB of disk space, including the iMatch database. In the modern world that easily fits in a tiny corner of any laptop hard drive.
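A quick back-of-envelope on those numbers, assuming the 38GB figure covers all the cache JPGs plus the database:

```python
# Back-of-envelope: average cache footprint per image, assuming the 38GB
# covers all 335,000 medium-resolution JPGs plus the iMatch database.
images = 335_000
total_bytes = 38 * 1024**3
per_image_kb = total_bytes / images / 1024
print(round(per_image_kb))  # -> 119 (KB per image, roughly)
```

So each 1200-1400 pixel JPG averages on the order of 100KB, which is why the whole cache fits so comfortably on a laptop drive.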
In this age of terabyte laptop drives it would be viable to cache larger and higher quality images. Those are user-definable settings. Some day I may go back and revisit my own settings, which were chosen in the era of 100GB hard drives.
The ability to go on the road and still have access to at least those downsized cache images is a huge benefit for me, given my lifestyle. I could never do that with my main image archive, which is pushing 3TB and sits on a Win7 Pro file server.
I don't think LR offers that "offline cache" capability. I don't recall if IDImager does that either.
I am and have always been laptop based. For that the iMatch Offline Cache is critical. For a desktop bound environment on a wired network offline caching would not be an important feature and would probably not be used at all.
An important consideration could be licensing (and activation) issues. I figured out long ago that NOTHING new should ever be done in an imaging catalog app without first doing it in a test environment. It is too easy to hose a database or even the main image archive.
In many cases that requires not just a test database but a totally dedicated test environment run on a separate machine. Many "new" things require app setting changes I don't want to do on my "production" system. It's too easy to forget something and not change it back. And all this software needs to inter-operate in complicated ways.
iMatch is sold with a licensing agreement that allows the user to spawn off as many copies as he or she sees fit, as long as only one user is using the product. So I can do whatever I need to do - including setting up multiple test environments in virtual machines. Not so with Lightroom where I think I would need a second license just to set up a basic test environment. That can get expensive.
(I have a huge preference for software that does not require activation, or if it does that it allows for what I consider to be basic requirements for proper testing. And that requires multiple installations on multiple machines. That is increasingly lacking in the modern post-XP world. And partly because I know, from real world experience, that many software companies play fast and loose with their activation policies. I had my own horror story with Adobe a few years ago where they tried very hard to back out of a $1000 perpetual license agreement)
I've gone into all this because you and I have a lot in common in terms of subject matter (wildlife), which I think has particular requirements for cataloging. And also the scale of our image archives. I am very reluctant to heavily cull my wildlife images, for a number of reasons. All relating to the fact that as I learn more about birds I find things in my old images, the importance of which I just never previously understood.
You should research heavily all the issues related to scale, including speeds and feeds. Even with modern hardware speeds and feeds there is a huge amount of blood and sweat shed cataloging a large database, especially a large wildlife database. An app that looks good with a quick test of 10K images may not look so good with 100K+ images. In some cases it may not be possible to scale up an eval in the time period usually given, unless you have the time and a spare machine to dedicate to the task 24/7 until it's done.
If you happen to have a lot of old D2H (but not D2Hs) images you have a peculiar and very difficult problem I could go into in more detail.