That is a good question.... I don't know what their motivation is (maybe processing time?), but 'lossless' is not a marketing claim; it describes a well-known class of computer algorithms. Here is a post from MotoMannequin, from May 8th, 2012, which explains the process nicely:
"....PSA: Uncompressed NEF is for those of the mindset who would try to compare the image quality of two different brands of CF card (yes, these people exist). There actually is a free lunch with lossless compression: the result is exactly the same data as the uncompressed file, just as when you ZIP and then unZIP a file, the unzipped one is identical to the original. Therefore, always take advantage of lossless compression.
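The ZIP analogy is easy to demonstrate. Here is a quick sketch in Python, using the standard zlib module as a stand-in for whatever algorithm the camera firmware actually uses (the sample data here is made up for illustration):

```python
import zlib

# Stand-in for raw sensor data; a repetitive pattern compresses well.
original = b"RAW PIXEL DATA " * 1000

compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

# Lossless: the round trip reproduces the input byte for byte.
assert restored == original
print(f"{len(original)} bytes -> {len(compressed)} bytes, round trip identical")
```

The compressed file is smaller, yet decompressing it gives back every byte of the original, which is the whole point of "lossless."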
Lossless compression takes advantage of repeating patterns in data. The simplest and easiest to understand is run-length encoding (RLE). Imagine I'm trying to compress a B&W image, using "B" for a black pixel and "W" for a white pixel. I have a row that looks like this:
WWWWWWWWWWWWBWWWWWWWWWWWWBBBWWWWWWWWWWWWWWWWWWWWWWWWBWWWWWWWWWWWWWW
which RLE encodes as:
12W1B12W3B24W1B14W
...meaning 12 Ws, followed by 1 B, followed by 12 Ws, and so on. This compresses 67 bytes down to 18 without any loss of data: I can fully recover the original string from the compressed one. Of course, some data doesn't compress so well. A string with no long runs of repeated pixels encodes to something the same length as the original, or even longer.
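The RLE scheme just described can be sketched in a few lines of Python (the helper names are mine for illustration, not anything a camera actually runs):

```python
import re

def rle_encode(s: str) -> str:
    # Collapse each run of identical characters into "<count><char>".
    return "".join(f"{len(m.group(0))}{m.group(1)}"
                   for m in re.finditer(r"(.)\1*", s))

def rle_decode(encoded: str) -> str:
    # Expand each "<count><char>" token back into a run of characters.
    return "".join(ch * int(count)
                   for count, ch in re.findall(r"(\d+)(\D)", encoded))

row = "W" * 12 + "B" + "W" * 12 + "B" * 3 + "W" * 24 + "B" + "W" * 14
encoded = rle_encode(row)
print(encoded)                       # 12W1B12W3B24W1B14W
assert rle_decode(encoded) == row    # fully recoverable: lossless
print(f"{len(row)} -> {len(encoded)} characters")
```

The assert is the "free lunch" in code form: decoding the 18-character string reproduces the 67-character row exactly.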
Data compression algorithms talk about the amount of "entropy" in the data, which reflects how many repeating patterns it contains and therefore how far it can be compressed. When you try to ZIP a JPEG you will find it hardly compresses at all: even though JPEG uses a different compression algorithm than ZIP, once the data is compressed it has very low entropy, and ZIP can't find much left to squeeze out. What ZIP does is far more clever and complicated than RLE, but you get the idea.
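That "nothing left to squeeze" effect is easy to reproduce. Another small Python sketch with the standard zlib module (my choice of stand-in; random bytes mimic the high entropy of already-compressed JPEG data):

```python
import os
import zlib

# High-redundancy data: the same 13-byte pattern repeated many times.
repetitive = b"WWWWWWWWWWWWB" * 500
# High-entropy data: random bytes, statistically similar to the output
# of a compressor such as JPEG.
random_ish = os.urandom(len(repetitive))

print("repetitive:", len(repetitive), "->", len(zlib.compress(repetitive)))
print("random-ish:", len(random_ish), "->", len(zlib.compress(random_ish)))
# The repetitive data shrinks dramatically; the random data barely changes
# (it can even grow by a few header bytes).
```

Same compressor, wildly different results, purely because of how much redundancy the input contains.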
Lossless means you can recover the original data 100%, bit for bit. The only theoretical downside would be the additional processing time to do the compression, but it turns out that writing the bytes to flash takes longer than compressing them, so you actually get a net gain in processing time when using lossless.
I guess the other possible downside is that a single corrupted byte in a compressed file might render the entire file unusable, while a corrupted byte in an uncompressed file might only corrupt one pixel. In practice, this is a vanishingly rare issue, and one I'm personally not concerned about.
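You can see that fragility with the same zlib stand-in: flip a single bit in the middle of the compressed stream, and the decompressor typically rejects the whole thing, whereas the same flip in uncompressed data would touch only one byte.

```python
import zlib

original = b"raw image data " * 200
compressed = bytearray(zlib.compress(original))

# Flip one bit in the middle of the compressed stream.
compressed[len(compressed) // 2] ^= 0x01

try:
    zlib.decompress(bytes(compressed))
    print("stream still decoded (the built-in checksum would flag any change)")
except zlib.error as e:
    print("entire stream unusable:", e)
```

One flipped bit out of thousands of bytes, and the whole file is typically gone, which is exactly the corruption-amplification tradeoff described above.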