Thanks a lot for the info, Shaknum...
Maybe resolution is the main obstacle here. I was hoping this would make for something portable and cheap, but in fact the resolution here would be not 1/2 but 1/4 of what standard two-camera DIY designs get. First, halve it because you're capturing two images in one shot; then halve it again because each image covers two pages instead of one. And it's probably even less than that, since getting the whole book into frame means spending some pixels on what's beyond the edges of the book, either on the sides or on the top and bottom.
(In fact, I've assumed it would be desirable to include some of the area around the book in the images; I imagine this could help get accurate information on page dimensions and the location of content on the pages, which in turn could help with automatic layout recognition: identifying section headings, page numbers, etc.)
So a single 18MP camera would give you roughly 18/4 = 4.5MP per page before the margin losses, which is similar to what you could get with two 3MP cameras in current designs. Maybe you'd really need a Leaf Aptus to do this properly with one camera.
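To make the arithmetic above concrete, here's a back-of-the-envelope sketch. The split factors and the 10% margin figure are my own assumptions for illustration, not measurements from any actual rig:

```python
# Rough effective per-page megapixels for a single-camera stereo design
# vs. the usual two-camera design.
# Assumptions (mine): the sensor is split in half for the stereo pair,
# each image covers a two-page spread, and ~10% of each frame is margin
# beyond the book's edges.

def per_page_megapixels(sensor_mp, stereo_split=2, pages_per_image=2,
                        margin_fraction=0.10):
    """Megapixels actually landing on a single page."""
    usable = sensor_mp * (1.0 - margin_fraction)
    return usable / (stereo_split * pages_per_image)

# Single 18MP camera in the stereo design: ~4 MP per page.
single = per_page_megapixels(18)

# Standard design: one 3MP camera per page, same margin loss: ~2.7 MP.
two_cam = per_page_megapixels(3, stereo_split=1, pages_per_image=1)

print(round(single, 2), round(two_cam, 2))
```

So under these assumptions the two setups end up in the same ballpark, which is why the single-camera route doesn't buy you much.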
I didn't know about the 3D stereo lenses. I had a look at some Web sites that sell them, and they look really nice. Obviously only for professional cameras, but I'm guessing they'd work better for both focusing and 3D image processing than the DIY post-lens periscope I suggested.
So maybe the least expensive way to do this is in fact with two good consumer cameras. Decapod uses two 14.7MP Canon G10 cameras and suggests 12MP as a minimum resolution. http://wiki.fluidproject.org/display/fluid/Hardware+Design
Looks like a great project! (Regarding their software: OK, this should really go in a Software forum thread, but I wonder how it differs from ScanTailor. Setting aside programming language and UI, what are the major differences in the algorithms and user workflows? Could some synergy between the two codebases be achieved? Maybe ScanTailor could offer an option to include stereo/3D data in its algorithms, so that it could be taken advantage of when appropriate?)