It is impressive. But I have to wonder: what does this get you above and beyond taking a short video of an object and then allowing the viewer to "scrub" back and forth within the video?
One thing illustrated in the demos is that you can zoom into detail in the Photosynth images in a way you couldn't with a video.
I imagine there could eventually be better interactivity with the underlying 3D model than video could provide. Certain surfaces could be links to more information or to another photosynth, for example. It reminds me a bit of some of the VRML demos from the 90s, but without the plugins, and working backwards from photos instead of forwards from models.
Photosynth collages can also be created by stitching together many disparate photos. So you could have a 3D, interactive representation of, say, Trafalgar Square, built from photos available on Flickr.