I've tried both the DK1 and the vive, and the vive is basically at the point where games are very playable.
Some people complain about the "screen-door" effect: the pixels are so large that it's like you are looking through a screen door. But in my experience that's not a problem. If you stop and concentrate you can see the pixels, and they are quite large, but with game-like graphics your brain is more than happy to fill in the gaps between pixels.
This doesn't translate to text. Text needs to be massive before you can read it. There is no way you can have multiple monitors at desktop distances. The best case is a single, low-resolution monitor (about 720p) so close to your face (or so far away and massive) that it takes up your entire field of view. To see any extra monitors, you would have to rotate your entire head.
When the 'in-world' resolution can match 1920x1200 at 2 ft, that's when VR starts to compete with a desktop monitor.
Sitting about 2 ft from my 24" monitor at work, I'd estimate that it would take 9 such monitors to get to around the same field of view in my HTC Vive. So you're talking about 5760x3600 pixels there.
Maybe we're going to need iris tracking as an optimization? That's a lot of pixels to push at 90 Hz.
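The "9 monitors" estimate above can be sanity-checked with some back-of-envelope arithmetic. This sketch assumes a 3x3 grid of 1920x1200 panels filling the headset's field of view, refreshed at 90 Hz:

```python
# Rough arithmetic for the "9 monitors" estimate.
# Assumption: a 3x3 grid of 1920x1200 panels fills the headset's FOV.
cols, rows = 3, 3
mon_w, mon_h = 1920, 1200

total_w = cols * mon_w            # 5760
total_h = rows * mon_h            # 3600
pixels_per_frame = total_w * total_h

refresh_hz = 90
pixels_per_second = pixels_per_frame * refresh_hz

print(f"{total_w}x{total_h} = {pixels_per_frame:,} pixels per frame")
print(f"at {refresh_hz} Hz: {pixels_per_second / 1e9:.2f} billion pixels/sec")
```

That's roughly 1.9 billion pixels per second just for the raw framebuffer, before any rendering cost per pixel, which is why eye/iris tracking with foveated rendering looks attractive.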
Agreed that pixel density is an issue, but it can be worked around via supersampling. I just saw that the folks at /r/vive discovered a global supersampling setting. Someone posted this before and after:
Of course, the issue now is GPU horsepower. Rendering two viewports at 1200x1080 at 90fps is no easy task. Supersampling at 2x means rendering 2400x2160 per eye at 90fps. Anything short of a 1080 is going to have performance issues at this level of supersampling, and even then it depends on how complex the game environment is and its settings. I messed around a bit with BigScreen and I can certainly see the potential for VR as a monitor replacement for $some_tasks. I have yet to see supersampling settings for it, or whether these global settings affect 2D screen projections in 3D space.
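To put numbers on that cost, here is a minimal sketch using the figures from the comment above (1200x1080 per eye, two eyes, 90 fps). Note that a "2x" supersampling factor here scales each axis, so the pixel count quadruples:

```python
# Per-eye render cost with and without supersampling, using the
# figures quoted above: 1200x1080 per eye, 2 eyes, 90 fps.
eye_w, eye_h, eyes, fps = 1200, 1080, 2, 90

def pixels_per_second(scale):
    # A supersampling factor of 2 doubles each axis, i.e. 4x the pixels.
    return (eye_w * scale) * (eye_h * scale) * eyes * fps

native = pixels_per_second(1)
ss2x = pixels_per_second(2)

print(f"native: {native / 1e6:.0f} Mpix/s")
print(f"2x supersampled: {ss2x / 1e6:.0f} Mpix/s ({ss2x // native}x the fill cost)")
```

So "2x supersampling" quietly asks the GPU for four times the fill work, which is why only the top-end cards keep up.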
So yes, current densities might be enough to get by without fuzziness and hard-to-read text if you have enough GPU muscle. I don't think we've figured out all the tricks with first-gen VR yet. There are going to be a lot of little surprises like these, I suspect. At the very least we know AMD and Nvidia are supporting performance-enhancing features that no one has really implemented yet.[1] I suspect the Vive of June 2016 is going to be a very different experience compared to the Vive of December 2016. The same way console launch games don't look as good as games released towards the end of the console's life.
[1] Yesterday one of the PoolNationVR devs said he got a 20% performance increase using Nvidia's Multi-Res Shading. This change goes live in July. A 20% performance increase at no cost? Crazy.
I wonder how many good tasks there are suited to the current pixel density. You can't read text across a 12 virtual-monitor array, but there must be other things that work well.
Maybe it could replace a wall of monitors displaying surveillance footage in a security room. You could even do impossible things: enlarge the screens showing lots of movement, and shrink or fade out the ones without any motion detected. Show each video feed overlaid on a map of the premises.
I have no idea exactly what daytraders are looking at, but I know it's a stereotype for them to have tons of monitors. You could spawn a few dozen virtual screens displaying charts, have indicators when a chart behind you changes within some rules you've defined (something similar to the "you're being shot from behind" indicator in shooter video games), maybe with different colors, shapes and scales depending on the rules you set.
There's probably a ton of potential being passed up just because these don't do text very well yet.
That probably is the way it works. But the problem is that the resolution of the VR devices at the moment is 2160 x 1200, i.e. the equivalent of a single monitor.
This single panel is responsible for filling your entire field of view, so the pixel density of any 'virtual monitor' must be quite low; it won't work 'like you're sitting at a desk of monitors'.
Imagine the inverse. Imagine you've got a single curved monitor at your desk that fills your entire field of view. Sounds great right? Now imagine that monitor is 2160 x 1200. Not so great.
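The thought experiment above can be framed as angular resolution (pixels per degree). This is a back-of-envelope sketch; the Vive's ~110 degree horizontal FOV and 1080 horizontal pixels per eye are approximations, and the desktop comparison assumes a 24" 16:10 (1920x1200) monitor viewed from 24 inches:

```python
import math

# Back-of-envelope angular resolution comparison (all figures approximate).
# Assumptions: HTC Vive ~110 deg horizontal FOV, 1080 horizontal px per eye;
# a 24" 16:10 (1920x1200) monitor viewed from 24 inches.

# Desktop monitor: horizontal size derived from diagonal and aspect ratio.
diag_in, aspect_w, aspect_h = 24.0, 16.0, 10.0
width_in = diag_in * aspect_w / math.hypot(aspect_w, aspect_h)
view_dist_in = 24.0
monitor_fov_deg = math.degrees(2 * math.atan((width_in / 2) / view_dist_in))
monitor_ppd = 1920 / monitor_fov_deg

# Headset: panel pixels spread across the whole field of view.
vive_ppd = 1080 / 110.0

print(f"monitor: ~{monitor_ppd:.0f} px/deg, Vive: ~{vive_ppd:.0f} px/deg")
```

Under these assumptions the desktop monitor lands around 40+ pixels per degree versus roughly 10 for the headset, which is the "not so great" gap in concrete terms.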
Pixel density is a huge issue. With current Oculus and Vive displays, you can not only distinguish individual pixels but also see the red, green, and blue subpixels.
I can't imagine doing work with that. Only some specialized tasks could work; I can see how something like Tilt Brush could fit into CAD work. But projecting your usual 2D app windows into VR sounds awful with the current consumer products in mind.
This hasn't been the case for me. I don't really have much problem reading text on the CV1 at all. The fact that you're not looking at a static image makes a pretty big difference. Tiny head motions give you kind of a temporal anti-aliasing effect that makes the resolution seem better than it is.