Hacker News

So if I understand right: for the real-time version, rather than querying the NeRF to compute each frame's pixels on the fly, they instead use the NeRF to pre-generate 3D voxel data representing the scene, which can then be rendered in real time with more traditional voxel rendering?


Yes and no.

The baked representation preserves the exact view-dependent appearance model that the NeRF learned, while traditional voxel rendering is limited to conventional lighting equations.

You would have a hard time voxelizing a NeRF in the traditional sense, because you can't extract a conventional lighting equation out of it.
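To make the distinction concrete: what the baked grid has to preserve is not a surface-shading equation but the NeRF's emission-absorption volume rendering along each ray. A minimal sketch, assuming the grid stores a density and an RGB color per sample along the ray (the function name, shapes, and values here are my own illustration, not from the work being discussed):

```python
import numpy as np

def composite_ray(sigmas, colors, delta):
    """Emission-absorption compositing along one ray, as in NeRF.

    sigmas: (N,) volume densities sampled along the ray
    colors: (N, 3) RGB emitted at each sample
    delta:  spacing between consecutive samples
    """
    alphas = 1.0 - np.exp(-sigmas * delta)       # per-sample opacity
    trans = np.cumprod(1.0 - alphas)             # transmittance after each sample
    trans = np.concatenate([[1.0], trans[:-1]])  # transmittance before each sample
    weights = trans * alphas                     # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)  # composited (3,) RGB

# An opaque first sample occludes everything behind it:
rgb = composite_ray(np.array([1e9, 1.0]),
                    np.array([[1.0, 0.0, 0.0],
                              [0.0, 1.0, 0.0]]),
                    delta=1.0)
# rgb ≈ [1, 0, 0] — the red front sample wins
```

Because the baked renderer keeps this same compositing rule (and the view-dependent color the NeRF learned), there is no point where a traditional lighting model could be substituted.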


I think this is related to Hinton's work on capsules, which I believe are a more reprojectable primitive. Maybe you can coax a voxel




