Remote Rendering in Virtual Reality
The use of Virtual Reality to present realistic virtual imagery to users has become increasingly prevalent. Remote rendering addresses the problem that some virtual scenes are very large, which can introduce noticeable delay between a user's movement and what is actually displayed on their device.

We plan to test subjects under three conditions: (1) a virtual space in which an RGBD image is simplified via mesh compression, providing a lighter-weight representation of the scene; (2) a virtual space in which an RGBD image is rendered on a server, transmitted to the client, and updated in response to the user's head movement; and (3) a virtual space in which remote rendering is not used, so the client stores the entire scene locally.

Subjects will be volunteers who already own VR headsets and are willing to participate. We will ask them to complete a set of tasks, some drawn from Forsberg et al.'s experiment on 3D vector field visualization: (1) identifying whether a given point is a critical point, and (2) identifying whether there is a "flow" or a "path" in the vector field between two given points (Forsberg 2009).

The experiment has not yet been conducted, but our hypothesis is that the experience will generally be best in the fully local condition, since no information is lost in that scene, even though it is the least efficient in terms of storage and processing.
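The second condition, in which the client updates a server-rendered RGBD frame in response to head movement, is commonly realized by warping pixels through their depth values. The sketch below is a minimal, hypothetical illustration of that idea for a single pixel, assuming a pinhole camera model and a translation-only head motion; the function and parameter names are ours, not from any particular system described above.

```python
def unproject(u, v, depth, fx, fy, cx, cy):
    """Lift one pixel (u, v) with its depth to a 3-D point in the
    server camera frame, using pinhole intrinsics (fx, fy, cx, cy)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return x, y, depth

def project(x, y, z, fx, fy, cx, cy):
    """Project a 3-D point back onto the image plane as pixel coordinates."""
    return fx * x / z + cx, fy * y / z + cy

def reproject_pixel(u, v, depth, head_translation, intrinsics):
    """Warp one pixel of the server's RGBD frame to the client's new
    head pose. Translation only, for brevity; a full implementation
    would also apply the head rotation and fill disocclusion holes."""
    fx, fy, cx, cy = intrinsics
    x, y, z = unproject(u, v, depth, fx, fy, cx, cy)
    tx, ty, tz = head_translation
    # Express the point relative to the moved head, then re-project.
    return project(x - tx, y - ty, z - tz, fx, fy, cx, cy)
```

With zero head motion the warp is the identity, e.g. `reproject_pixel(100, 50, 2.0, (0, 0, 0), (500, 500, 320, 240))` returns `(100.0, 50.0)`; a nonzero translation shifts the pixel according to its depth, which is what lets the client respond to head movement between server frames.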