Most of us are now reasonably familiar with the basics of 3D computer graphics from TV and films. A model of the required objects or scenes is constructed from vector lines assembled into a 3D 'wireframe', using just enough polygons to give the form the required degree of detail. This open structure is 'clothed' with a skin of texture, giving it the appearance of a solid object. Almost any object or scene, real or imagined, can be constructed in this way, including detailed landscapes. Light sources are placed within the model to illuminate it, shading the surfaces to provide a more realistic impression of a 3D world, and cameras are placed to give the observer views and angles through virtual lenses of various capabilities. Objects, lights and cameras can all be moved within the scene, their changing positions over time recorded in various ways, allowing animation. The scene can then be 'rendered' at a suitable resolution.
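The wireframe idea described above can be sketched in a few lines of code: a polygon model is no more than a list of 3D vertices plus faces that index into them, and moving an object is just arithmetic on those vertices. This is an illustrative sketch only - the class and method names are assumptions, not taken from any particular graphics package.

```python
# A minimal sketch of a polygon 'wireframe' model: vertices are 3D
# points, faces are index triples into the vertex list. Illustrative
# names only; real packages use richer structures.
from dataclasses import dataclass

@dataclass
class Mesh:
    vertices: list  # [(x, y, z), ...]
    faces: list     # [(i, j, k), ...] indices into vertices

    def translate(self, dx, dy, dz):
        """Move the whole object within the scene (the basis of animation)."""
        self.vertices = [(x + dx, y + dy, z + dz) for x, y, z in self.vertices]

# A single triangle: the simplest possible polygon model.
tri = Mesh(vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)], faces=[(0, 1, 2)])
tri.translate(1, 2, 3)
print(tri.vertices[0])  # (1, 2, 3)
```

Recording a sequence of such translations over time is, in essence, how the positions of objects, lights and cameras are animated.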
The process of rendering is important to understand. In effect, the model - which exists only as a mathematical entity in the memory of the computer - is made visible as an array of coloured pixels by the rendering process. The model is displayed on screen according to the balance of speed against quality that the user has chosen. A simple wireframe or roughly textured version of the scene is rendered almost instantaneously for ease of use. Final versions for printing or recording to film or video can take minutes or even hours to render frame-by-frame at very high 'photo-realistic' levels of quality. Between these extremes, very good results can be achieved for animation or real-time VR display by compromising on accuracy - e.g. transparency and reflections are faked using a number of rendering tricks.
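At the heart of turning geometry and lights into coloured pixels is a shading calculation. One standard technique - a simplification of what real renderers do - is Lambertian (diffuse) shading, where a surface's brightness depends on the angle between it and the light. The sketch below is a hedged illustration with assumed function names, not the method of any specific renderer.

```python
# Sketch of diffuse (Lambertian) shading: the brightness of a surface
# point is the cosine of the angle between its normal and the light.
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert_shade(normal, light_dir, base_colour):
    """Return the pixel colour for a surface point under one light."""
    n = normalize(normal)
    l = normalize(light_dir)
    # Dot product = cosine of the angle; clamp so back-facing is black.
    intensity = max(0.0, sum(a * b for a, b in zip(n, l)))
    return tuple(int(c * intensity) for c in base_colour)

# A surface facing the light is fully lit; one edge-on gets no light.
print(lambert_shade((0, 0, 1), (0, 0, 1), (200, 150, 100)))  # (200, 150, 100)
print(lambert_shade((1, 0, 0), (0, 0, 1), (200, 150, 100)))  # (0, 0, 0)
```

A preview render might apply only this cheap calculation, while a 'photo-realistic' final render adds shadows, reflections and transparency - which is exactly the speed-versus-quality tradeoff described above.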
As computer power has improved, the speed of this animation has increased to the level that quite complex scenes can be rendered on demand 'in real time', allowing the viewer to navigate through a scene interactively. This is the basis of Virtual Reality.
Once a model can be rendered on demand, it is necessary to devise controls to allow the user to navigate through the scene and interact with objects found there. At the simplest level of 'Desktop VR', the user views the scene through a camera-like viewfinder on the screen itself - usually a window of some kind - navigating by means of on-screen icons, key presses, or by dragging the input device.
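The key-press navigation of 'Desktop VR' can be sketched as a simple state update: each press turns the camera or steps it forward along its current heading. The key names, step size and turn angle below are illustrative assumptions.

```python
# Sketch of desktop-VR keyboard navigation: each key press updates the
# camera's position or heading. Bindings and step sizes are illustrative.
import math

def move_camera(pos, heading_deg, key, step=1.0, turn=15.0):
    """Return the new (x, z) position and heading after one key press."""
    x, z = pos
    if key == "left":
        return (x, z), heading_deg - turn
    if key == "right":
        return (x, z), heading_deg + turn
    if key == "forward":
        # Step along the current heading (0 degrees = straight ahead).
        rad = math.radians(heading_deg)
        return (x + step * math.sin(rad), z + step * math.cos(rad)), heading_deg
    return (x, z), heading_deg  # unrecognised key: no movement

pos, heading = (0.0, 0.0), 0.0
pos, heading = move_camera(pos, heading, "forward")
print(pos)  # (0.0, 1.0)
```

Dragging an input device or clicking on-screen icons would feed the same update function; only the source of the events differs.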
Full immersive VR systems use a number of devices to enhance the feeling of being immersed in the virtual world rather than viewing it on a screen. Small LCD screens placed in a headset just in front of the eyes are a starting point in this process. 'Shutter' systems can display slightly different versions of the scene to each eye alternately to create a stereoscopic view, but this is still a non-interactive technique. Ideally, the headset should have tilt sensors to detect head movements, updating the display as the viewpoint changes. This gives a true interactive effect, but only from a fixed position in space. To move through the environment, you need tilt sensors on your body to detect the way you lean, forward or back, left or right, using the angle to control direction and speed. In the Osmose system, a further sensor detects your chest pressure as you breathe in or out, controlling vertical movement much like a scuba diver. Together, these control mechanisms give you an intuitive way to move through a virtual environment, almost as if you were flying over the landscape.
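The sensor-to-motion mapping described above - lean angle controlling horizontal direction and speed, breathing controlling vertical movement - can be sketched as a single function. The scale factors and the function name are illustrative assumptions, not details of the actual Osmose implementation.

```python
# Hedged sketch of an Osmose-style control mapping: body-lean sensors
# drive horizontal movement, chest expansion drives vertical movement.
# All scale factors are assumed values for illustration only.

def motion_from_sensors(lean_forward_deg, lean_side_deg, chest_expansion):
    """Map body-sensor readings to a (dx, dy, dz) velocity per frame.

    lean_forward_deg: + = leaning forward, - = leaning back
    lean_side_deg:    + = leaning right,   - = leaning left
    chest_expansion:  + = breathing in (rise), - = breathing out (sink)
    """
    speed_scale = 0.05   # assumed: movement units per degree of lean
    breath_scale = 0.5   # assumed: movement units per unit of expansion
    dx = lean_side_deg * speed_scale      # sideways drift
    dz = lean_forward_deg * speed_scale   # forward/backward, speed grows with angle
    dy = chest_expansion * breath_scale   # rise or sink, like a scuba diver
    return (dx, dy, dz)

# Leaning forward 10 degrees while breathing in: drift forward and upward.
print(motion_from_sensors(10.0, 0.0, 0.2))  # (0.0, 0.1, 0.5)
```

Applying this velocity to the viewpoint every frame, while the headset's tilt sensors update the viewing direction, produces the flying-over-the-landscape effect the paragraph describes.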
© Internet Archaeology
URL: http://intarch.ac.uk/journal/issue8/larkman/3d.html
Last updated: Fri Aug 25 2000