In 3D video games, a virtual camera system aims at controlling a camera or a set of cameras to display a view of a 3D virtual world. Camera systems are used in video games where their purpose is to show the action at the best possible angle; more generally, they are used in 3D virtual worlds when a third-person view is required.

Virtual camera system demo showing parameters of the camera that can be adjusted

As opposed to filmmakers, virtual camera system creators have to deal with a world that is interactive and unpredictable. It is not possible to know where the player character is going to be in the next few seconds; therefore, it is not possible to plan the shots as a filmmaker would do. To solve this issue, the system relies on certain rules or artificial intelligence to select the most appropriate shots.

There are mainly three types of camera systems. In fixed camera systems, the camera does not move at all, and the system displays the player's character in a succession of still shots. Tracking cameras, on the other hand, follow the character's movements. Finally, interactive camera systems are partially automated and allow the player to directly change the view. To implement camera systems, video game developers use techniques such as constraint solvers, artificial intelligence scripts, or autonomous agents.

Third-person view

In video games, "third-person" refers to a graphical perspective rendered from a fixed distance behind and slightly above the player character. This viewpoint allows players to see a more strongly characterized avatar and is most common in action games and action adventure games. Games with this perspective often make use of positional audio, where the volume of ambient sounds varies depending on the position of the avatar.[1]

There are primarily three types of third-person camera systems: the "fixed camera systems" in which the camera positions are set during the game creation; the "tracking camera systems" in which the camera simply follows the player's character; and the "interactive camera systems" that are under the player's control.

Fixed

Selection of shots in Resident Evil 2 that aim at creating tension

With a fixed camera system, the developers set the properties of the camera, such as its position, orientation or field of view, during the game creation. The camera views will not change dynamically, so the same place will always be shown under the same set of views. Games that use fixed cameras include Grim Fandango (1998) and the early Resident Evil and God of War games.[2]

One advantage of this camera system is that it allows the game designers to use the language of film, creating mood through camerawork and selection of shots. Games that use this kind of technique are often praised for their cinematic qualities.[3] Many games with fixed cameras use tank controls, whereby players control character movement relative to the position of the player character rather than the camera position;[4] this allows the player to maintain direction when the camera angle changes.[5]
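The distinction between tank controls and camera-relative controls can be made concrete with a short sketch. This is a minimal illustration, not any particular game's implementation; the function names and the 2D movement model are assumptions chosen for clarity.

```python
import math

def tank_move(pos, heading, forward_input, turn_input,
              speed=1.0, turn_rate=math.radians(90), dt=1.0):
    """Tank controls: input is interpreted relative to the character's own
    heading, so a camera cut does not change what 'forward' means."""
    heading += turn_input * turn_rate * dt
    x, y = pos
    x += math.cos(heading) * forward_input * speed * dt
    y += math.sin(heading) * forward_input * speed * dt
    return (x, y), heading

def camera_relative_move(pos, camera_yaw, stick_x, stick_y, speed=1.0, dt=1.0):
    """Camera-relative controls: the stick vector is rotated into world space
    by the camera's yaw, so a camera cut changes the movement direction."""
    wx = stick_y * math.cos(camera_yaw) - stick_x * math.sin(camera_yaw)
    wy = stick_y * math.sin(camera_yaw) + stick_x * math.cos(camera_yaw)
    x, y = pos
    return (x + wx * speed * dt, y + wy * speed * dt)
```

With `tank_move`, the camera yaw never appears in the movement calculation, which is why a sudden change of fixed camera angle cannot reverse the player's direction of travel mid-stride.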

Tracking

An illustration of a protagonist whom a player controls and a tracking camera just behind, slightly above, and slightly facing down towards that character

A tracking camera follows the character from behind. The player does not control the camera in any way: they cannot, for example, rotate it or move it to a different position. This type of camera system was very common in early 3D games such as Crash Bandicoot or Tomb Raider because it is simple to implement. However, it has a number of drawbacks. In particular, if the current view is unsuitable (either because it is occluded by an object, or because it is not showing what the player is interested in), it cannot be changed, since the player does not control the camera.[6][7][8] This viewpoint also causes difficulty when a character turns or stands facing out against a wall; the camera may jerk or end up in awkward positions.[1]
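The core of a tracking camera is a single placement rule: stay a fixed distance behind the character, slightly above, looking at them. The sketch below is a simplified illustration of that idea; the function name, offsets, and coordinate convention (z up) are assumptions, not any particular engine's API.

```python
import math

def tracking_camera(char_pos, char_yaw, distance=4.0, height=2.0):
    """Place the camera a fixed distance behind the character and slightly
    above, aimed back at the character; the player has no control over it."""
    cx = char_pos[0] - distance * math.cos(char_yaw)
    cy = char_pos[1] - distance * math.sin(char_yaw)
    cam_pos = (cx, cy, char_pos[2] + height)
    look_at = (char_pos[0], char_pos[1], char_pos[2] + 1.0)  # aim at roughly head height
    return cam_pos, look_at
```

Because the camera position is a pure function of the character's pose, a quick turn instantly swings the ideal camera position 180 degrees, which is exactly the jerky behavior described above; real systems smooth this with interpolation and collision checks.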

Interactive

Instead of staying behind Mario, the camera intelligently rotates to show the path (Super Mario 64).

This type of camera system is an improvement over the tracking camera system. While the camera still tracks the character, some of its parameters, such as its orientation or its distance to the character, can be changed by the player. On video game consoles, the camera is often controlled with an analog stick to provide good accuracy, whereas in PC games it is usually controlled with the mouse. This is the case in games such as Super Mario Sunshine or The Legend of Zelda: The Wind Waker. Fully interactive camera systems are often difficult to implement well: GameSpot argued that much of Super Mario Sunshine's difficulty comes from having to control the camera.[9] The Legend of Zelda: The Wind Waker was more successful at it; IGN called its camera system "so smart that it rarely needs manual correction".[10]

One of the first games to offer an interactive camera system was Super Mario 64. The game had two types of camera systems, between which the player could switch at any time. The first was a standard tracking camera system, except that it was partly driven by artificial intelligence: the system was "aware" of the structure of the level and could therefore anticipate certain shots. For example, in the first level, when the path to the hill is about to turn left, the camera automatically starts looking to the left as well, anticipating the player's movements. The second type let the player control the camera relative to Mario's position: pressing the left or right buttons rotates the camera around Mario, while pressing up or down moves the camera closer to or farther from him.[11][12]

Implementation

There is a large body of research on how to implement a camera system.[13] The role of a constraint solver software is to generate the best possible shot given a set of visual constraints. In other words, the constraint solver is given a requested shot composition such as "show this character and ensure that he covers at least 30 percent of the screen space". The solver will then use various methods to try to create a shot that would satisfy this request. Once a suitable shot is found, the solver outputs the coordinates and rotation of the camera, which can then be used by the graphic engine renderer to display the view.[14]

In some camera systems, if no solution can be found, constraints are relaxed. For example, if the solver cannot generate a shot in which the character occupies 30 percent of the screen space, it might drop the screen-space constraint and simply ensure that the character is visible at all, for example by zooming out.[15]
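A toy generate-and-test solver makes the two ideas above concrete: sample candidate camera placements, test each against the hard constraint, and fall back to a relaxed constraint if none succeeds. This is a deliberately simplified sketch using a pinhole projection model for screen coverage; real solvers handle full 3D composition, occlusion, and many simultaneous constraints.

```python
import math

def coverage(char_height, dist, fov=math.radians(60)):
    """Approximate fraction of the vertical screen the character fills,
    using a simple pinhole-camera model."""
    return char_height / (2.0 * dist * math.tan(fov / 2.0))

def solve_shot(char_height, min_dist=1.0, max_dist=20.0, target=0.30, step=0.5):
    """Generate-and-test: sweep camera distances from far to near and return
    the first satisfying the 30%-coverage constraint. If no distance works,
    relax the constraint: zoom all the way out and merely keep the character
    visible."""
    d = max_dist
    while d >= min_dist:
        if coverage(char_height, d) >= target:
            return d, True       # hard constraint satisfied
        d -= step
    return max_dist, False       # relaxed: visible, but under 30% coverage
```

The boolean return value lets the caller know whether it received an exact or a relaxed solution, which a production system might use to retry with a different shot composition.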

Some camera systems use predefined scripts to decide how to select the current shot for commonly occurring shot scenarios, called film idioms. Typically, a script is triggered as a result of an action. For instance, when the player's character initiates a conversation with another character, the "conversation" script is triggered. This script contains instructions on how to "shoot" a two-character conversation; the shots will be a combination of, for instance, over-the-shoulder shots and close-up shots. Such script-based approaches may switch the camera between a set of predefined cameras, or rely on a constraint solver to generate the camera coordinates and so account for variability in scene layout. This scripted approach, together with the use of a constraint solver to compute virtual cameras, was first proposed by Drucker.[16] Subsequent research demonstrated how a script-based system could automatically switch cameras to view conversations between avatars in a real-time chat application.[17]
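A film idiom can be represented as little more than a predefined shot list that the system cycles through as the scene unfolds. The sketch below is a hypothetical "conversation" idiom of the kind described above; the shot names and data layout are illustrative assumptions, not taken from Drucker's or He et al.'s systems.

```python
import itertools

# Hypothetical "conversation" idiom: alternate over-the-shoulder shots on
# each speaker, then move in to alternating close-ups.
CONVERSATION_IDIOM = [
    ("over_shoulder", "A"),   # shot over B's shoulder, framing speaker A
    ("over_shoulder", "B"),
    ("close_up", "A"),
    ("close_up", "B"),
]

def conversation_shots(dialogue_lines):
    """Pair each dialogue line with the next shot in the idiom, cycling
    through the shot list for as long as the conversation lasts."""
    return list(zip(dialogue_lines, itertools.cycle(CONVERSATION_IDIOM)))
```

Each returned shot descriptor would then be handed either to a predefined camera or to a constraint solver that computes concrete camera coordinates for the current scene layout.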

Bill Tomlinson used a more original approach to the problem. He devised a system in which the camera is an autonomous agent with its own personality: the style and rhythm of the shots are affected by its mood. Thus a happy camera will "cut more frequently, spend more time in close-up shots, move with a bouncy, swooping motion, and brightly illuminate the scene".[18]
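One way to realize such an agent is to let mood select a bundle of style parameters that the camera consults every frame. The numbers and parameter names below are purely illustrative assumptions in the spirit of Tomlinson's description, not values from his system.

```python
# Illustrative mood-to-style table: a happy camera cuts sooner, prefers
# close-ups, and lights the scene more brightly.
MOOD_STYLES = {
    "happy": {"cut_interval_s": 2.0, "close_up_prob": 0.7, "brightness": 1.2},
    "sad":   {"cut_interval_s": 6.0, "close_up_prob": 0.2, "brightness": 0.7},
}

def should_cut(mood, time_since_cut):
    """Decide whether the camera agent cuts to a new shot this frame,
    based on how long the current shot has been held for its mood."""
    return time_since_cut >= MOOD_STYLES[mood]["cut_interval_s"]
```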

While much prior work on automated virtual camera control has aimed to reduce the need for a human to manually control the camera, the Director's Lens solution instead computes and proposes a palette of suggested virtual camera shots, leaving the creative shot selection to the human operator. When computing subsequent suggestions, the system analyzes the visual compositions and editing patterns of previously recorded shots to propose camera shots that conform to continuity conventions, such as not crossing the line of action and matching the placement of virtual characters so that they appear to look at one another across cuts, and it favors shots that the human operator had previously used in sequence.[19]
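The line-of-action convention mentioned above (the 180-degree rule) reduces to a simple geometric test: two camera positions may be cut between only if they lie on the same side of the line joining the two subjects. A minimal 2D sketch of that check, with an assumed function name, looks like this:

```python
def same_side_of_line(subj_a, subj_b, cam_p, cam_q):
    """Return True if cameras cam_p and cam_q lie on the same side of the
    line of action running from subj_a to subj_b (all 2D points).
    Cutting between cameras on opposite sides would 'cross the line' and
    make the characters appear to swap screen positions."""
    ax, ay = subj_a
    bx, by = subj_b
    def side(pt):
        # Sign of the 2D cross product: which side of the line pt is on.
        return (bx - ax) * (pt[1] - ay) - (by - ay) * (pt[0] - ax)
    return side(cam_p) * side(cam_q) > 0
```

A suggestion engine in the spirit of Director's Lens could use such a predicate to filter out candidate shots that would violate continuity with the previously recorded shot.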

In mixed-reality applications

In 2010, Microsoft released the Kinect, a 3D scanner/webcam hybrid peripheral that provides full-body detection of Xbox 360 players and hands-free control of the user interfaces of video games and other software on the console. The device was later modified by Oliver Kreylos[20] of the University of California, Davis, who in a series of YouTube videos showed himself combining the Kinect with a PC-based virtual camera.[21] Because the Kinect can detect a full range of depth (through computer stereo vision and structured light) within a captured scene, Kreylos demonstrated that the Kinect and the virtual camera allow free-viewpoint navigation within that range of depth, although the camera could only capture the scene as seen from the front of the Kinect, resulting in fields of black, empty space where the camera was unable to capture video within the field of depth. Kreylos later elaborated on the modification by combining the video streams of two Kinects to further improve the video capture within the view of the virtual camera.[22] Kreylos' developments using the Kinect were covered, among the works of others in the Kinect hacking and homebrew community, in a New York Times article.[23]

Real-time recording and motion tracking

Virtual cameras have been developed that allow a director to film motion capture and view a digital character's movements in real time[24] in a pre-constructed digital environment, such as a house or spaceship.[25] Resident Evil 5 was the first video game to use the technology,[26] which was developed for the 2009 film Avatar.[25][27] Using motion capture to control the position and orientation of a virtual camera lets the operator intuitively move and aim it by simply walking about and turning the virtual camera rig. A virtual camera rig consists of a portable monitor or tablet device, motion sensors, an optional support framework, and optional joystick or button controls, commonly used to start or stop recording and to adjust lens properties.[28] In 1992, Michael McKenna of MIT's Media Lab demonstrated the earliest documented virtual camera rig when he fixed a Polhemus magnetic motion sensor and a 3.2-inch portable LCD TV to a wooden ruler.[29] The Walkthrough Project at the University of North Carolina at Chapel Hill produced a number of physical input devices for virtual camera view control, including dual three-axis joysticks and a billiard-ball-shaped prop known as the UNC Eyeball, which featured an embedded six-degree-of-freedom motion tracker and a digital button.[30]
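At its simplest, driving a virtual camera from a motion tracker is a coordinate mapping: the rig's tracked pose is offset into the virtual set and its translation is scaled, so an operator can cover a large virtual environment inside a small capture volume. The sketch below illustrates that mapping under assumed names and a simplified pose representation (position plus yaw rather than a full rotation).

```python
def rig_to_virtual(tracker_pos, tracker_yaw, world_scale=1.0,
                   origin=(0.0, 0.0, 0.0)):
    """Map a physical camera-rig pose, as reported by the motion tracker,
    into the virtual set: translate by the chosen virtual origin and scale
    the translation so one physical step can cover several virtual meters."""
    x, y, z = tracker_pos
    ox, oy, oz = origin
    virtual_pos = (ox + x * world_scale,
                   oy + y * world_scale,
                   oz + z * world_scale)
    return virtual_pos, tracker_yaw  # orientation is passed through unscaled
```

Scaling only the translation and passing orientation through unchanged matches the intuition of such rigs: turning the physical monitor turns the virtual camera one-to-one, while walking can be amplified to traverse a set larger than the stage.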

References

  1. ^ a b Rollings, Andrew; Ernest Adams (2006). Fundamentals of Game Design. Prentice Hall. ISBN 9780131687479.
  2. ^ Casamassina, Matt. "fixed-camera". giantbomb.
  3. ^ Casamassina, Matt. "Resident Evil Review". IGN. Archived from the original on 25 March 2009. Retrieved 22 March 2009.
  4. ^ "A eulogy for tank controls". PC Gamer. 20 February 2015. Retrieved 5 March 2018.
  5. ^ Matulef, Jeffrey (26 January 2015). "Bringing out the Dead: Tim Schafer reflects back on Grim Fandango". Eurogamer. Retrieved 5 March 2018.
  6. ^ "Sonic Adventure Review". IGN. Archived from the original on 11 February 2008. Retrieved 22 March 2009.
  7. ^ "Tomb Raider: The Last Revelation Review". IGN. 11 December 1999. Retrieved 22 March 2009.
  8. ^ Carle, Chris. "Enter the Matrix Review". IGN. Archived from the original on 25 March 2009. Retrieved 22 March 2009.
  9. ^ Gerstmann, Jeff (4 October 2002). "Super Mario Sunshine Review for GameCube". GameSpot. Archived from the original on 26 March 2009. Retrieved 22 March 2009.
  10. ^ Casamassina, Matt (25 March 2003). "The Legend of Zelda: The Wind Waker Review". IGN. Archived from the original on 26 March 2009. Retrieved 22 March 2009.
  11. ^ "15 Most Influential Video Games of All Time: Super Mario 64". GameSpot. Archived from the original on 26 March 2009. Retrieved 22 March 2009.
  12. ^ "The Essential 50 Part 36: Super Mario 64 from". Retrieved 22 March 2009.
  13. ^ " The virtual camera control bibliography". Retrieved 6 May 2011.
  14. ^ Bares, William; Scott McDermott; Christina Boudreaux; Somying Thainimit (2000). "Virtual 3D camera composition from frame constraints" (PDF). International Multimedia Conference. California, United States: Marina del Rey: 177–186. Archived from the original (PDF) on 10 July 2010. Retrieved 22 March 2009.
  15. ^ Drucker, Steven M.; David Zeltzer (1995). CamDroid: A System for Implementing Intelligent Camera Control (PDF). ISBN 978-0-89791-736-0. Archived from the original (PDF) on 5 June 2011. Retrieved 22 March 2009.
  16. ^ Drucker, Steven M.; David Zeltzer (1995). CamDroid: A System for Implementing Intelligent Camera Control (PDF). ISBN 978-0-89791-736-0. Archived from the original (PDF) on 5 June 2011. Retrieved 15 March 2015.
  17. ^ He, Li-wei; Michael F. Cohen; David H. Salesin (1996). "The Virtual Cinematographer: A Paradigm for Automatic Real-Time Camera Control and Directing" (PDF). International Conference on Computer Graphics and Interactive Techniques. New York. 23rd: 217–224. Archived from the original (PDF) on 28 August 2008. Retrieved 22 March 2009.
  18. ^ Tomlinson, Bill; Bruce Blumberg; Delphine Nain (2000). "Expressive autonomous cinematography for interactive virtual environments". Proceedings of the Fourth International Conference on Autonomous Agents (PDF). Barcelona, Spain. pp. 317–324. doi:10.1145/336595.337513. ISBN 978-1-58113-230-4. S2CID 5532829. Archived (PDF) from the original on 29 March 2005. Retrieved 22 March 2009.
  19. ^ Lino, Christophe; Marc Christie; Roberto Ranon; William Bares (1 December 2011). "The director's lens". Proceedings of the 19th ACM international conference on Multimedia. ACM. pp. 323–332. doi:10.1145/2072298.2072341. ISBN 9781450306164. S2CID 14079689.
  20. ^ "Oliver Krelos' Homepage".
  21. ^ Kevin Parrish (17 November 2010). "Kinect Used As 3D Video Capture Tool". Tom's Hardware.
  22. ^ Tim Stevens (29 November 2010). "Two Kinects join forces to create better 3D video, blow our minds (video)". Engadget.
  23. ^ Jenna Wortham (21 November 2010). "With Kinect Controller, Hackers Take Liberties". The New York Times.
  24. ^ Hsu, Jeremy (27 February 2009). ""Virtual Camera" Captures Actors' Movements for Resident Evil 5". Popular Science. Archived from the original on 2 March 2009.
  25. ^ a b Lewinski, John Scott (27 February 2009). "Resident Evil 5 Offers Sneak Peek at Avatar's 'Virtual Camera'". Wired. Retrieved 25 February 2015.
  26. ^ Lowe, Scott (27 February 2009). "The Tech Behind RE5". IGN. Retrieved 24 February 2015.
  27. ^ Thompson, Anne (1 January 2010). "How James Cameron's Innovative New 3D Tech Created Avatar". Popular Mechanics. Retrieved 25 February 2015.
  28. ^ "Optitrack InsightVCS". Retrieved 15 March 2015.
  29. ^ Michael McKenna (March 1992). "Interactive viewpoint control and three-dimensional operations". Proceedings of the 1992 Symposium on Interactive 3D Graphics - SI3D '92. ACM. pp. 53–56. doi:10.1145/147156.147163. ISBN 978-0897914673. S2CID 17308648.
  30. ^ Frederick Brooks Jr. (June 1992). "Final Technical Report – Walkthrough Project" (PDF). Tr92-026. University of North Carolina at Chapel Hill. Archived (PDF) from the original on 23 September 2015. Retrieved 23 March 2015.