Reference: Mark Haigh-Hutchinson. 2009. "Real-Time Cameras: A Guide for Game Designers and Developers." Elsevier.
As we come closer to completing this series of articles, I have a varied assortment of guidelines from Haigh-Hutchinson that I have not shared yet. Many of these guidelines are echoed in Remi Lacoste's GDC talk on the camera for Tomb Raider (2013), which is recommended viewing for anyone interested in cameras or in conveying emotion in video games. His PowerPoint for that talk highlights a key difference between the camera I made in Part 10 and the camera it was meant to emulate: "Instead of using an animated layer playing on top of our cameras, we used a physics based camera shake system allowing us to embrace a more custom approach." My Timeline-based implementation served as a good way to experiment with Timelines, but in this post I recreate those camera behaviours using physics instead. I found Timelines useful for demonstrating my intentions in Hugline Miami too, but ultimately physics provides a better implementation when collision is involved. Today's grab-bag of wisdom happens to include three guidelines about camera collision.
- "Prevent the camera passing through (or close to) game objects or physical environmental features. If the near plane of the camera view frustum intersects render geometry, unwanted visual artifacts will be produced. These will certainly detract from the graphical quality of the game, and a best seem very unprofessional. This problem is typically easily avoidable; it should be considered as simply unacceptable in modern camera systems. A passable solution to avoid this problem is to apply transparency effects to the geometry in question. By effectively removing the geometry (and indeed actually doing so according to camera proximity), the camera may be allowed to pass through without creating distracting visual artifacts...
- Do not allow the camera to pass outside the game world. In the vast majority of cases, the camera is required to remain within the visible geometry of the world to prevent the player from seeing the construction of the game environment and thus destroying the illusion of the game play. There are limited cases where it is necessary to position the camera outside of the main game environment but only when care has been taken to hide construction details from the player...
- Avoid enclosed spaces with complex geometry (third person cameras). There needs to be sufficient space for the camera to be positioned such that the player character may be viewed in relation to the game play elements within the environment. Small enclosed spaces will require the camera to be moved outside the environment to achieve this. If kept in close proximity to the player character, it will likely result in poor framing and frequent loss of line of sight. Environmental construction should take camera framing requirements into consideration from the initial design stages."
We do not have to build a new camera class to get a simple physics-based effect on our camera. Let's add a component to the character and attach it with a spring, then track that Target component's position instead of the character capsule's position. Remember to only let the Spring stretch and shrink along its main axis, as we do not want our camera to wobble from side to side and give players nausea. In the next image I show the Anchor as a black sphere and the Target as a gold sphere. The Anchor maintains its position relative to the capsule, but the Target bounces up and down due to the Spring component, which is a Physics Constraint Component. Note that I used the Blueprint Side-Scroller template for these tests.
Notice the linear limits isolate Z as the only axis of motion. The Linear Limit's Soft setting could be used, but I found its damping less effective than the Linear Damping of the Target component itself. All of the Angular Limits are locked to avoid unwanted rotation on any of the camera's axes. Without the Linear Position Drive enabled for the Z axis, the Spring component has no ability to move the Target against gravity. Here the intended Z location of the Target is 0.0 with respect to its parent Capsule.
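The same setup can also be done in code. Below is a minimal C++ sketch of the constraint configuration described above, assuming a character class with a kinematic Anchor primitive, a simulated Target primitive, and a Physics Constraint Component named Spring; the class name, member names, and numeric values are illustrative rather than taken from the Blueprint.

```cpp
#include "PhysicsEngine/PhysicsConstraintComponent.h"
#include "Components/SphereComponent.h"

void AMyCharacter::ConfigureCameraSpring()
{
    // Only the Target simulates physics; the Anchor stays fixed relative to the capsule.
    Target->SetSimulatePhysics(true);
    Target->SetLinearDamping(5.0f); // higher damping = fewer bounces before settling

    // Lock X and Y so the Target can only slide along Z (no side-to-side wobble).
    Spring->SetLinearXLimit(LCM_Locked, 0.0f);
    Spring->SetLinearYLimit(LCM_Locked, 0.0f);
    Spring->SetLinearZLimit(LCM_Free, 0.0f);

    // Lock all angular limits so the camera never picks up unwanted rotation.
    Spring->SetAngularSwing1Limit(ACM_Locked, 0.0f);
    Spring->SetAngularSwing2Limit(ACM_Locked, 0.0f);
    Spring->SetAngularTwistLimit(ACM_Locked, 0.0f);

    // The Z position drive pulls the Target back toward Z = 0 relative to its parent;
    // without it the Spring cannot move the Target against gravity.
    Spring->SetLinearPositionDrive(false, false, true);
    Spring->SetLinearPositionTarget(FVector::ZeroVector);
    Spring->SetLinearDriveParams(/*PositionStrength*/ 500.0f, /*VelocityStrength*/ 50.0f, /*ForceLimit*/ 0.0f);

    Spring->SetConstrainedComponents(Anchor, NAME_None, Target, NAME_None);
}
```

The drive and damping numbers here are just starting points; in practice they need tuning against how far you want the camera to overshoot on a landing.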
The gold Target sphere is the parent of an exact copy of the template's SpringArm1, with an added child Scene Component. Our new Camera Actor interpolates to the position of Scene, which by default attaches to the end of the SpringArm copy.
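For reference, the interpolation itself is simple. This is a hedged sketch of the Camera Actor's tick, assuming a hypothetical AFollowCamera actor with a pointer to the Scene component (SceneTarget) and a float InterpSpeed; neither name comes from the original project.

```cpp
#include "FollowCamera.h" // hypothetical header declaring SceneTarget and InterpSpeed

void AFollowCamera::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);

    if (SceneTarget) // USceneComponent* set to the Scene child of the SpringArm copy
    {
        // Ease toward the Scene component's world position each frame.
        const FVector Desired = SceneTarget->GetComponentLocation();
        const FVector NewLocation = FMath::VInterpTo(GetActorLocation(), Desired, DeltaSeconds, InterpSpeed);
        SetActorLocation(NewLocation);
    }
}
```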
Note that only Target has Simulate Physics enabled. The most important value is Linear Damping, which reduces the number of times the camera bounces before settling into a stable position. Typically one bounce is plenty for cameras.
One of the chief benefits of this approach over the Timeline approach I used in Part 10 is that the object will respect collision with world geometry and stay within the bounds of the level. Timelines cannot adjust the position of objects under their control in response to collision, although in some applications a designer could modify their output to match the intended collision result. My advice is to use Timelines when no collisions are expected, or when collisions will only affect objects that are not under Timeline control. In Part 10, I found a bug with my camera that allowed it to move under the ground plane when the player jumped and landed. Enabling collision on the camera would not solve the problem, because the collision with the ground would not be smooth. Here, physics gives us room to adjust the camera's collision values and dampen the impact using Hooke's Law (a spring equation commonly used to smooth camera position).
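To make the Hooke's Law point concrete, here is a small standalone sketch of a damped spring step, the kind of response the physics engine is solving for us. It assumes unit mass and semi-implicit Euler integration; the struct and parameter names are illustrative, not part of the engine's internal solver.

```cpp
#include "CoreMinimal.h"

struct FSpringState
{
    FVector Position;
    FVector Velocity;
};

// F = -k * x - c * v, where x is the displacement from the rest position.
FSpringState StepSpring(const FSpringState& State, const FVector& RestPosition,
                        float Stiffness, float Damping, float DeltaTime)
{
    const FVector Displacement = State.Position - RestPosition;
    const FVector Acceleration = -Stiffness * Displacement - Damping * State.Velocity; // unit mass

    FSpringState Next;
    Next.Velocity = State.Velocity + Acceleration * DeltaTime; // semi-implicit Euler
    Next.Position = State.Position + Next.Velocity * DeltaTime;
    return Next;
}
```

Higher Damping values settle faster with fewer oscillations, which is exactly what the Linear Damping value on the Target component controls in the setup above.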
The other benefit Lacoste mentions is that it enables developers "to create shakes that were unique and suitable for different situations." For example, each drop-down or explosion reaction in a game with a physics-based camera shake system could be baked as a unique behaviour or even created at run-time for maximum variation in camera behaviours.
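As a sketch of what run-time variation might look like with this setup, the snippet below kicks the simulated Target with a randomized impulse whenever a landing or explosion occurs. The function name, base strength, and randomization range are assumptions for illustration, not from Lacoste's talk or my Blueprint.

```cpp
void AMyCharacter::ApplyCameraKick(float BaseStrength)
{
    if (Target && Target->IsSimulatingPhysics())
    {
        // Randomize the strength slightly so no two landings feel identical.
        const float Strength = BaseStrength * FMath::FRandRange(0.8f, 1.2f);

        // Push straight down; the Z-only constraint and position drive pull the Target back.
        Target->AddImpulse(FVector(0.0f, 0.0f, -Strength), NAME_None, /*bVelChange*/ true);
    }
}
```

Calling this from a landing event with a strength scaled by fall height would give each drop-down its own response, which is the kind of per-situation shake Lacoste describes.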
A full-scale production implementation of a physics-based camera may cost more than an animated camera, but it has notable benefits. Animated cameras offer complete control, but variation is expensive, and global issues typically must be solved in every instance of the camera rather than by tweaking global physics parameters. A Camera Designer must weigh the costs and benefits of each approach, and consider the overall game design, before choosing between physics-based and animation-based implementations.
Bonus Content:
http://gdcvault.com/play/1917/Directing-the-Prince-Of-Persia
If you are interested in creating gameplay in enclosed spaces without having the camera banging off the walls, check out Prince of Persia 2008 or the link above for a technical camera GDC talk by Jonathan Bard. The talk covers many aspects of camera development with a focus on sustainable practices to create game agnostic cameras that can be used for multiple projects. While its focus is not level design, it highlights camera challenges, details the "camera artist" role, and mentions level design as part of the artistic workflow for Ubisoft camera artists. The game's solution for enclosed areas is to open up the ceiling and floor when the walls get close together, and it works beautifully to give room for the camera movements and create captivating backgrounds for their in-game camera shots.