Tuesday 14 May 2013

Components of Game Engines

Graphic rendering:

Graphic rendering is what keeps the game running without lag or dropped frames. In the majority of games, the renderer only loads the section of the map that falls within the player's view, which helps minimise glitches and reduce system usage: if the player is in one part of the level or world, there is no reason to waste CPU time on AI that is still running on the other side of the map. In some engines, such as the Frostbite engine, the graphics are rendered before the players have entered the game; because the maps have to be quite large, the buildings and objects are usually kept fairly basic, otherwise the engine could produce problems and glitches for the players. Occasionally the engine will glitch and render graphics blurry or discoloured, but this happens rarely and usually only in a very small part of the area.

Anti-aliasing is a feature used in most video games to smooth off the edges of geometry so the game looks tidier and nicer overall. Where two parts of the scene meet (in this case a building and the ground), the edges can look very flat and sharp; anti-aliasing softens them, making the game seem more realistic and improving the graphical output. On a console such as the PS3, enabling anti-aliasing can lower the frame rate because of the extra graphical work the system has to do.

Most modern game engines have some sort of shadow system built into them. Shadows give a realistic effect wherever there is a light source in the scene; if the setting is dark, they may not be necessary, so they can sometimes be turned off to keep the frame rate up. The shadow system works with the engine's lighting calculations to work out when and where shadows are needed in the world. In some games there also comes a time when the graphics need to be lowered to help with the frame rate and the rendering of the world: at the highest quality settings there are more pixels to shade and every aspect of the game is more demanding, which causes the frame rate to drop, especially if the graphics card is weak.

Radiosity is used when a light source such as the sun cannot directly reach an area (such as the inside of a building): light bouncing in indirectly, through windows and other openings, illuminates the interior. Depth testing is a rendering technique that checks whether a surface is hidden behind something closer to the camera; surfaces that fail the test are not drawn, and objects out of the player's line of sight are only rendered as the player draws closer, which prevents too many objects being drawn at once and protects the frame rate. Most game engines also feature some kind of pixel-shading stage, which runs on the GPU, uses lighting and bump mapping, and changes with the scene orientation to create silhouettes and other in-game effects.
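
The depth test mentioned above can be shown with a tiny example. The sketch below is illustrative only, assuming a made-up 4x4 "screen" and a hypothetical draw_fragment helper: for each pixel a surface covers, the new fragment is only kept if it is closer to the camera than whatever has already been drawn there.

```python
# Minimal depth-buffer sketch (hypothetical 4x4 "screen" and colour names).
WIDTH, HEIGHT = 4, 4
depth_buffer = [[float("inf")] * WIDTH for _ in range(HEIGHT)]   # nothing drawn yet
colour_buffer = [["sky"] * WIDTH for _ in range(HEIGHT)]

def draw_fragment(x, y, depth, colour):
    """Depth test: keep the fragment only if it is nearer than the stored depth."""
    if depth < depth_buffer[y][x]:
        depth_buffer[y][x] = depth
        colour_buffer[y][x] = colour

draw_fragment(1, 1, depth=10.0, colour="wall")    # far wall drawn first
draw_fragment(1, 1, depth=2.0, colour="player")   # player in front replaces it
draw_fragment(1, 1, depth=5.0, colour="tree")     # tree behind the player is rejected
print(colour_buffer[1][1])                        # -> "player"
```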
Culling techniques are the part of the graphics system that decides which objects are actually drawn, based on what is in visible view. There are three common culling methods:

Back face culling:
Because meshes are hollow rather than solid objects, the back sides of some polygons never face the camera, which means there is no reason to draw those faces. This is what causes the effect in video games where, if the camera moves inside a mesh, the mesh usually seems to disappear (a short sketch of the idea follows this list).

Contribution culling:
Often, objects are so far away that they barely contribute to the final image. These objects are thrown away if their screen projection is too small.

Occlusion culling:
Objects that are completely hidden behind other opaque objects can be culled. This is a very popular way to speed up the rendering of large scenes with medium to high depth complexity.
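
As a rough illustration of back face culling, here is a minimal sketch, assuming the usual convention that a triangle's vertices are listed counter-clockwise when seen from the front (the helper names such as is_front_facing are made up): the face normal is computed from the winding order, and the triangle is only drawn when that normal points towards the camera.

```python
# Minimal back-face culling sketch: skip triangles whose normal faces away from the camera.
def subtract(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def is_front_facing(v0, v1, v2, camera_position):
    normal = cross(subtract(v1, v0), subtract(v2, v0))  # face normal from winding order
    to_camera = subtract(camera_position, v0)
    return dot(normal, to_camera) > 0                   # facing the camera: draw it

# Triangle lying in the z = 0 plane, camera in front of it at z = 5
triangle = ((0, 0, 0), (1, 0, 0), (0, 1, 0))
print(is_front_facing(*triangle, camera_position=(0, 0, 5)))    # True  -> drawn
print(is_front_facing(*triangle, camera_position=(0, 0, -5)))   # False -> culled
```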

Animation systems:

Animation systems play a sequence of models or poses one after the other, giving the illusion that a character or object is moving. The animator links certain parts of the object together, for example the spine to the shoulders, the shoulders to the arms and the arms to the hands; this gives the computer an idea of how the structure of the object fits together. The animator then programs sequences for this object, and the game engine plays the resulting animation clips back on the object. Animation systems also work on still objects such as buildings and structures, which can be destroyed on the impact of an explosion or a vehicle: an animation artist pre-programs the destruction sequence to work with the collision detection and a damage multiplier, and if the damage multiplier is high enough, the wall collapses in the sequence the animator created.

Another type of animation is forward kinematics. Forward kinematics uses the kinematic equations of a jointed structure (originally a robot) to compute the position of the end-effector from specified values for the joint parameters. Kinematics describes the motion of bodies without considering the forces that cause it, and it has two main branches: forward kinematics and inverse kinematics. Forward kinematics is a mapping from joint space Q to Cartesian space W: F(Q) = W. The forward kinematics problem is comparatively simple, because the equations can be derived directly from the joint parameters; it is inverse kinematics, working backwards from a desired position to the joint values, that is harder.

Animation can also be a sequence of drawings or CGI frames, where multiple copies are made with a slight change in each one; played quickly and in order, the images resemble a moving object. This type of animation was the first to be used and existed before electricity was invented, in devices known as phenakistoscopes.

Particle systems are often needed in game engines to create real-world elements such as snow, dust, sand and smoke, and they must be animated in order to work. When a particle system is created, it normally comes with options to change the animation speed and technique: for example, if the world creator wants to use snow, they might want it light, heavy, thick or thin, and those options are changed in the game engine's system editor.
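
As a minimal sketch of the F(Q) = W mapping (the function name, link lengths and example angles are all made up for illustration), here is forward kinematics for a simple two-link arm in a flat plane: the joint angles Q are accumulated down the chain to give the Cartesian position W of the end of the arm.

```python
import math

def forward_kinematics(joint_angles, link_lengths):
    """Map joint space Q (angles) to Cartesian space W (end-effector position)
    for a simple planar arm: F(Q) = W."""
    x, y, total_angle = 0.0, 0.0, 0.0
    for angle, length in zip(joint_angles, link_lengths):
        total_angle += angle                 # angles accumulate down the chain
        x += length * math.cos(total_angle)  # each link extends from the previous joint
        y += length * math.sin(total_angle)
    return x, y

# Example: shoulder at 45 degrees, elbow at -30 degrees, links of 0.5 m each
print(forward_kinematics([math.radians(45), math.radians(-30)], [0.5, 0.5]))
```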

Systems:

The systems are the parts of the game engine that control the actual gameplay; they are monitored by the engine to keep the game working correctly and include AI functions, CPU usage, the graphics systems and the frame rate. The system is the program that allows the game engine to function, and it often includes effects or pre-programmed scripts that help it do so, such as subsurface scattering, caustics and networking. Subsurface scattering, included in some game engines, is the effect that allows light to travel through translucent objects and water: the light is scattered inside the material and comes out softened, giving a blurred glow. Subsurface scattering systems are included in CryEngine and Unreal Engine. Caustics occur when light is projected on and off objects, using photons to create mirrors, reflections, flares and concentrated light sources. Networking is where multiple people or servers connect to the game to provide multiplayer gameplay; the network must connect at least two different devices or the networking would be pointless. The data is transferred over the connection and tells the game engine what should happen on screen as a result of another player or game object.
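
To make the networking idea concrete, here is a very small, hypothetical sketch (the port number and the state fields are invented for the example): two UDP sockets on one machine stand in for two connected devices, one sending its player's state and the other receiving it so it can update what appears on screen.

```python
import json
import socket

# One socket stands in for the "server", the other for a second player's machine.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 9999))

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# The client sends its player's state; in a real engine this happens every tick.
state = {"player": "P2", "x": 12.5, "y": 3.0, "action": "fire"}
client.sendto(json.dumps(state).encode(), ("127.0.0.1", 9999))

# The server receives it and can now update what the other player sees on screen.
data, addr = server.recvfrom(1024)
print("received from", addr, ":", json.loads(data))

client.close()
server.close()
```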

Artificial Intelligence:

AI is code written and designed for a computer-controlled character or behaviour, such as a race car in a game like Need for Speed, or the way a neutral character acts in a game like Skyrim, where a guard has a set route that changes on certain days or after an act of the player. Artificial intelligence does not only apply to characters in a game; it can also be applied to scenery, buildings and other objects. It is more commonly applied to characters, though, with smaller coding scripts normally used for inanimate objects. Most AI movement uses path-based coding, where an object such as a person or a vehicle follows a path. A path is a set of waypoints linked together in an order that makes the character seem to be walking or driving along a set line. The waypoints can be attached to an environmental object such as a car or a wall for cover, and the waypoints themselves are always untextured, invisible objects. World navigation is used when the player, or another character, has to move around the world and one route is not possible: the game calculates a way, or the start of a way, to get the player back on track, usually with a minimap, a GPS-style route or an arrow showing where to go. AI-controlled characters use the same method when they need to follow the player or calculate the easiest route to reach them. Most characters are individually programmed, but more recent games let the AI and the engine create the route without it needing to be programmed by hand.
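
A minimal sketch of the waypoint idea, with the function name and patrol route made up for illustration: each frame the character takes a small step towards its current waypoint, and switches to the next one once it is close enough.

```python
import math

def advance_along_route(position, waypoints, current, speed, dt):
    """Move toward waypoints[current]; switch to the next waypoint once it is reached."""
    if current >= len(waypoints):
        return position, current                   # route finished
    target = waypoints[current]
    dx, dy = target[0] - position[0], target[1] - position[1]
    distance = math.hypot(dx, dy)
    if distance <= speed * dt:                     # close enough: snap to it, move on
        return target, current + 1
    return (position[0] + dx / distance * speed * dt,
            position[1] + dy / distance * speed * dt), current

# A guard patrolling three cover points, updated once per 1/60 s frame
pos, index = (0.0, 0.0), 0
route = [(5.0, 0.0), (5.0, 5.0), (0.0, 5.0)]
for _ in range(600):                               # 10 seconds of simulation
    pos, index = advance_along_route(pos, route, index, speed=2.0, dt=1 / 60)
print(pos, index)                                  # ends at the last waypoint
```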

Middleware:

Middleware is software that sits between the game engine and components such as physics; game designers describe it as "software glue" because it helps piece things such as 3D models and in-game assets together. Middleware is not an operating system, a game engine or part of a management system; it is a smaller piece of software that works with the game engine and helps designers work more easily by linking the assets they need into the engine from their files. Overall, middleware is what designers use to handle the input/output and communication between parts of a game, so they do not have to spend time programming the physics and the game engine together themselves.
