Wednesday, 19 June 2013


6. Constraints

Polygon Count and File Size
The two common measurements of an object's 'cost' or file size are the polygon count and the vertex count. For example, a game character may range anywhere from 200-300 polygons to 40,000+ polygons. A high-end third-person console or PC game may use many vertices or polygons per character, while an iOS tower defence game might use very few per character.

Polygons Vs. Triangles
When a game artist talks about the poly count of a model, they really mean the triangle count. Games almost always use triangles, not polygons, because most modern graphics hardware is built to accelerate the rendering of triangles.
The polygon count reported in a modelling app is always misleading, because the model's triangle count is higher. It's usually best, therefore, to switch the polygon counter to a triangle counter in your modelling app, so you're using the same counting method as everyone else.
Polygons, however, do have a useful purpose in game development. A model made mostly of four-sided polygons (quads) will work well with the edge-loop selection and transform methods that speed up modelling, make it easier to judge the "flow" of a model, and make it easier to weight a skinned model to its bones. Artists usually preserve these polygons in their models as long as possible. When a model is exported to a game engine, the polygons are all converted into triangles automatically. However, different tools will create different triangle layouts within those polygons. A quad can end up either as a "ridge" or as a "valley" depending on how it's triangulated. Artists need to carefully examine a new model in the game engine to see whether the triangle edges are turned the way they wish; if not, specific polygons can then be triangulated manually.

Triangle Count vs. Vertex Count
Vertex count is ultimately more important for performance and memory than triangle count, but for historical reasons artists more commonly use triangle count as a performance measurement. On the most basic level, the triangle count and the vertex count can be similar if all the triangles are connected to one another: 1 triangle uses 3 vertices, 2 triangles use 4 vertices, 3 triangles use 5 vertices, 4 triangles use 6 vertices, and so on. However, seams in UVs, changes to shading/smoothing groups, and material changes from triangle to triangle are all treated as a physical break in the model's surface when the model is rendered by the game. The vertices must be duplicated at these breaks, so the model can be sent in renderable chunks to the graphics card.
Overuse of smoothing groups, excessive UV splitting, too many material assignments (and too much misalignment of these three properties) all lead to a much larger vertex count. This can stress the transform stages for the model, slowing performance, and it can increase the memory cost for the mesh because there are more vertices to send and store.
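As a rough sketch of why these breaks inflate the vertex count, here is a small Python illustration; the (position, uv, shading) tuple layout and the two hand-built triangle lists are invented for the example, not any engine's real format.

    # A GPU-style vertex is the unique combination of position, UV and
    # shading value, so a UV seam or hard edge duplicates vertices there.
    def renderable_vertex_count(corners):
        # corners: one (position, uv, shading) tuple per triangle corner
        return len(set(corners))

    # Two triangles sharing an edge, with continuous UVs: 4 vertices.
    shared = [
        ((0, 0, 0), (0.0, 0.0), "s1"), ((1, 0, 0), (1.0, 0.0), "s1"), ((0, 1, 0), (0.0, 1.0), "s1"),
        ((1, 0, 0), (1.0, 0.0), "s1"), ((1, 1, 0), (1.0, 1.0), "s1"), ((0, 1, 0), (0.0, 1.0), "s1"),
    ]
    print(renderable_vertex_count(shared))  # 4

    # The same geometry with a UV seam along the shared edge: 6 vertices.
    seam = shared[:3] + [
        ((1, 0, 0), (0.5, 0.0), "s1"), ((1, 1, 0), (1.0, 0.5), "s1"), ((0, 1, 0), (0.5, 1.0), "s1"),
    ]
    print(renderable_vertex_count(seam))    # 6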


Rendering Time
Rendering is the final process of creating the actual 2D image or animation from the prepared scene. This can be compared to taking a photo or filming the scene after the setup is finished in real life. Several different, and often specialised, rendering methods have been developed, ranging from the distinctly non-realistic wireframe rendering through polygon-based rendering to more advanced techniques such as scanline rendering, ray tracing and radiosity. Rendering may take from fractions of a second to days for a single image/frame. In general, different methods are better suited to either photo-realistic rendering or real-time rendering.

Real-time
Rendering for interactive media, such as games and simulations, is calculated and displayed in real time, at rates of approximately 20 to 120 frames per second. In real-time rendering, the goal is to show as much information as the eye can process in a fraction of a second, i.e. in one frame: the aim is as high a degree of photorealism as possible at an acceptable minimum rendering speed (usually 24 frames per second, the minimum the human eye needs to create the illusion of movement). In fact, the way the eye perceives the world can be exploited, so the final image presented is not necessarily a true image of the real world, but one close enough for the human eye to tolerate. Rendering software may simulate such visual effects as lens flares, depth of field or motion blur; these are attempts to simulate visual phenomena resulting from the optical characteristics of cameras and of the human eye, and they can lend an element of realism to a scene even if the effect is merely a simulated artefact of a camera. This is the basic method employed in games, interactive worlds and VRML. The rapid increase in computer processing power has allowed a progressively higher degree of realism even for real-time rendering, including techniques such as HDR rendering. Real-time rendering is often polygonal and aided by the computer's GPU.
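As a quick worked example of what those frame rates mean as a per-frame time budget (plain Python, nothing engine-specific):

    def frame_budget_ms(fps):
        # Milliseconds available to render one frame at a given frame rate.
        return 1000.0 / fps

    print(frame_budget_ms(24))   # ~41.7 ms - the bare minimum for the illusion of movement
    print(frame_budget_ms(60))   # ~16.7 ms - a common real-time target
    print(frame_budget_ms(120))  # ~8.3 ms - the top of the range quoted above

Everything the renderer does for a frame - transforms, lighting, rasterisation, post effects - must fit inside that budget.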

Non Real-time
Animations for non-interactive media, such as feature films and video, are rendered much more slowly. Non-real-time rendering enables the leveraging of limited processing power to obtain higher image quality. Rendering times for individual frames may vary from a few seconds to several days for complex scenes. Rendered frames are stored on a hard disk, then can be transferred to other media such as motion picture film or optical disk. These frames are then displayed sequentially at high frame rates, typically 24, 25, or 30 frames per second, to achieve the illusion of movement.

When the goal is photo-realism, techniques such as ray tracing or radiosity are employed. This is the basic method employed in digital media and artistic works. Techniques have been developed for the purpose of simulating other naturally-occurring effects, such as the interaction of light with various forms of matter. Examples of such techniques include particle systems (which can simulate rain, smoke, or fire), volumetric sampling (to simulate fog, dust and other spatial atmospheric effects), caustics (to simulate light focusing by uneven light-refracting surfaces, such as the light ripples seen on the bottom of a swimming pool), and subsurface scattering (to simulate light reflecting inside the volumes of solid objects such as human skin).

The rendering process is computationally expensive, given the complex variety of physical processes being simulated. Computer processing power has increased rapidly over the years, allowing for a progressively higher degree of realistic rendering. Film studios that produce computer-generated animations typically make use of a render farm to generate images in a timely manner. However, falling hardware costs mean that it is entirely possible to create small amounts of 3D animation on a home computer system. The output of the renderer is often used as only one small part of a completed motion-picture scene. Many layers of material may be rendered separately and integrated into the final shot using compositing software.

A renderer must also model how light interacts with the surface at a given point (reflection/scattering) and how material properties vary across the surface (shading).






5. 3D Development Software

3D Studio Max

Autodesk 3ds Max, formerly 3D Studio Max, is 3D computer graphics software for making 3D animations, models, and images. It was developed and produced by Autodesk Media and Entertainment. It has modelling capabilities and a flexible plugin architecture, and runs on the Microsoft Windows platform. It is frequently used by video game developers, TV commercial studios and architectural visualization studios. It is also used for movie effects and movie pre-visualization.



In addition to its modelling and animation tools, the latest version of 3ds Max also features shaders (such as ambient occlusion and subsurface scattering), dynamic simulation, particle systems, radiosity, normal map creation and rendering, global illumination, a customizable user interface, and its own scripting language.






Maya

Autodesk Maya, commonly shortened to Maya, is 3D computer graphics software that runs on Microsoft Windows, Mac OS and Linux. It was originally developed by Alias Systems Corporation (formerly Alias|Wavefront) and is currently owned and developed by Autodesk, Inc. It is used to create interactive 3D applications, including video games, animated films, TV series and visual effects. The product is named after the Sanskrit word Maya, the Hindu concept of illusion.



Maya 1.0 was released in February 1998. Following a series of acquisitions, Maya was bought by Autodesk in 2005.[8][9] Under the name of the new parent company, Maya was renamed Autodesk Maya. However, the name "Maya" continues to be the dominant name used for the product.






LightWave


LightWave is a software package used for rendering 3D images, both animated and static. It includes a rendering engine that supports such advanced features as realistic reflection and refraction, radiosity, and caustics. The 3D modelling component supports both polygon modelling and subdivision surfaces. The animation component has features such as forward and inverse kinematics for character animation, particle systems and dynamics. Programmers can expand LightWave's capabilities using an included SDK which offers LScript scripting (a proprietary scripting language) and common C language interfaces.


Blender

Blender is a free and open-source 3D computer graphics software product used for creating animated films, visual effects, interactive 3D applications or video games. Blender's features include 3D modeling, UV unwrapping, texturing, rigging and skinning, fluid and smoke simulation, particle simulation, animating, match moving, camera tracking, rendering, video editing and compositing. It also features a built-in game engine.





Cinema 4D


CINEMA 4D is a 3D modelling, animation and rendering application developed by MAXON Computer GmbH of Friedrichsdorf, Germany. It is capable of procedural and polygonal/subdivision modelling, animating, lighting, texturing and rendering, and offers the common features found in 3D modelling applications.

Four variants are currently available from MAXON: the core CINEMA 4D 'Prime' application; a 'Broadcast' version with additional motion-graphics features; 'Visualize', which adds functions for architectural design; and 'Studio', which includes all modules. CINEMA 4D runs on Windows and Macintosh computers.



Initially, CINEMA 4D was developed for Amiga computers in the early 1990s, and the first three versions of the program were available exclusively for that platform. With v4, however, MAXON began to develop the program for Windows and Macintosh computers as well, citing the wish to reach a wider audience and the growing instability of the Amiga market following Commodore's bankruptcy.






http://cinema-4d.en.softonic.com/




ZBrush


ZBrush is a digital sculpting tool that combines 3D/2.5D modeling, texturing and painting. It uses a proprietary "pixol" technology which stores lighting, colour, material, and depth information for all objects on the screen. The main difference between ZBrush and more traditional modelling packages is that it is more akin to sculpting.



ZBrush is used as a digital sculpting tool to create high-resolution models (up to ten million polygons) for use in movies, games, and animations. It is used by companies ranging from ILM to Electronic Arts. ZBrush uses dynamic levels of resolution to allow sculptors to make global or local changes to their models. ZBrush is best known for being able to sculpt medium- to high-frequency details that were traditionally painted in bump maps. The resulting mesh details can then be exported as normal maps to be used on a low-poly version of that same model. They can also be exported as a displacement map, although in that case the lower-poly version generally requires more resolution. Or, once completed, the 3D model can be projected to the background, becoming a 2.5D image (upon which further effects can be applied). Work can then begin on another 3D model which can be used in the same scene. This feature lets users work with extremely complicated scenes without heavy processor overhead.




Sketchup
SketchUp is a 3D modelling program for a broad range of applications such as architectural, civil, mechanical, film and video game design, and it is available in free as well as 'professional' versions.

The program highlights its ease of use,[4] and an online repository of model assemblies (e.g., windows, doors, automobiles, entourage, etc.) known as 3D Warehouse enables designers to locate, download, use and contribute free models. The program includes a drawing layout functionality, allows surface rendering in variable "styles," accommodates third-party "plug-in" programs enabling other capabilities (e.g., near photo realistic rendering) and enables placement of its models within Google Earth.



File Formats
Each 3D application allows the user to save their work, both objects and scenes, in a proprietary file format, and to export it in open formats.

A proprietary format is a file format where the mode of presentation of its data is the intellectual property of an individual or organisation which asserts ownership over the format. In contrast, a free format is a format that is either not recognised as intellectual property, or has had all claimants to its intellectual property release claims of ownership. Proprietary formats can be either open if they are published, or closed, if they are considered trade secrets. In contrast, a free format is never closed.
Proprietary formats are typically controlled by a private person or organization for the benefit of its applications, protected with patents or as trade secrets, and intended to give the license holder exclusive control of the technology to the (current or future) exclusion of others.

____________________________________________________________________________
 


4. Mesh Construction


Although it is possible to construct a mesh by manually specifying vertices and faces, it is much more common to build meshes using a variety of tools. A wide variety of 3D graphics software packages are available for constructing polygon meshes.


Box Modelling

One of the more popular methods of constructing meshes is box modelling, which uses two simple tools:

1. The subdivide tool splits faces and edges into smaller pieces by adding new vertices. For example, a square would be subdivided by adding one vertex in the center and one on each edge, creating four smaller squares.

2. The extrude tool is applied to a face or a group of faces. It creates a new face of the same size and shape which is connected to each of the existing edges by a face. Thus, performing the extrude operation on a square face would create a box protruding from the surface at the location of the face. (Both tools are sketched below.)
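A minimal Python sketch of both tools, using a deliberately tiny representation (bare tuples of corner points) invented for this illustration:

    def subdivide_quad(quad):
        # quad: four corner points in order around the face.
        a, b, c, d = quad
        mid = lambda p, q: tuple((pi + qi) / 2.0 for pi, qi in zip(p, q))
        ab, bc, cd, da = mid(a, b), mid(b, c), mid(c, d), mid(d, a)
        centre = mid(ab, cd)
        # One new vertex in the centre and one on each edge: four smaller quads.
        return [(a, ab, centre, da), (ab, b, bc, centre),
                (centre, bc, c, cd), (da, centre, cd, d)]

    def extrude_quad(quad, normal, distance):
        # Offset a copy of the face along its normal...
        top = [tuple(p[i] + normal[i] * distance for i in range(3)) for p in quad]
        # ...and connect each original edge to the copy with a new side face.
        sides = [(quad[i], quad[(i + 1) % 4], top[(i + 1) % 4], top[i]) for i in range(4)]
        return [tuple(top)] + sides

    unit = ((0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0))
    print(len(subdivide_quad(unit)))              # 4 smaller faces
    print(len(extrude_quad(unit, (0, 0, 1), 1)))  # 5 new faces boxing in the original

Repeatedly subdividing and extruding a starting box in this way, then moving the new vertices, is the essence of box modelling.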














 
Extrusion Modelling

A second common modelling method is sometimes referred to as inflation modelling or extrusion modelling. In this method, the user creates a 2D shape which traces the outline of an object from a photograph or a drawing. The user then uses a second image of the subject from a different angle and extrudes the 2D shape into 3D, again following the shape's outline. This method is especially common for creating faces and heads. In general, the artist will model half of the head and then duplicate the vertices, invert their location relative to some plane, and connect the two pieces together; this ensures that the model will be symmetrical (a sketch of the mirroring step follows).
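A minimal Python sketch of that duplicate-and-invert step, assuming the symmetry plane is x = 0 and a simple vertex-list/face-list layout invented for the example:

    def mirror_half(vertices, faces):
        # Duplicate every vertex with its X coordinate inverted (mirror across x = 0).
        mirrored = [(-x, y, z) for (x, y, z) in vertices]
        n = len(vertices)
        # Reverse each copied face's vertex order so its normal still points outwards.
        mirrored_faces = [tuple(reversed([i + n for i in face])) for face in faces]
        return vertices + mirrored, faces + mirrored_faces

Vertices lying exactly on the seam (x == 0) end up doubled, and would then be welded together; modelling apps typically offer this as a merge-by-distance style operation.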











Primitive Modelling

Another common method of creating a polygonal mesh is by connecting together various primitives, which are predefined polygonal meshes created by the modelling environment. Common primitives include the following (a sketch generating one appears after the list):

Cubes
Pyramids
Cylinders
Spheres
2D primitives, such as squares, triangles, and disks
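To illustrate what a primitive amounts to internally, here is a sketch that generates a unit cube as a vertex list plus six quad faces; the layout is an assumption for the example, not any particular package's format.

    from itertools import product

    def unit_cube():
        # The eight corners of an axis-aligned unit cube.
        vertices = list(product((0.0, 1.0), repeat=3))
        # Six quad faces, each indexing into the vertex list above.
        faces = [
            (0, 1, 3, 2), (4, 6, 7, 5),  # x = 0 and x = 1 sides
            (0, 4, 5, 1), (2, 3, 7, 6),  # y = 0 and y = 1 sides
            (0, 2, 6, 4), (1, 5, 7, 3),  # z = 0 and z = 1 sides
        ]
        return vertices, faces

    verts, faces = unit_cube()
    print(len(verts), len(faces))  # 8 vertices, 6 faces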











Specialised Modelling

Finally, some specialised methods of constructing high- or low-detail meshes exist. Sketch-based modelling is a user-friendly interface for constructing low-detail models quickly, while 3D scanners can be used to create high-detail meshes based on existing real-world objects in an almost automatic way. These devices are very expensive and are generally used only by researchers and industry professionals, but they can generate digital representations with sub-millimetre accuracy.




 



 _________________________________________________________________




3. Geometric Theory


Geometry

3D computer graphics employ the same principles found in 2D vector artwork, but use a further axis. When creating 2D vector artwork, the computer draws the image by plotting points on the X and Y axes (creating coordinates) and joining these points with paths (lines). The resulting shapes can be filled with colour, and the lines stroked with colour and thickness, if required.

Cartesian Coordinates System


3D programs operate on a grid of 3D co-ordinates. These are much the same as 2D co-ordinates, except there's a third axis, known as the Z or 'depth' axis.
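As a small illustration of the extra axis, measuring distance works identically in 2D and 3D, just with one more term (plain Python):

    def distance(p, q):
        # Euclidean distance: square root of the sum of squared differences.
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

    print(distance((0, 0), (3, 4)))         # 5.0  - 2D, X and Y only
    print(distance((0, 0, 0), (3, 4, 12)))  # 13.0 - 3D, with a Z term added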






Geometric Theory and Polygons
The basic object used in mesh modelling is a vertex, a point in three-dimensional space. Two vertices connected by a straight line become an edge. Three vertices, connected to each other by three edges, define a triangle, which is the simplest polygon in Euclidean space. More complex polygons can be created out of multiple triangles, or as a single object with more than three vertices. Four-sided polygons (generally referred to as quads) and triangles are the most common shapes used in polygonal modelling. A group of polygons, connected to each other by shared vertices, is generally referred to as an element, and each of the polygons making up an element is called a face.
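That vocabulary maps directly onto a very small data structure. A sketch, with names invented for the example:

    # Vertices are points in 3D space; faces index into the vertex list;
    # edges can be derived from each face's consecutive vertex pairs.
    vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
    faces = [(0, 1, 2), (0, 2, 3)]  # two triangles forming a quad-shaped element

    def edges_of(face):
        # Each pair of consecutive vertices in a face defines an edge.
        return [tuple(sorted((face[i], face[(i + 1) % len(face)])))
                for i in range(len(face))]

    print(set(edges_of(faces[0])) & set(edges_of(faces[1])))  # {(0, 2)} - the shared edge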

In Euclidean geometry, any three non-collinear points determine a plane. For this reason, triangles always inhabit a single plane. This is not necessarily true of more complex polygons, however. The flat nature of triangles makes it simple to determine their surface normal, a three-dimensional vector perpendicular to the triangle's surface. Surface normals are useful for determining light transport in ray tracing.
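A minimal sketch of computing a triangle's unit surface normal from its three vertices, using the cross product of two edge vectors (plain Python, no particular library):

    def surface_normal(a, b, c):
        # Two edge vectors of the triangle...
        u = [b[i] - a[i] for i in range(3)]
        v = [c[i] - a[i] for i in range(3)]
        # ...whose cross product is perpendicular to the triangle's plane.
        n = [u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0]]
        length = sum(x * x for x in n) ** 0.5
        return [x / length for x in n]  # normalised to unit length

    print(surface_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # [0.0, 0.0, 1.0]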

A group of polygons which are connected by shared vertices is referred to as a mesh, often also called a wireframe model.





In order for a mesh to appear attractive when rendered, it is desirable that it be non-self-intersecting, meaning that no edge passes through a polygon. Another way of looking at this is that the mesh cannot pierce itself. It is also desirable that the mesh not contain any errors such as doubled vertices, edges, or faces. For some purposes it is important that the mesh be a manifold – that is, that it does not contain holes or singularities (locations where two distinct sections of the mesh are connected by a single vertex).
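One of those error checks is easy to sketch: finding doubled vertices. This naive O(n^2) version is for illustration only; real tools use spatial hashing or sorting to stay fast on large meshes.

    def doubled_vertices(vertices, tolerance=1e-6):
        # Report pairs of distinct vertices occupying (almost) the same position.
        pairs = []
        for i in range(len(vertices)):
            for j in range(i + 1, len(vertices)):
                if all(abs(a - b) <= tolerance for a, b in zip(vertices[i], vertices[j])):
                    pairs.append((i, j))
        return pairs

    print(doubled_vertices([(0, 0, 0), (1, 0, 0), (0, 0, 0)]))  # [(0, 2)]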




Primitives
In 3D applications, pre-made objects of various shapes can be used to build models. The most basic of these shapes are the standard primitive objects, or common primitives; they range from the basic cube or box to spheres, cylinders, pyramids (both triangular and square based) and cones. They are used as the starting point for modelling, and they can be edited once created.



http://www.webreference.com/3d/cararra/3.html

Surfaces
Polygons can be defined as specific surfaces and then have colour, texture or photographic maps added to them to create the desired look. Such a map is laid out as if the object has been unwrapped.



 



__________________________________________________________________________


2. Displaying 3D Polygon Animations


API


API, an abbreviation of application programming interface, is a set of routines, protocols, and tools for building software applications. A good API makes it easier to develop a program by providing all the building blocks; a programmer then puts the blocks together.

Most operating environments, such as MS-Windows, provide an API so that programmers can write applications consistent with the operating environment. Although APIs are designed for programmers, they are ultimately good for users because they guarantee that all programs using a common API will have similar interfaces. This makes it easier for users to learn new programs.



Direct3D

An API for manipulating and displaying three-dimensional objects. Developed by Microsoft, Direct3D provides programmers with a way to develop 3-D programs that can utilize whatever graphics acceleration device is installed in the machine. Virtually all 3-D accelerator cards for PCs support Direct3D.

http://www.webopedia.com/TERM/D/Direct3D.html

OpenGL

A 3-D graphics language developed by Silicon Graphics. There are two main implementations: Microsoft OpenGL, developed by Microsoft and Cosmo OpenGL, developed by Silicon Graphics. Microsoft OpenGL is built into Windows NT and is designed to improve performance on hardware that supports the OpenGL standard. Cosmo OpenGL, on the other hand, is a software-only implementation specifically designed for machines that do not have a graphics accelerator.


Graphics Pipeline

In 3D computer graphics, the terms graphics pipeline and rendering pipeline most commonly refer to the way in which the 3D mathematical information contained within the objects and scenes is converted into images and video. The graphics pipeline typically accepts some representation of a three-dimensional primitive as input and produces a 2D raster image as output. OpenGL and Direct3D are two notable 3D graphics standards, both describing very similar graphics pipelines.

Stages of the graphics pipeline

Per-vertex lighting and shading

Geometry in the complete 3D scene is lit according to the defined locations of light sources, reflectance, and other surface properties. Some (mostly older) hardware implementations of the graphics pipeline compute lighting only at the vertices of the polygons being rendered. The lighting values between vertices are then interpolated during rasterization. Per-fragment or per-pixel lighting, as well as other effects, can be done on modern graphics hardware as a post-rasterization process by means of a shader program. Modern graphics hardware also supports per-vertex shading through the use of vertex shaders.
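A minimal sketch of the per-vertex step, using the simplest (Lambertian diffuse) light model purely as an illustration; both vectors are assumed to be unit length:

    def lambert(normal, light_dir):
        # Diffuse term: the cosine of the angle between the vertex normal
        # and the direction towards the light, clamped at zero.
        return max(sum(n * l for n, l in zip(normal, light_dir)), 0.0)

    # Lit per vertex; the rasteriser then interpolates these values
    # across each triangle (Gouraud shading).
    vertex_normals = [(0, 0, 1), (0.707, 0, 0.707), (1, 0, 0)]
    light_dir = (0, 0, 1)  # direction from the surface towards the light
    print([round(lambert(n, light_dir), 3) for n in vertex_normals])  # [1.0, 0.707, 0.0]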

Clipping

Geometric primitives that now fall completely outside of the viewing frustum will not be visible and are discarded at this stage.
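A sketch of the trivial-rejection part of this stage, testing only the near and far planes for brevity (the plane values and the camera-space convention are assumptions for the example):

    def completely_outside(triangle, near=0.1, far=1000.0):
        # A primitive can be discarded outright only if every vertex
        # fails the same plane test.
        zs = [v[2] for v in triangle]
        return all(z < near for z in zs) or all(z > far for z in zs)

    print(completely_outside([(0, 0, 0.05), (1, 0, 0.01), (0, 1, 0.09)]))  # True - behind the near plane
    print(completely_outside([(0, 0, 1.0), (1, 0, 2000.0), (0, 1, 5.0)]))  # False - straddles the far plane

Primitives that straddle a plane are not discarded but clipped, producing new vertices along the plane.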

Projection Transformation

In the case of a Perspective projection, objects which are distant from the camera are made smaller. This is achieved by dividing the X and Y coordinates of each vertex of each primitive by its Z coordinate (which represents its distance from the camera). In an orthographic projection, objects retain their original size regardless of distance from the camera.
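A minimal sketch of that divide (deliberately simplified; real pipelines express it as a 4x4 projection matrix applied to homogeneous coordinates):

    def perspective_project(vertex, focal_length=1.0):
        x, y, z = vertex
        # Distant vertices (larger z) land closer to the centre of the image.
        return (focal_length * x / z, focal_length * y / z)

    print(perspective_project((1.0, 1.0, 1.0)))   # (1.0, 1.0)
    print(perspective_project((1.0, 1.0, 10.0)))  # (0.1, 0.1) - same offset, ten times further away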

Viewport Transformation

The post-clip vertices are transformed once again to be in window space. In practice, this transform is very simple: applying a scale (multiplying by the width of the window) and a bias (adding to the offset from the screen origin). At this point, the vertices have coordinates which directly relate to pixels in a raster.
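A sketch of that scale and bias, mapping normalised device coordinates (x and y in [-1, 1], a convention borrowed from OpenGL for this example) to pixel coordinates:

    def viewport_transform(ndc_x, ndc_y, width, height):
        # Scale by half the window size, then bias to the window origin.
        px = (ndc_x + 1.0) * 0.5 * width
        py = (1.0 - (ndc_y + 1.0) * 0.5) * height  # flip Y: raster origin is top-left
        return (px, py)

    print(viewport_transform(0.0, 0.0, 800, 600))   # (400.0, 300.0) - centre of the window
    print(viewport_transform(-1.0, 1.0, 800, 600))  # (0.0, 0.0) - top-left corner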

Scan Conversion or Rasterisation

Rasterisation is the process by which the 2D image-space representation of the scene is converted into raster format and the correct resulting pixel values are determined. From this point on, operations are carried out on each single pixel. This stage is rather complex, involving multiple steps often referred to as a group under the name of the pixel pipeline.
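A heavily simplified sketch of scan conversion for a single 2D triangle, testing each pixel centre against the triangle's three edges (real rasterisers are far more involved, as noted above):

    def edge(a, b, p):
        # Signed-area test: which side of the edge a->b the point p lies on.
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

    def rasterise(tri, width, height):
        covered = []
        for y in range(height):
            for x in range(width):
                p = (x + 0.5, y + 0.5)  # sample at the pixel centre
                w0 = edge(tri[1], tri[2], p)
                w1 = edge(tri[2], tri[0], p)
                w2 = edge(tri[0], tri[1], p)
                if w0 >= 0 and w1 >= 0 and w2 >= 0:  # inside a counter-clockwise triangle
                    covered.append((x, y))
        return covered

    print(len(rasterise(((0, 0), (8, 0), (0, 8)), 8, 8)))  # 36 pixels covered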

Texturing, Fragment Shading

At this stage of the pipeline, individual fragments (or pre-pixels) are assigned a colour based on values interpolated from the vertices during rasterisation, from a texture in memory, or from a shader program.
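A sketch of that interpolation, blending the three vertex colours with barycentric weights (for instance the edge-function values from the rasterisation sketch above):

    def interpolate_colour(weights, colours):
        # Barycentric blend: each vertex colour contributes in proportion to its weight.
        total = sum(weights)
        return tuple(sum(w * c[k] for w, c in zip(weights, colours)) / total
                     for k in range(3))

    # A fragment exactly in the middle of a red/green/blue triangle:
    print(interpolate_colour((1, 1, 1), [(1, 0, 0), (0, 1, 0), (0, 0, 1)]))  # ~(0.33, 0.33, 0.33)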

Display

The final coloured pixels can then be displayed on a computer monitor or other display.

http://en.wikipedia.org/wiki/Graphics_pipeline

 _______________________________________________________________________