What is rendering?
In our case, we are talking about transforming a three-dimensional scene into a static picture or into a sequence of frames (many consecutive frames saved one after another, if we are talking about rendering animation). In programs for creating 3D content (such as 3ds Max, Cinema 4D, SketchUp, etc.), scenes are rendered using mathematical calculations. A render, accordingly, is an image obtained through mathematical calculations on a PC.
Rendering is one of the main subtopics of 3D computer graphics, and in practice it is always linked to the others. In the graphics pipeline it is the last major step, giving the final look to any 3D scene. With the growing demand for computer graphics since the 1970s, it has become a distinct subject in its own right.
Scope of application
Scene rendering is used in computer video games, simulations, films, commercials, television special effects, and architectural 3D visualization. Each field of activity uses a different balance of functions and calculation methods. Let's take a look at a few examples of using rendering in more detail:
In this ad, the manufacturer replaced the actual pack of chips with a 3D model and rendered it. This saved a lot of time in producing the commercial for different markets: since the packaging looks different from country to country, there is no need to shoot hundreds of takes with different packaging variations. One video clip is enough, and any packaging can then be placed into it.
Now anyone and anything can be brought to life on the TV screen. No mockups, mannequins, wigs, or makeup are needed: a 3D model with subsequent rendering saves the time and money required to create special effects.
A rendering by the Viarde studio, made for a furniture factory. Manufacturers of furniture, lighting, appliances, and so on no longer need to pay expensive photo studios to present their products in the best possible way: 3D visualization studios will do it in a few days and at a much lower cost.
Rendering systems
The rendering systems that 3D editors use to produce visualizations are either built into the program or installed separately as external plug-ins. External rendering systems usually offer better rendering quality than built-in ones, because they are developed independently of the 3D editor: the team works only on improving the renderer itself, without being distracted by the rest of the editor, and so has more time and resources to make its product the best on the market. The downside is that, unlike built-in rendering systems, external ones usually have to be paid for separately.
Internally, a renderer is an elaborate program that draws on a mixture of disciplines: the physics of light, visual perception, mathematics, and software engineering.
In 3D graphics, rendering can be done slowly and ahead of time (pre-rendering) or in real time.
Pre-rendering is used in environments where speed is not a concern; image calculations are typically performed on multi-core CPUs rather than dedicated graphics hardware. This technique is mainly used in animation and visual effects, where photorealism needs to be at the highest level.
Real-time rendering is used in interactive graphics and games, where images must be produced at a fast pace. Because the user interacts with such environments continuously, images must be generated in real time. Dedicated graphics hardware and pre-computation of available information have greatly improved real-time rendering performance.
Rendering in architectural 3D visualization
Today, the most popular high-quality systems for architectural 3D visualization are V-Ray and Corona Renderer. Both are owned by the same developer, Chaos Group (Bulgaria).
V-Ray appeared back in 2000 and has proven itself in many areas of visualization thanks to its flexibility and a wide range of tools that fit into the workflows of different studios, whether animation or architecture companies.
The main advantages of V-Ray:
1. Support for network rendering across multiple computers.
2. A very wide range of settings for different 3D graphics tasks.
3. A huge set of materials.
4. Support for a large set of render passes for compositing images or video.
Corona Renderer is a modern, external, high-performance photorealistic renderer available for Autodesk 3ds Max and MAXON Cinema 4D. Development of Corona Renderer started back in 2009 as a solo student project by Ondřej Karlík at the Czech Technical University in Prague. Corona has since grown into a full-time commercial project, after Ondřej founded a company together with former CG artist Adam Hotový and Jaroslav Křivánek, assistant professor and researcher at Charles University in Prague. In August 2017 the company became part of Chaos Group, allowing further expansion and growth. Despite its young age, Corona Renderer has become a very competitive renderer capable of producing high-quality results.
The main advantage of Corona Renderer is very realistic rendering with simple settings, which makes it well suited to beginners and simple tasks.
Render speed
A rendering system, like any other program installed on a computer, requires certain PC resources to render an image: mainly processor power and RAM. Such rendering systems are called CPU renderers. There are also GPU renderers, which render images using the video card. V-Ray, for example, can render on both the CPU and the GPU.
Rendering time depends on a few basic factors: the complexity of the scene, the number of light sources, and the presence of high-poly models and transparent or reflective materials.
Rendering therefore requires a lot of computing power, and a regular office PC is not suitable for the task. If you are going to render, you need a specially configured computer to make the process fast. Every rendering system has its own settings, some more and some fewer; they can be adjusted to get a picture faster, but at the cost of quality.
The best way to reduce rendering time is to use network rendering or a ready-made online render farm. The render can be distributed among several computers over a local network or the Internet. To do this, every computer participating in the process must have the same rendering program, the same 3D editor, and the same plugins as the main computer from which the render is launched.
History and fundamentals of rendering computation
Over the years, developers have researched many rendering algorithms, and rendering software can use a number of different techniques to obtain the final image. Tracing and rendering every ray of light in a scene would be impractical and far too time-consuming. Even tracing a subset of the rays is enough work to produce an image, yet it still takes too long if the samples (a sample being the rendering of one ray of light) are not limited in some reasonable way.
Thus, four "families" of more efficient light-transport modeling techniques emerged: rasterization, including scanline rendering, which considers the objects in the scene and geometrically projects them to form an image, without advanced optical effects; ray casting, which considers the scene as observed from a specific point of view and calculates the observed image based only on geometry and very basic laws of reflection intensity, possibly using Monte Carlo techniques to reduce artifacts; radiosity, which uses finite element mathematics to simulate the diffuse spreading of light from surfaces; and ray tracing, which is similar to ray casting but employs more advanced optical simulation and usually uses Monte Carlo techniques to obtain more realistic results, at speeds that are often orders of magnitude slower.
Most modern software combines two or more of these techniques to produce reasonably good results in a reasonable amount of time.
Scanline rendering and rasterization
The high-level representation of an image necessarily contains elements other than pixels. These elements are called primitives. For example, in a schematic drawing, lines and curves can be primitives. In a graphical user interface, windows and buttons can be primitives. In 3D rendering, triangles and polygons in space can be primitives.
If a pixel-by-pixel approach to rendering is impractical or too slow for the task at hand, then a primitive-by-primitive approach may be useful. Here, one loops through each of the primitives, determines which pixels in the image it affects, and modifies those pixels accordingly. This is called rasterization, and it is the rendering method used by all current graphics cards.
Rasterization is often faster than pixel-by-pixel rendering. First, large areas of the image may contain no primitives; rasterization ignores these areas, while pixel-by-pixel rendering must pass through them. Second, rasterization can improve cache coherency and reduce redundant work by taking advantage of the fact that the pixels occupied by a single primitive tend to be contiguous in the image. For these reasons, rasterization is usually the appropriate choice when interactive rendering is required; however, the pixel-by-pixel approach can often produce higher-quality images and is more versatile because it does not depend on as many assumptions about the image as rasterization does.
Rasterization comes in two basic forms: either the entire face (primitive) is rendered as a single color, or the vertices of the face are rendered first and the pixels of the face that lie between the vertices are then filled in by simply blending each vertex color into the next. This newer form of rasterization has overtaken the older method because it allows graphics to appear smooth without complex textures. It also means the more sophisticated shading functions of the graphics card can be used while still achieving better performance, since memory on the card is freed up when complex textures are not needed. Sometimes one rasterization method is used for some faces and a different one for others, depending on the angle at which a face meets the other joined faces; this can increase speed while only slightly reducing overall image quality.
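As a rough sketch of the second form just described, the following Python snippet rasterizes a single triangle: it visits only the pixels inside the triangle's bounding box, tests coverage with edge functions, and blends the three vertex colors barycentrically. The names (Vertex, raster_triangle) and the tiny framebuffer are illustrative assumptions, not part of any particular renderer's API.

from dataclasses import dataclass

@dataclass
class Vertex:
    x: float
    y: float
    color: tuple  # (r, g, b), each component in 0..1

def edge(a, b, px, py):
    # Signed area test: >= 0 means the point (px, py) lies on the "inside"
    # half-plane of edge a->b for a counter-clockwise triangle.
    return (b.x - a.x) * (py - a.y) - (b.y - a.y) * (px - a.x)

def raster_triangle(v0, v1, v2, framebuffer):
    height, width = len(framebuffer), len(framebuffer[0])
    area = edge(v0, v1, v2.x, v2.y)
    if area == 0:
        return  # degenerate triangle, covers no pixels
    # Scan only the bounding box of the primitive; rasterization skips
    # empty regions instead of visiting every pixel of the image.
    x_min = max(int(min(v.x for v in (v0, v1, v2))), 0)
    x_max = min(int(max(v.x for v in (v0, v1, v2))) + 1, width)
    y_min = max(int(min(v.y for v in (v0, v1, v2))), 0)
    y_max = min(int(max(v.y for v in (v0, v1, v2))) + 1, height)
    for y in range(y_min, y_max):
        for x in range(x_min, x_max):
            px, py = x + 0.5, y + 0.5  # sample at the pixel center
            w0 = edge(v1, v2, px, py)
            w1 = edge(v2, v0, px, py)
            w2 = edge(v0, v1, px, py)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:  # pixel lies inside the face
                w0, w1, w2 = w0 / area, w1 / area, w2 / area
                framebuffer[y][x] = tuple(
                    w0 * c0 + w1 * c1 + w2 * c2
                    for c0, c1, c2 in zip(v0.color, v1.color, v2.color))

# Usage: one colored triangle in a small 64 x 64 framebuffer.
fb = [[(0.0, 0.0, 0.0)] * 64 for _ in range(64)]
raster_triangle(Vertex(5, 5, (1, 0, 0)),
                Vertex(60, 10, (0, 1, 0)),
                Vertex(30, 55, (0, 0, 1)), fb)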
Ray casting
Ray casting is mainly used for real-time simulations, such as 3D computer games and cartoon animation, where detail is not important or where it is more efficient to fake the details by hand in order to obtain better performance in the computation stage. The resulting images have a characteristic "flat" appearance when no additional techniques are used, as if all objects in the scene were painted with a matte finish or lightly sanded.
The modeled geometry is parsed pixel by pixel, line by line, from the point of view outward, as if rays were cast out from the point of view. Where an object is intersected, the color value at that point may be evaluated using several methods. In the simplest case, the color value of the object at the intersection point becomes the value of that pixel. The color may also be determined from a texture map. A more sophisticated method is to modify the color value by an illumination factor, but without calculating the relationship to a simulated light source. To reduce artifacts, a number of rays in slightly different directions may be averaged.
A rough simulation of optical properties may additionally be used: typically a very simple calculation of the ray from the object to the point of view, another calculation of the angle of incidence of light rays from the light source(s), and, from these together with the specified intensities of the light sources, the pixel value. Alternatively, illumination computed by a radiosity algorithm, or a combination of the two, can be used.
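To make the pixel-by-pixel idea above concrete, here is a minimal ray-casting sketch under assumptions of my own: the scene is a list of spheres, one ray is cast per pixel from the viewpoint, and the color at the first intersection is scaled by a simple angle-of-incidence factor toward a single light, with no bounces and no shadows (hence the "flat" look mentioned earlier). Names such as Sphere and cast_ray are illustrative, not the API of any particular renderer.

import math

class Sphere:
    def __init__(self, center, radius, color):
        self.center, self.radius, self.color = center, radius, color

def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def norm(a):
    length = math.sqrt(dot(a, a))
    return (a[0] / length, a[1] / length, a[2] / length)

def hit_sphere(origin, direction, sphere):
    # Solve |origin + t*direction - center|^2 = r^2 for the nearest t > 0
    # (direction is assumed to be normalized).
    oc = sub(origin, sphere.center)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - sphere.radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def cast_ray(origin, direction, spheres, light_pos):
    # Find the closest intersection along the ray ...
    closest = None
    for s in spheres:
        t = hit_sphere(origin, direction, s)
        if t is not None and (closest is None or t < closest[0]):
            closest = (t, s)
    if closest is None:
        return (0.0, 0.0, 0.0)  # background
    t, s = closest
    point = (origin[0] + t * direction[0],
             origin[1] + t * direction[1],
             origin[2] + t * direction[2])
    normal = norm(sub(point, s.center))
    # ... and shade with a single cosine (angle-of-incidence) term;
    # no shadows and no bounces are computed.
    to_light = norm(sub(light_pos, point))
    lambert = max(dot(normal, to_light), 0.0)
    return tuple(c * lambert for c in s.color)

# Usage: cast one ray per pixel of a small image looking down the -z axis.
scene = [Sphere((0.0, 0.0, -5.0), 1.0, (1.0, 0.2, 0.2))]
light = (5.0, 5.0, 0.0)
width = height = 32
image = [[cast_ray((0.0, 0.0, 0.0),
                   norm(((x + 0.5) / width - 0.5,
                         (y + 0.5) / height - 0.5, -1.0)),
                   scene, light)
          for x in range(width)] for y in range(height)]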
Radiosity
Radiosity is a technique that tries to mimic the way reflected light, instead of simply bouncing off a surface, also illuminates the area around it. This produces more realistic shading and seems to better capture the "ambience" of an indoor scene. A classic example is the way shadows hug the corners of a room.
The optical basis of modeling is that some scattered light from a given point on a given surface is reflected in a wide range of directions and illuminates the area around it.
The simulation can vary in complexity. Many renderings use a very crude estimate of radiosity, simply illuminating the entire scene slightly with a factor known as ambience. However, when an advanced radiosity estimate is combined with high-quality ray tracing, images can exhibit convincing realism, particularly for indoor scenes.
In advanced radiosity simulation, recursive finite element algorithms "bounce" light back and forth between surfaces in the model until some recursion limit is reached. In this way, the coloring of one surface influences the coloring of a neighboring surface, and vice versa. The resulting illumination values for the whole model (sometimes including empty spaces) are stored and used as additional input when performing calculations in a ray-casting or ray-tracing model.
Due to the iterative/recursive nature of the technique, complex objects are particularly slow to simulate. Advanced radiosity calculations may be reserved for computing the ambience of a room from the light reflecting off walls, floors, and ceilings, without examining the contribution that complex objects make to the radiosity; or complex objects may be replaced in the radiosity calculation with simpler objects of similar size and texture.
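As a sketch of the "bouncing" just described, the toy example below iterates the classic radiosity relation (each patch's brightness is its own emission plus its reflectivity times the light gathered from every other patch, weighted by form factors) for a fixed number of bounces. The three-patch scene, the form-factor values, and the function name solve_radiosity are all made-up illustrative assumptions.

def solve_radiosity(emission, reflectivity, form_factors, bounces=50):
    n = len(emission)
    radiosity = list(emission)  # start from direct emission only
    for _ in range(bounces):
        new = []
        for i in range(n):
            # Light arriving at patch i from every other patch j, weighted
            # by the form factor F[i][j], then scaled by i's reflectivity.
            gathered = sum(form_factors[i][j] * radiosity[j] for j in range(n))
            new.append(emission[i] + reflectivity[i] * gathered)
        radiosity = new
    return radiosity

# Usage: one emitting patch (a light) and two passive patches (walls).
E = [1.0, 0.0, 0.0]
rho = [0.0, 0.7, 0.5]
F = [[0.0, 0.3, 0.3],
     [0.3, 0.0, 0.2],
     [0.3, 0.2, 0.0]]
print(solve_radiosity(E, rho, F))  # brightness of each patch after the bounces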
If objects in the scene are rearranged only slightly, the same radiosity data can be reused for a number of frames, making radiosity an effective way to improve on the flatness of ray casting without seriously affecting the overall render time per frame. Because of this, radiosity has become a core component of leading real-time rendering methods and has been used from beginning to end to create a number of well-known recent feature-length animated 3D films.
Ray tracing
Ray tracing is an extension of the same technique developed in scanline rendering and ray casting. Like them, it handles complex objects well, and the objects may be described mathematically. Unlike scanline rendering and ray casting, however, ray tracing is almost always a Monte Carlo technique, that is, one based on averaging a number of randomly generated samples from the model.
In this case, the samples are imaginary rays of light intersecting the viewpoint from the objects in the scene. This is primarily useful where complex and accurate rendering of shadows, refraction, or reflection is required.
In a final, production-quality ray-traced rendering, multiple rays are generally shot for each pixel and traced not just to the first object of intersection, but through a series of successive "bounces", using known laws of optics such as "the angle of incidence equals the angle of reflection" and more advanced laws dealing with refraction and surface roughness.
Once the ray either encounters a light source or, more probably, once a set bounce limit has been reached, the surface illumination at that final point is evaluated using the techniques described above, and the changes along the way through the various bounces are evaluated to estimate the value observed from the point of view. All of this is repeated for each sample, for each pixel.
In some cases, multiple rays can be created at each intersection point.
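The bounce loop described in the last few paragraphs can be sketched as a short recursive function. The version below makes strong simplifying assumptions: spheres act as partial mirrors, a flat sky color stands in for the light source, recursion stops at a fixed bounce limit, and a single ray is traced rather than many samples per pixel. The names Sphere, trace, and reflect are illustrative only.

import math

class Sphere:
    def __init__(self, center, radius, color, mirror):
        self.center, self.radius = center, radius
        self.color, self.mirror = color, mirror  # mirror strength in 0..1

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a): return scale(a, 1.0 / math.sqrt(dot(a, a)))

def hit(origin, direction, s):
    # Nearest positive intersection of a ray with a sphere (direction normalized).
    oc = sub(origin, s.center)
    b = 2.0 * dot(oc, direction)
    disc = b * b - 4.0 * (dot(oc, oc) - s.radius ** 2)
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None  # small offset avoids self-intersection

def reflect(d, n):
    # Angle of incidence equals angle of reflection: r = d - 2 (d . n) n
    return sub(d, scale(n, 2.0 * dot(d, n)))

def trace(origin, direction, spheres, depth, max_depth=4):
    if depth > max_depth:                 # bounce limit reached
        return (0.0, 0.0, 0.0)
    nearest = min(((hit(origin, direction, s), s) for s in spheres
                   if hit(origin, direction, s) is not None),
                  key=lambda pair: pair[0], default=None)
    if nearest is None:
        return (0.6, 0.7, 0.9)            # "light source": a plain sky color
    t, s = nearest
    point = add(origin, scale(direction, t))
    normal = norm(sub(point, s.center))
    bounced = trace(point, reflect(direction, normal), spheres, depth + 1, max_depth)
    # Mix the surface's own color with whatever the reflected ray saw.
    return tuple((1 - s.mirror) * c + s.mirror * b
                 for c, b in zip(s.color, bounced))

# Usage: one ray through the middle of the image, toward two mirrored spheres.
scene = [Sphere((0.0, 0.0, -4.0), 1.0, (0.8, 0.2, 0.2), 0.5),
         Sphere((1.5, 0.5, -3.0), 0.5, (0.2, 0.8, 0.2), 0.8)]
print(trace((0.0, 0.0, 0.0), norm((0.1, 0.0, -1.0)), scene, depth=0))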
As a brute-force technique, ray tracing has been too slow for real-time viewing and, until recently, too slow even for short films of any quality level, although it has been used for special effects sequences and in advertising, where a short piece of high-quality (perhaps even photorealistic) footage is required.
However, optimization efforts aimed at reducing the amount of computation needed for portions of a work where detail is low or does not depend on ray tracing have made much wider use of ray tracing realistic. There is now hardware-accelerated ray tracing equipment, at least at the prototype stage, as well as game demos that show the use of real-time software or hardware ray tracing.
Conclusion
With each passing day, rendering systems are used more and more in a wide variety of fields: films, cartoons, architecture, advertising, industry, automotive, and more. So wherever you see a static image or an animation, it may well be the result of a render.