Monday, March 19, 2018
Microsoft is Adding Support for Hardware Accelerated Raytracing to DirectX 12

At GDC 2018 on Monday, Microsoft introduced "DirectX Raytracing" (DXR), a new feature of Windows 10's DirectX 12 graphics API that brings ray tracing - the rendering technique that makes computer-generated imagery in movies appear lifelike - to real-time graphics.

Raytracing and rasterization

Ray tracing is the technique modern movies rely on to generate or enhance special effects. Think realistic reflections, refractions and shadows. The technique is what makes the fire, smoke and explosions of war films look real.

Ray tracing produces images that can be indistinguishable from those captured by a camera. Live-action movies blend computer-generated effects and images captured in the real world seamlessly, while animated feature films cloak digitally generated scenes in light and shadow as expressive as anything shot by a cameraman.

The easiest way to think of ray tracing is to look around you, right now. The objects you're seeing are illuminated by beams of light. Now turn that around and follow the path of those beams backwards from your eye to the objects that light interacts with. That's ray tracing.

Historically, though, computer hardware hasn't been fast enough to use these techniques in real time, such as in video games. Instead, real-time computer graphics have long used a technique called "rasterization" to display three-dimensional objects on a two-dimensional screen. It's fast. And, the results have gotten very good, even if it's still not always as good as what ray tracing can do.

With rasterization, objects on the screen are created from a mesh of virtual triangles, or polygons, that create 3D models of objects. In this virtual mesh, the corners of each triangle - known as vertices - intersect with the vertices of other triangles of different sizes and shapes. A lot of information is associated with each vertex, including its position in space, as well as information about color, texture and its "normal," which is used to determine the way the surface of an object is facing.

Computers then convert the triangles of the 3D models into pixels, or dots, on a 2D screen. Each pixel can be assigned an initial color value from the data stored in the triangle vertices.

Further pixel processing, or "shading" - changing a pixel's color based on how lights in the scene hit it, and applying one or more textures to it - then produces the final color written to the pixel.
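
To make the "shading" step concrete, here is a minimal HLSL-style pixel shader sketch; the texture, sampler and light values are illustrative assumptions rather than part of any particular engine:

    // Minimal pixel-shader sketch: sample a texture and apply simple diffuse (Lambert) lighting.
    Texture2D    gAlbedo  : register(t0);
    SamplerState gSampler : register(s0);

    cbuffer Lighting : register(b0)
    {
        float3 gLightDir;     // direction toward the light, normalized
        float3 gLightColor;   // light color/intensity
    };

    struct PixelInput
    {
        float4 position : SV_POSITION;  // screen position produced by the rasterizer
        float3 normal   : NORMAL;       // interpolated vertex normal
        float2 uv       : TEXCOORD0;    // interpolated texture coordinates
    };

    float4 main(PixelInput input) : SV_TARGET
    {
        float3 albedo = gAlbedo.Sample(gSampler, input.uv).rgb;            // texture lookup
        float  ndotl  = saturate(dot(normalize(input.normal), gLightDir)); // how directly the light hits
        return float4(albedo * gLightColor * ndotl, 1.0f);                 // final pixel color
    }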

This is computationally intensive. There can be millions of polygons across all the object models in a scene, a 4K display has roughly 8 million pixels, and each frame, or image, displayed on the screen is typically refreshed 30 to 90 times each second.

Additionally, memory buffers, a bit of temporary space set aside to speed things along, are used to render upcoming frames in advance before they're displayed on screen. A depth or "z-buffer" is also used to store pixel depth information to ensure front-most objects at a pixel's x-y screen location are displayed on-screen, and objects behind the front-most object remain hidden.
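
The depth test itself is simple; here is a hedged C++ sketch of the idea (real GPUs do this in fixed-function hardware, and the buffer layout here is an assumption for illustration):

    #include <vector>

    struct Color { float r, g, b; };

    // Z-buffer test for one fragment at pixel (x, y): keep it only if it is closer
    // to the camera than whatever was drawn there before.
    void WritePixel(int x, int y, float depth, Color color, int width,
                    std::vector<float>& zBuffer, std::vector<Color>& frameBuffer)
    {
        int index = y * width + x;
        if (depth < zBuffer[index])          // new fragment is in front of the stored one
        {
            zBuffer[index]     = depth;      // remember the closer depth
            frameBuffer[index] = color;      // the front-most object wins the pixel
        }
        // otherwise the fragment is hidden behind an earlier one and is discarded
    }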

Ray tracing is different. In the real world, the 3D objects we see are illuminated by light sources, and photons can bounce from one object to another before reaching the viewer's eyes.

Light may be blocked by some objects, creating shadows. Or light may reflect from one object to another, such as when we see the images of one object reflected in the surface of another. And then there are refractions - when light changes as it passes through transparent or semi-transparent objects, like glass or water.

Ray tracing captures those effects by working back from our eye (or view camera). It traces the path of a light ray through each pixel on a 2D viewing surface out into a 3D model of the scene.
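
In pseudocode terms, that backwards trace looks like the loop below. This is a deliberately simplified C++ sketch; the Ray, Hit, IntersectScene and Shade names are hypothetical placeholders standing in for real scene code:

    #include <vector>

    struct Vec3  { float x, y, z; };
    struct Ray   { Vec3 origin, direction; };
    struct Hit   { bool found; Vec3 point, normal; int materialId; };
    struct Color { float r, g, b; };

    Ray   CameraRayThroughPixel(int x, int y);  // build a ray from the eye through pixel (x, y)
    Hit   IntersectScene(const Ray& ray);       // find the nearest object the ray strikes
    Color Shade(const Hit& hit, int depth);     // compute lighting, possibly tracing more rays

    // One ray is followed backwards from the eye through every pixel of the image.
    void RenderFrame(int width, int height, std::vector<Color>& image)
    {
        for (int y = 0; y < height; ++y)
        {
            for (int x = 0; x < width; ++x)
            {
                Ray ray = CameraRayThroughPixel(x, y);
                Hit hit = IntersectScene(ray);
                image[y * width + x] = hit.found ? Shade(hit, 0)    // hit: light the surface
                                                 : Color{0, 0, 0};  // miss: background color
            }
        }
    }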

Running on powerful GPUs, ray tracing produces computer-generated images that capture shadows, reflections and refractions in ways that can be indistinguishable from photographs or video of the real world.

However, because ray tracing is so computationally intensive, it's often used for rendering those areas or objects in a scene that benefit the most in visual quality and realism from the technique, while the rest of the scene is rendered using rasterization. Rasterization can still deliver excellent graphics quality.

Microsoft's announcement

Microsoft today introduced a DirectX 12 feature that will bridge the gap between the rasterization techniques employed by games today and the full 3D effects of tomorrow. This feature is DirectX Raytracing. By allowing traversal of a full 3D representation of the game world, DirectX Raytracing lets current rendering techniques such as screen-space reflections (SSR) naturally fill the gaps left by rasterization, and opens the door to a new class of techniques that have never been achieved in a real-time game.

At the highest level, DirectX Raytracing (DXR) introduces four new concepts to the DirectX 12 API:

  • The acceleration structure is an object that represents a full 3D environment in a format optimal for traversal by the GPU. Represented as a two-level hierarchy, the structure affords both optimized ray traversal by the GPU, as well as efficient modification by the application for dynamic objects.
  • A new command list method, DispatchRays, which is the starting point for tracing rays into the scene. This is how the game actually submits DXR workloads to the GPU.
  • A set of new HLSL shader types including ray-generation, closest-hit, any-hit, and miss shaders. These specify what the DXR workload actually does computationally. When DispatchRays is called, the ray-generation shader runs. Using the new TraceRay intrinsic function in HLSL, the ray-generation shader causes rays to be traced into the scene. Depending on where a ray goes in the scene, one of several hit or miss shaders may be invoked at the point of intersection. This allows a game to assign each object its own set of shaders and textures, resulting in a unique material. (A short HLSL sketch of these shader types follows this list.)
  • The raytracing pipeline state, a companion in spirit to today's Graphics and Compute pipeline state objects, encapsulates the raytracing shaders and other state relevant to raytracing workloads.
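
As a hedged illustration of how these pieces fit together, here is a minimal HLSL sketch of a ray-generation, miss and closest-hit shader using the TraceRay intrinsic. The payload layout, register assignments and the simplistic camera are assumptions made only for this sketch, and the syntax shown is that of the DXR HLSL that eventually shipped rather than the experimental SDK described at GDC:

    // Minimal DXR shader sketch: one ray per pixel, written to a UAV texture.
    RaytracingAccelerationStructure SceneBVH : register(t0);  // top-level acceleration structure
    RWTexture2D<float4>             Output   : register(u0);  // image being written

    struct RayPayload
    {
        float3 color;
    };

    [shader("raygeneration")]
    void RayGen()
    {
        uint2 pixel = DispatchRaysIndex().xy;        // which pixel this invocation handles
        uint2 dims  = DispatchRaysDimensions().xy;

        // Build a camera ray through the pixel (orthographic purely to keep the sketch short).
        float2 uv = (pixel + 0.5f) / dims * 2.0f - 1.0f;
        RayDesc ray;
        ray.Origin    = float3(uv.x, -uv.y, -1.0f);
        ray.Direction = float3(0, 0, 1);
        ray.TMin      = 0.001f;
        ray.TMax      = 10000.0f;

        RayPayload payload = { float3(0, 0, 0) };
        TraceRay(SceneBVH, RAY_FLAG_NONE, 0xFF, 0, 1, 0, ray, payload);

        Output[pixel] = float4(payload.color, 1.0f);
    }

    [shader("miss")]
    void Miss(inout RayPayload payload)
    {
        payload.color = float3(0.2f, 0.4f, 0.8f);    // simple sky color when nothing is hit
    }

    [shader("closesthit")]
    void ClosestHit(inout RayPayload payload, in BuiltInTriangleIntersectionAttributes attribs)
    {
        // Barycentrics come from the fixed-function triangle intersection.
        payload.color = float3(1.0f - attribs.barycentrics.x - attribs.barycentrics.y,
                               attribs.barycentrics.x, attribs.barycentrics.y);
    }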

DXR does not introduce a new GPU engine to go alongside DX12's existing Graphics and Compute engines. This is intentional: DXR workloads can be run on either of DX12's existing engines. The primary reason is that, fundamentally, DXR is a compute-like workload. It does not require complex state such as output merger blend modes or input assembler vertex layouts. A secondary reason is that representing DXR as a compute-like workload is aligned with what Microsoft sees as the future of graphics, namely that hardware will become increasingly general-purpose, and eventually most fixed-function units will be replaced by HLSL code.

The design of the raytracing pipeline state exemplifies this shift. The traditional approach would have been to add a new CreateRaytracingPipelineState method to DX12. Instead, Microsoft decided to go with a much more generic and flexible CreateStateObject method. It is designed to be adaptable so that in addition to raytracing, it can eventually be used to create Graphics and Compute pipeline states, as well as any future pipeline designs.
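
A hedged host-side sketch of what driving CreateStateObject looks like with the DXR headers that eventually shipped; the shader export names are assumptions, and the global root signature and error handling are omitted for brevity:

    #include <d3d12.h>
    #include <vector>

    // Hedged sketch: build a raytracing pipeline state object from one DXIL library
    // and one hit group. 'dxilBlob'/'dxilSize' point at compiled HLSL containing the
    // RayGen, Miss and ClosestHit entry points from the earlier shader sketch.
    ID3D12StateObject* CreateRaytracingPipeline(ID3D12Device5* device,
                                                const void* dxilBlob, size_t dxilSize)
    {
        std::vector<D3D12_STATE_SUBOBJECT> subobjects;

        // 1. The DXIL library that contains all raytracing shader entry points.
        D3D12_DXIL_LIBRARY_DESC lib = {};
        lib.DXILLibrary.pShaderBytecode = dxilBlob;
        lib.DXILLibrary.BytecodeLength  = dxilSize;
        subobjects.push_back({ D3D12_STATE_SUBOBJECT_TYPE_DXIL_LIBRARY, &lib });

        // 2. A hit group bundling the closest-hit shader for one material (triangles by default).
        D3D12_HIT_GROUP_DESC hitGroup = {};
        hitGroup.HitGroupExport         = L"HitGroup";
        hitGroup.ClosestHitShaderImport = L"ClosestHit";
        subobjects.push_back({ D3D12_STATE_SUBOBJECT_TYPE_HIT_GROUP, &hitGroup });

        // 3. Payload/attribute sizes and the maximum recursion depth for TraceRay.
        D3D12_RAYTRACING_SHADER_CONFIG shaderConfig = {};
        shaderConfig.MaxPayloadSizeInBytes   = 12;  // float3 color payload
        shaderConfig.MaxAttributeSizeInBytes = 8;   // built-in triangle barycentrics
        subobjects.push_back({ D3D12_STATE_SUBOBJECT_TYPE_RAYTRACING_SHADER_CONFIG, &shaderConfig });

        D3D12_RAYTRACING_PIPELINE_CONFIG pipelineConfig = {};
        pipelineConfig.MaxTraceRecursionDepth = 1;  // primary rays only in this sketch
        subobjects.push_back({ D3D12_STATE_SUBOBJECT_TYPE_RAYTRACING_PIPELINE_CONFIG, &pipelineConfig });

        // (A global root signature subobject and shader associations are omitted for brevity.)
        // One generic call creates the whole raytracing pipeline state.
        D3D12_STATE_OBJECT_DESC desc = {};
        desc.Type          = D3D12_STATE_OBJECT_TYPE_RAYTRACING_PIPELINE;
        desc.NumSubobjects = static_cast<UINT>(subobjects.size());
        desc.pSubobjects   = subobjects.data();

        ID3D12StateObject* pso = nullptr;
        device->CreateStateObject(&desc, __uuidof(ID3D12StateObject),
                                  reinterpret_cast<void**>(&pso));
        return pso;
    }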

The first step in rendering any content using DXR is to build the acceleration structures, which operate in a two-level hierarchy. At the bottom level of the structure, the application specifies a set of geometries, essentially vertex and index buffers representing distinct objects in the world. At the top level of the structure, the application specifies a list of instance descriptions containing references to a particular geometry, and some additional per-instance data such as transformation matrices, that can be updated from frame to frame in ways similar to how games perform dynamic object updates today. Together, these allow for efficient traversal of multiple complex geometries.
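
A compressed C++ sketch of that two-level build using the shipping DXR structure names; buffer allocation, prebuild-info queries and resource barriers are omitted, and the single-mesh, single-instance setup is an assumption for illustration:

    #include <d3d12.h>

    // Hedged sketch: record a bottom-level build for one triangle mesh and a top-level
    // build with a single instance of it. All GPU buffers passed in are assumed to be
    // allocated already, and the required UAV barrier between the builds is omitted.
    void BuildAccelerationStructures(ID3D12GraphicsCommandList4* cmdList,
                                     D3D12_GPU_VIRTUAL_ADDRESS vertexBuffer, UINT vertexCount,
                                     D3D12_GPU_VIRTUAL_ADDRESS scratch,
                                     D3D12_GPU_VIRTUAL_ADDRESS blasResult,
                                     D3D12_GPU_VIRTUAL_ADDRESS tlasResult,
                                     D3D12_GPU_VIRTUAL_ADDRESS instanceDescs)
    {
        // Bottom level: the actual geometry (here one non-indexed triangle mesh).
        D3D12_RAYTRACING_GEOMETRY_DESC geometry = {};
        geometry.Type  = D3D12_RAYTRACING_GEOMETRY_TYPE_TRIANGLES;
        geometry.Flags = D3D12_RAYTRACING_GEOMETRY_FLAG_OPAQUE;
        geometry.Triangles.VertexBuffer.StartAddress  = vertexBuffer;
        geometry.Triangles.VertexBuffer.StrideInBytes = 3 * sizeof(float);
        geometry.Triangles.VertexFormat = DXGI_FORMAT_R32G32B32_FLOAT;
        geometry.Triangles.VertexCount  = vertexCount;

        D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_DESC blasBuild = {};
        blasBuild.Inputs.Type           = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_TYPE_BOTTOM_LEVEL;
        blasBuild.Inputs.DescsLayout    = D3D12_ELEMENTS_LAYOUT_ARRAY;
        blasBuild.Inputs.NumDescs       = 1;
        blasBuild.Inputs.pGeometryDescs = &geometry;
        blasBuild.ScratchAccelerationStructureData = scratch;
        blasBuild.DestAccelerationStructureData    = blasResult;
        cmdList->BuildRaytracingAccelerationStructure(&blasBuild, 0, nullptr);

        // Top level: one instance referencing the bottom-level structure, carrying its own
        // transform. (The D3D12_RAYTRACING_INSTANCE_DESC array must already be in 'instanceDescs'.)
        D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_DESC tlasBuild = {};
        tlasBuild.Inputs.Type          = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_TYPE_TOP_LEVEL;
        tlasBuild.Inputs.DescsLayout   = D3D12_ELEMENTS_LAYOUT_ARRAY;
        tlasBuild.Inputs.NumDescs      = 1;
        tlasBuild.Inputs.InstanceDescs = instanceDescs;
        tlasBuild.ScratchAccelerationStructureData = scratch;
        tlasBuild.DestAccelerationStructureData    = tlasResult;
        cmdList->BuildRaytracingAccelerationStructure(&tlasBuild, 0, nullptr);
    }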

The second step in using DXR is to create the raytracing pipeline state. Today, most games batch their draw calls together for efficiency, for example rendering all metallic objects first, and all plastic objects second. But because it's impossible to predict exactly what material a particular ray will hit, batching like this isn't possible with raytracing. Instead, the raytracing pipeline state allows specification of multiple sets of raytracing shaders and texture resources. Ultimately, this allows an application to specify, for example, that any ray intersections with object A should use shader P and texture X, while intersections with object B should use shader Q and texture Y. This allows applications to have ray intersections run the correct shader code with the correct textures for the materials they hit.
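
In the shipping API, this per-object binding is expressed through the shader table: one record per object, each holding a hit-group shader identifier followed by that object's resources. A hedged C++ sketch of filling such a table (the hit-group names and descriptor handles are assumptions):

    #include <d3d12.h>
    #include <cstdint>
    #include <cstring>

    // Hedged sketch: write two hit-group records into a CPU-mapped upload buffer so that
    // rays hitting object A run one hit group and rays hitting object B run another.
    // 'props' is the ID3D12StateObjectProperties interface queried from the state object.
    void FillHitGroupTable(ID3D12StateObjectProperties* props, uint8_t* mappedTable,
                           D3D12_GPU_DESCRIPTOR_HANDLE texturesA,
                           D3D12_GPU_DESCRIPTOR_HANDLE texturesB)
    {
        const UINT recordSize = D3D12_RAYTRACING_SHADER_TABLE_RECORD_BYTE_ALIGNMENT * 2; // 64 bytes here

        // Record 0: object A -> hit group "HitGroupA" plus its texture descriptor table.
        void* idA = props->GetShaderIdentifier(L"HitGroupA");
        std::memcpy(mappedTable, idA, D3D12_SHADER_IDENTIFIER_SIZE_IN_BYTES);
        std::memcpy(mappedTable + D3D12_SHADER_IDENTIFIER_SIZE_IN_BYTES, &texturesA, sizeof(texturesA));

        // Record 1: object B -> hit group "HitGroupB" plus its texture descriptor table.
        void* idB = props->GetShaderIdentifier(L"HitGroupB");
        std::memcpy(mappedTable + recordSize, idB, D3D12_SHADER_IDENTIFIER_SIZE_IN_BYTES);
        std::memcpy(mappedTable + recordSize + D3D12_SHADER_IDENTIFIER_SIZE_IN_BYTES,
                    &texturesB, sizeof(texturesB));
    }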

The third and final step in using DXR is to call DispatchRays, which invokes the ray generation shader. Within this shader, the application makes calls to the TraceRay intrinsic, which triggers traversal of the acceleration structure, and eventual execution of the appropriate hit or miss shader. In addition, TraceRay can also be called from within hit and miss shaders, allowing for ray recursion or "multi-bounce" effects.
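
A hedged host-side sketch of that final step, using the D3D12_DISPATCH_RAYS_DESC layout from the shipping API; the shader-table addresses, sizes and record stride are assumed to come from buffers like the ones sketched above:

    #include <d3d12.h>

    // Hedged sketch: point DispatchRays at the ray-generation, miss and hit-group shader
    // tables and launch one ray-generation invocation per pixel of a width x height image.
    void Raytrace(ID3D12GraphicsCommandList4* cmdList, ID3D12StateObject* pipeline,
                  D3D12_GPU_VIRTUAL_ADDRESS rayGenTable, UINT64 rayGenSize,
                  D3D12_GPU_VIRTUAL_ADDRESS missTable,   UINT64 missSize,
                  D3D12_GPU_VIRTUAL_ADDRESS hitTable,    UINT64 hitSize,
                  UINT width, UINT height)
    {
        D3D12_DISPATCH_RAYS_DESC desc = {};
        desc.RayGenerationShaderRecord.StartAddress = rayGenTable;
        desc.RayGenerationShaderRecord.SizeInBytes  = rayGenSize;
        desc.MissShaderTable.StartAddress           = missTable;
        desc.MissShaderTable.SizeInBytes            = missSize;
        desc.MissShaderTable.StrideInBytes          = missSize;   // single miss record
        desc.HitGroupTable.StartAddress             = hitTable;
        desc.HitGroupTable.SizeInBytes              = hitSize;
        desc.HitGroupTable.StrideInBytes            = D3D12_RAYTRACING_SHADER_TABLE_RECORD_BYTE_ALIGNMENT * 2;
        desc.Width  = width;     // one ray-generation invocation per pixel
        desc.Height = height;
        desc.Depth  = 1;

        cmdList->SetPipelineState1(pipeline);   // bind the raytracing pipeline state object
        cmdList->DispatchRays(&desc);           // invokes the ray-generation shader
    }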

Microsoft is also updating its PIX debugging tool for the new API. PIX on Windows supports capturing and analyzing frames built using DXR to help developers understand how DXR interacts with the hardware. Developers can inspect API calls, view pipeline resources that contribute to the raytracing work, see the contents of state objects, and visualize acceleration structures. This gives developers the information they need to build great experiences using DXR.

What Does This Mean for Games?

DXR will initially be used to supplement current rendering techniques such as screen space reflections, for example, to fill in data from geometry that's either occluded or off-screen. This will lead to a material increase in visual quality for these effects in the near future. Over the next several years, however, Microsoft expects an increase in utilization of DXR for techniques that are simply impractical for rasterization, such as true global illumination. Eventually, raytracing may completely replace rasterization as the standard algorithm for rendering 3D scenes. That said, until everyone has a light-field display on their desk, rasterization will continue to be an excellent match for the common case of rendering content to a flat grid of square pixels, supplemented by raytracing for true 3D effects.

Thanks to SEED, Electronic Arts' research division, we can already get a glimpse of what future gaming scenes rendered this way could look like.

In addition, Microsoft has been working with hardware vendors and industry developers for nearly a year to design and tune the API. In fact, a significant number of studios and engine developers are already planning to integrate DXR support into their titles, including:

  • Electronic Arts, Frostbite
  • Electronic Arts, SEED
  • Epic Games, Unreal Engine
  • Futuremark, 3DMark
  • Unity Technologies, Unity Engine

What Hardware Will DXR Run On?

Developers can use currently in-market hardware to get started with DirectX Raytracing. There is also a fallback layer that lets developers start experimenting with DirectX Raytracing without any specific hardware support.

Nvidia and AMD support

To coincide with Microsoft's announcement, Nvidia announced "RTX technology" for enhanced DXR support on its Volta-architecture graphics cards, as well as new ray tracing tools for its GameWorks library that can help developers deploy the technology faster.

"Real-time ray tracing has been a dream of the graphics industry and game developers for decades, and NVIDIA RTX is bringing it to life," said Tony Tamasi, senior vice president of content and technology at NVIDIA. "GPUs are only now becoming powerful enough to deliver real-time ray tracing for gaming applications, and will usher in a new era of next-generation visuals."

To allow game developers to take advantage of these new capabilities, NVIDIA also announced the NVIDIA GameWorks SDK will add a ray-tracing denoiser module. This suite of tools and resources for developers will increase realism and shorten product cycles in titles developed using the new Microsoft DXR API and NVIDIA RTX.

The upcoming GameWorks SDK - which will support Volta and future generation GPU architectures - enables ray-traced area shadows, ray-traced glossy reflections and ray-traced ambient occlusion.

Likewise, AMD said it's "collaborating with Microsoft to help define, refine and support the future of DirectX 12 and ray tracing."

 