Technology

Intel Uses Machine Learning AI to Make GTA V Look Insanely Realistic

Intel researchers have used machine learning AI to make GTA V look insanely realistic. “The defining goal of computer graphics for half a century has been photorealism,” the team wrote in their paper.

“Yet, even the most sophisticated real-time games will quickly reveal that photorealism has not been achieved.” Building on previous machine-learning efforts, they set out to create a system that could achieve this kind of photorealism. During the rendering process, game engines produce an intermediate buffer known as a “G-buffer”, which contains detailed information about what will end up on the screen, such as geometry, lighting, and textures.
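As a rough, purely illustrative sketch (the buffer layout and channel names here are assumptions, not Intel's actual format), a G-buffer can be thought of as a stack of per-pixel maps that sit alongside the rendered frame:

```python
import numpy as np

# A minimal sketch of what a G-buffer might contain for a 1080p frame.
# The channel names are illustrative assumptions, not Intel's actual format.
H, W = 1080, 1920

g_buffer = {
    "albedo":    np.zeros((H, W, 3), dtype=np.float32),  # base surface colour before lighting
    "normals":   np.zeros((H, W, 3), dtype=np.float32),  # surface orientation per pixel
    "depth":     np.zeros((H, W, 1), dtype=np.float32),  # distance from the camera
    "roughness": np.zeros((H, W, 1), dtype=np.float32),  # how shiny or matte the surface is
    "semantics": np.zeros((H, W, 1), dtype=np.int32),    # object class id (road, car, sky, ...)
}

# Stacking the maps into one tensor gives the per-pixel scene description
# that an enhancement network could consume alongside the rendered image.
g_buffer_tensor = np.concatenate(
    [v.astype(np.float32) for v in g_buffer.values()], axis=-1
)
print(g_buffer_tensor.shape)  # (1080, 1920, 9)
```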

The main innovation of this approach compared to others is that the team feeds this intermediate data into their neural networks, alongside the rendered frames, to improve the output images. The team also trained their networks on Cityscapes, a dataset of real-world street imagery captured by cameras mounted on cars driving through German cities (similar in spirit to Google Maps imagery). Previous attempts to enhance or synthesize game graphics in this way have relied on having the computer label the objects in the scene.
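To make the idea concrete, here is a minimal PyTorch sketch of an enhancement network that takes both the rendered frame and the stacked G-buffer channels as input and predicts a corrected image. The architecture, layer sizes, and residual design are illustrative assumptions, not Intel's actual model:

```python
import torch
import torch.nn as nn

class EnhancementNet(nn.Module):
    """Toy image-to-image network: rendered frame + G-buffer in, enhanced frame out.

    Purely illustrative; Intel's actual model is far more elaborate.
    """
    def __init__(self, gbuffer_channels: int = 9):
        super().__init__()
        in_channels = 3 + gbuffer_channels  # RGB frame plus auxiliary G-buffer maps
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
        )

    def forward(self, frame: torch.Tensor, gbuffer: torch.Tensor) -> torch.Tensor:
        # Predict a residual and add it to the rendered frame, so the network
        # only has to learn the correction, not the whole image.
        x = torch.cat([frame, gbuffer], dim=1)
        return frame + self.body(x)

# Example forward pass on a downscaled 270x480 frame.
frame = torch.rand(1, 3, 270, 480)
gbuf = torch.rand(1, 9, 270, 480)
enhanced = EnhancementNet()(frame, gbuf)
print(enhanced.shape)  # torch.Size([1, 3, 270, 480])
```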

As a result, those methods are often unable to render at normal speed, or they introduce unwanted artifacts, such as the trees appearing in the skyline in the video below. “Instead of trying to synthesize images, our approach improves already rendered images, integrating intermediate visual information to create geometrically and semantically consistent output, and it does not require any annotations of the real data,” the team explains in their paper. The results of their work are quite astonishing, producing footage that comes very close to photorealism, or at least close enough that you might feel uncomfortable going on a standard GTA V rampage.

The shading is better, the hills are greener, and the roads look much smoother. Sometimes, though not always, it genuinely feels like you are wandering around a real city. The team believes that by integrating the system more deeply into game engines, they can improve it further by “increasing efficiency and possibly advancing to a more realistic level.” More striking images produced by their system can be found on their GitHub page.
