AR in Search

Scaling Immersive Experiences

AR in Search brings useful, immersive content experiences to Google Search through AR. By combining on-device and cloud-rendered solutions, we created an experience for users that wasn’t limited by smartphone hardware.

I was the Technical Art Director/Lead for content and platform development in the automotive category.

I led the improvement of content creation pipelines and real-time rendering fidelity for different render targets. I created new workflows that improved automation and allowed the small team to scale content creation using Google Cloud Platform.

Developing content pipelines for an in-house real-time render platform and Unreal Engine at the same time, each with different content requirements, was challenging. I worked with OEMs and vendors to build out over 600 unique assets for on-device and cloud-rendered experiences, collaborating regularly with these external teams to align pipeline workflows and content standards.


Cloud Rendered AR

For this project, we have two render targets for AR:

  1. On-device rendering, which sources a locally downloaded glb or usd file of roughly 10 MB.

  2. Cloud-rendered Unreal Engine scenes that use high-polygon meshes with more detail available, such as full interiors.

The cloud-rendered path uses an Unreal scene predefined to load one of our full 3D levels, then streams in assets after a client connects. We serve assets as pak files from Google Cloud Storage to prewarmed game servers deployed across Kubernetes pods, which lets us scale and localize clusters depending on user location and demand.
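As a rough illustration of the asset-serving side, a prewarmed game server could fetch a vehicle’s pak file from a regional Cloud Storage bucket on demand; the bucket layout, object names, and signing duration below are hypothetical, not the production setup.

```python
# Hypothetical sketch: hand a game server a short-lived URL for a vehicle pak
# file stored in Google Cloud Storage. Bucket and object names are placeholders.
from datetime import timedelta
from google.cloud import storage


def signed_pak_url(vehicle_id: str, region: str) -> str:
    """Return a time-limited download URL for a vehicle's streamed pak file."""
    client = storage.Client()
    bucket = client.bucket(f"ar-search-paks-{region}")            # regional bucket (placeholder)
    blob = bucket.blob(f"vehicles/{vehicle_id}/{vehicle_id}.pak")
    # A signed URL keeps the bucket private while letting a prewarmed
    # Unreal game server stream the asset after a client connects.
    return blob.generate_signed_url(expiration=timedelta(minutes=15))


if __name__ == "__main__":
    print(signed_pak_url("sedan_2021", "us-central1"))
```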

I worked extensively on the Unreal Engine workflows to create the deployment plan for serving and streaming assets on Google Cloud Platform rather than shipping large monolithic levels. In addition, I managed the creation, runtime optimization, and testing of our in-context 3D environments and high-resolution car assets.

The cloud-rendered solution is currently available on Android, with iOS support planned.

UX flow from search query to 3D mode to AR

 

3D environments

Here’s a sample of the environments in action on-device.

 
 

The card at the bottom was a work in progress, hence the incorrect model name.


On-Device Rendering

On-device rendering was the initial deployment of the project and is how all other AR content from Google Search is currently rendered. We run a network check to see whether the current connection speed can support cloud rendering; if not, we fall back to the on-device render path. For the 300+ on-device assets, we served both glTF and USD formats to Android and iOS clients.
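A simplified sketch of that fallback decision is below; the 10 Mbps threshold and the function names are illustrative, not the production logic.

```python
# Illustrative fallback logic: pick cloud rendering only when measured
# bandwidth clears a threshold; otherwise use the local glTF/USD asset.
# The threshold and this helper are hypothetical, for illustration only.

CLOUD_RENDER_MIN_MBPS = 10.0


def choose_render_path(measured_mbps: float, cloud_supported: bool) -> str:
    if cloud_supported and measured_mbps >= CLOUD_RENDER_MIN_MBPS:
        return "cloud"       # stream frames from the Unreal game server
    return "on-device"       # download the ~10 MB glb/usd and render locally


print(choose_render_path(measured_mbps=4.2, cloud_supported=True))   # on-device
print(choose_render_path(measured_mbps=35.0, cloud_supported=True))  # cloud
```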

The 3D models for this rendering path are lower resolution and smaller in file size to accommodate download latency. This means we are not able to serve vehicles with full interiors, given our file size and rendering constraints.

I managed the on-device content development and the pipeline for deploying assets for consumption via Google Search. I also added features to improve the shading and rendering quality of our assets despite mobile rendering constraints.

 

Clear coat

One of the first major improvements I pushed for was adding clear coat to improve the accuracy of PBR materials in our rendering system. I prototyped the feature in glTF using the KHR_materials_clearcoat extension and rendered it in Filament, the base render stack our AR rendering uses on Android. Prototyping in Filament helped engineering translate the feature to both rendering platforms.
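The clear coat prototype amounts to a small addition on the glTF material; a minimal sketch is below, with placeholder factor and roughness values rather than our shipped car-paint settings.

```python
# Minimal glTF material using KHR_materials_clearcoat.
# Factor/roughness values are placeholders, not production car-paint values.
car_paint = {
    "name": "car_paint",
    "pbrMetallicRoughness": {
        "baseColorFactor": [0.55, 0.05, 0.05, 1.0],
        "metallicFactor": 0.9,
        "roughnessFactor": 0.4,
    },
    "extensions": {
        "KHR_materials_clearcoat": {
            "clearcoatFactor": 1.0,            # full clear coat layer
            "clearcoatRoughnessFactor": 0.03,  # near-mirror lacquer
        }
    },
}

gltf = {
    "asset": {"version": "2.0"},
    "materials": [car_paint],
    "extensionsUsed": ["KHR_materials_clearcoat"],
}
```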

Clear coat was also available when building a compatible UsdPreviewSurface for the USD assets served to iOS clients. Parity across on-device rendering stacks was key to keeping the experience consistent regardless of which platform a user was on for AR.
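On the USD side, the same layer maps onto UsdPreviewSurface’s clearcoat inputs. Below is a minimal authoring sketch using the USD Python API, again with placeholder values rather than our shipped settings.

```python
# Sketch of authoring clear coat on a UsdPreviewSurface for the iOS/USD path.
# Requires the pxr (USD) Python bindings; values are illustrative only.
from pxr import Sdf, Usd, UsdShade

stage = Usd.Stage.CreateNew("car_paint.usda")
material = UsdShade.Material.Define(stage, "/CarPaint")
shader = UsdShade.Shader.Define(stage, "/CarPaint/PreviewSurface")
shader.CreateIdAttr("UsdPreviewSurface")

shader.CreateInput("diffuseColor", Sdf.ValueTypeNames.Color3f).Set((0.55, 0.05, 0.05))
shader.CreateInput("metallic", Sdf.ValueTypeNames.Float).Set(0.9)
shader.CreateInput("roughness", Sdf.ValueTypeNames.Float).Set(0.4)
shader.CreateInput("clearcoat", Sdf.ValueTypeNames.Float).Set(1.0)
shader.CreateInput("clearcoatRoughness", Sdf.ValueTypeNames.Float).Set(0.03)

material.CreateSurfaceOutput().ConnectToSource(shader.ConnectableAPI(), "surface")
stage.Save()
```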

 
 

No Clear Coat

Clear Coat Enabled

 

Flakes

Improving the visual quality for on-device rendered models was a big priority for the project.

The render system had a single-lobe BRDF shading model, but we needed to support additional specular lobes for metallic flakes and a clear coat layer.

I prototyped glTF models with a flake-specific normal map as a detail normal and rendered the results in Filament to show the difference it makes in the specular reflections on metallic paint. This helped engineers implement the feature in the render system we used for AR.
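A rough illustration of what the flake detail normal does: scatter sparse, randomly tilted normals across an otherwise flat tiling texture, which breaks the single specular lobe into glints. The sketch below is a generic recipe with illustrative resolution, density, and tilt values, not the production texture bake.

```python
# Generate a tiling "flake" detail normal map: mostly flat normals with
# sparse, randomly tilted texels that produce sparkly specular glints.
# Resolution, flake density, and tilt strength are illustrative values.
import numpy as np
from PIL import Image

SIZE, FLAKE_DENSITY, TILT = 512, 0.15, 0.35

rng = np.random.default_rng(0)
normals = np.zeros((SIZE, SIZE, 3), dtype=np.float32)
normals[..., 2] = 1.0                                    # default: flat, straight up

flake_mask = rng.random((SIZE, SIZE)) < FLAKE_DENSITY    # sparse flake texels
tilt = rng.uniform(-TILT, TILT, size=(SIZE, SIZE, 2))
normals[..., :2] = np.where(flake_mask[..., None], tilt, 0.0)
normals /= np.linalg.norm(normals, axis=-1, keepdims=True)

# Encode [-1, 1] tangent-space normals into an 8-bit normal map texture.
encoded = ((normals * 0.5 + 0.5) * 255).astype(np.uint8)
Image.fromarray(encoded, mode="RGB").save("flake_detail_normal.png")
```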

 
 

No Flakes

Flakes Enabled

 

Lighting

We were able to add image-based lighting (IBL) to 3D mode and AR blending to improve the contextual lighting of our models in on-device rendering. This helped the user flow from 3D mode to AR and improved the quality of the light estimation used from the initial spawn of the 3D object, reducing unexpected placement behaviors in AR.

Improved soft shadow quality and ambient occlusion rendering also helped ground our objects in 3D and AR modes.
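As a simplified illustration of the kind of estimate an IBL provides for grounding (not the production light estimator), the sketch below averages an equirectangular environment image for an ambient term and uses its brightest texel as a key-light direction for the grounding shadow.

```python
# Toy light estimation from an equirectangular environment image:
# average color -> ambient term, brightest texel -> key-light direction.
# A simplified illustration only, not the production estimator.
import numpy as np


def estimate_lighting(equirect: np.ndarray):
    """equirect: float image of shape (H, W, 3), rows = latitude, cols = longitude."""
    h, w, _ = equirect.shape
    ambient = equirect.mean(axis=(0, 1))                 # average environment color

    luminance = equirect @ np.array([0.2126, 0.7152, 0.0722])
    row, col = np.unravel_index(np.argmax(luminance), luminance.shape)

    # Convert the brightest texel's lat/long position to a world-space direction.
    theta = np.pi * (row + 0.5) / h       # polar angle from +Y
    phi = 2.0 * np.pi * (col + 0.5) / w   # azimuth
    direction = np.array([np.sin(theta) * np.cos(phi),
                          np.cos(theta),
                          np.sin(theta) * np.sin(phi)])
    return ambient, direction


env = np.random.rand(64, 128, 3).astype(np.float32)
print(estimate_lighting(env))
```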

 
 

Prototype of IBL rendered in Filament

 

Technical Direction
Design
Development
Prototyping
Cloud Deployments
Performance Optimization
Render Development
Shaders
