AppliedVR

A new way of visualizing construction

As a founding member of the AppliedVR team at Google, I worked toward our goal of using VR to improve the building design and construction processes of data centers and workspaces.

I led art and technical art direction for the team and built pipelines with engineering that folded CG-based workflows into automated cloud processes. With parallel computing in the cloud, we reduced the time and rework of creating real-time-ready assets from weeks to hours, at the click of a button. The scope of work ranged from fully automated to bespoke custom VR scenes, each with different workstreams and requirements. UX and interactions were built as common components, but new interactions were often made to best meet users' needs.

Scale and flexibility were key to building applicable VR simulations for our users. The following sections outline different product offerings and pipeline processes that our team built.

Art Production Pipeline

 

CAD-sourced models from various tools.

Work with designers and content experts to export the correct data for our content creation tools:

  • USD

  • FBX

  • Maya

  • PiXYZ

  • Substance Tools

  • ZBrush

  • Adobe suite

Optimization through automation.

Create meshes at the level of detail the project requires. This can run locally or in the cloud (see the sketch after this list):

  • tessellation

  • removing holes

  • deleting hidden geometry

  • defeaturing

  • decimating

  • etc.
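For the Maya portion of this step, the batch job was essentially a headless mayapy script. Below is a minimal, illustrative sketch, not our production code: the paths, reduction target, and cleanup choices are assumptions, and tessellation/defeaturing happened upstream in PiXYZ.

```python
# batch_optimize.py -- run with mayapy, locally or inside a cloud container.
import maya.standalone
maya.standalone.initialize(name='python')

import maya.cmds as cmds

def optimize(src_path, dst_path, keep_percent=25):
    """Import a tessellated CAD export, clean it up, decimate, and save."""
    cmds.file(new=True, force=True)
    cmds.file(src_path, i=True)
    for mesh in cmds.ls(type='mesh', long=True):
        if not cmds.objExists(mesh):
            continue  # already removed along with a deleted parent
        xform = cmds.listRelatives(mesh, parent=True, fullPath=True)[0]
        # delete hidden geometry the CAD export carries but never displays
        if not cmds.getAttr(xform + '.visibility'):
            cmds.delete(xform)
            continue
        cmds.polyCloseBorder(xform)  # patch open holes
        cmds.polyReduce(xform, ver=1, percentage=100 - keep_percent)  # decimate
    cmds.delete(cmds.ls(type='mesh', long=True), constructionHistory=True)
    cmds.file(rename=dst_path)
    cmds.file(save=True, type='mayaBinary')

optimize('/data/in/rack_a.obj', '/data/out/rack_a.mb')  # placeholder paths
```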

Fix outliers.

Sometimes parts of the model need to be fixed by hand because the source data was insufficient. This is usually done in:

  • Maya

  • PiXYZ

  • ZBrush

  • Unreal Datasmith

 

Assemble.

Combine all the details, from models to materials, in Maya.

Materials are usually created in Substance Designer and Painter using a non-destructive workflow.

Create different LODs as needed.
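A hedged sketch of that LOD step in Maya; the target percentages are example values, not our production settings.

```python
import maya.cmds as cmds

def build_lods(src, targets=(100, 50, 25, 10)):
    """Duplicate src once per LOD and decimate each copy to a triangle target."""
    lods = []
    for i, pct in enumerate(targets):
        lod = cmds.duplicate(src, name='{}_LOD{}'.format(src, i))[0]
        if pct < 100:
            # polyReduce's percentage is the amount removed, so invert the target
            cmds.polyReduce(lod, ver=1, percentage=100 - pct)
        cmds.delete(lod, constructionHistory=True)
        lods.append(lod)
    return lods
```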

Deploy To Render Engine.

Roughly 80% of the work is done in the Maya scene, and materials are reconfigured for each engine.

Lighting, tools, and game interactions are added, then we build for the target platform.

 
 

Light Field - High Quality

Created in Maya + V-Ray.

We created light fields to show off the highest quality that could be achieved in our product offerings.

This approach allows users to see full global illumination from any offline render setup.

I built this environment to be both offline- and real-time-ready, end to end, using standard production tools:

Maya, ZBrush, the Substance suite, and V-Ray.

We used Zync to render all the different camera positions per headbox area in the cloud. This saved us days by giving us a distributed, scalable CPU-based render farm.

Light Field Technology

Seurat was used to create our light fields. The process uses images and depth data to generate proxy meshes/cards from a 1m³ headbox area, then bakes the images as textures onto the proxy faces, giving 6DoF parallax within the headbox. This allows the light fields to be viewed on multiple headset platforms, even mobile devices.

Visual fidelity is great, but movement is limited to the headbox area.

 

A single headbox is generated from six axis-aligned views that together form a cubemap.
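A rough sketch of how those render jobs can be enumerated; each yielded job became one cloud render task. The sample count is illustrative, and this is plain bookkeeping code, not the actual Zync or Seurat API.

```python
import itertools

# cubemap face -> (yaw, pitch) in degrees: six axis-aligned 90-degree views
FACES = {
    'posx': (90, 0),  'negx': (-90, 0),
    'posy': (0, 90),  'negy': (0, -90),
    'posz': (0, 0),   'negz': (180, 0),
}

def headbox_jobs(center, size=1.0, n=4):
    """Yield (position, face, yaw, pitch) for every render in one 1m^3 headbox."""
    axis = [-size / 2.0 + size * i / (n - 1) for i in range(n)]
    for dx, dy, dz in itertools.product(axis, repeat=3):
        pos = (center[0] + dx, center[1] + dy, center[2] + dz)
        for face, (yaw, pitch) in FACES.items():
            yield pos, face, yaw, pitch

# 4x4x4 positions x 6 faces = 384 renders per headbox, all independent,
# so they distribute cleanly across a cloud render farm.
jobs = list(headbox_jobs(center=(0.0, 1.6, 0.0)))
```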


Baked - Mid to High Quality

Created in Unreal Engine.

At this quality everything is pre-calculated and dependent on the game engine’s bake system.

Baking kept the frame rate consistent and brought quality closer to the offline-rendered images designers were already using on these projects.

Substance materials were used to replace incoming CAD-based materials and allowed the team to create uniform UVs for assets. This was also helpful for creating lightmap UVs for the baking process.
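A minimal Maya sketch of that lightmap-UV step; the UV set name and shell spacing are illustrative.

```python
import maya.cmds as cmds

for xform in cmds.ls(type='transform'):
    if not cmds.listRelatives(xform, shapes=True, type='mesh'):
        continue
    # auto-project into a fresh 'lightmap' UV set, leaving map1 for materials
    cmds.polyAutoProjection(xform + '.f[*]', createNewMap=True,
                            uvSetName='lightmap', percentageSpace=0.3)
    cmds.delete(xform, constructionHistory=True)
```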


Dynamic - Mid to High Quality

Created in Unreal Engine.

Dynamic lighting was used to visualize different lighting setups, such as time of day, which users could adjust interactively in real time.

We used mesh distance fields, which gave us better performance and automated lighting when runtime lighting changes were needed. This feature unlocked higher-quality cast shadows, approximate global illumination, and approximate ambient occlusion for levels.

A drawback of distance fields is the up-front calculation, which varies per asset and project. Generating the signed distance field for each object can take a long time, depending on model complexity.

Additional trace-bias tuning may also be needed on a per-object/actor basis to achieve a specific distance field quality, which made automation more complex.
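Per-asset tuning like this is scriptable. A hedged sketch using Unreal's Python editor scripting: the function and property names come from UE4's EditorScriptingUtilities plugin and may differ by engine version, and the asset path is a placeholder.

```python
import unreal

mesh = unreal.EditorAssetLibrary.load_asset('/Game/Facility/SM_Rack')

# raise the signed distance field resolution on LOD 0; a higher scale gives a
# finer field and better shadows, at the cost of longer builds and more memory
settings = unreal.EditorStaticMeshLibrary.get_lod_build_settings(mesh, 0)
settings.distance_field_resolution_scale = 2.0
unreal.EditorStaticMeshLibrary.set_lod_build_settings(mesh, 0, settings)

unreal.EditorAssetLibrary.save_loaded_asset(mesh)
```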


Automated - Draft Quality

Created in Unity and Unreal Engine.

This quality focuses on model accuracy and iteration speed. It is made possible by an automated cloud processing pipeline with these predefined elements:

  • Skybox

  • Light rig

  • Basic reflection probes

  • Base color on generic materials

At this quality we could optimize performance much more easily, since the level was lightweight and we didn't need to calculate complex lighting or shaders.
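A rough sketch of dropping those predefined elements into a generated level, written with Unreal's Python editor scripting (the same step had a Unity equivalent). The sky blueprint path is a placeholder, and API names assume UE4's EditorScriptingUtilities plugin.

```python
import unreal

spawn = unreal.EditorLevelLibrary.spawn_actor_from_class

# light rig: a single sun-style directional light
sun = spawn(unreal.DirectionalLight, unreal.Vector(0, 0, 500),
            unreal.Rotator(0, -45, 0))

# basic reflection probe so generic materials read correctly
probe = spawn(unreal.SphereReflectionCapture, unreal.Vector(0, 0, 200))

# skybox: spawn a prebuilt sky blueprint into the level
sky_class = unreal.EditorAssetLibrary.load_blueprint_class('/Game/Env/BP_Sky')
sky = spawn(sky_class, unreal.Vector(0, 0, 0))
```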

Cloud Processing Pipeline

Some goals of the cloud pipeline:

  • Break up large models into smaller chunks

  • Run operations in a container environment

  • Distribute work across multiple processes

  • Use Google Cloud Platform to scale reliably and securely
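As a sketch of how those goals come together, a processing step can be fanned out as a dsub task batch. Everything here (project, bucket, image, and script names) is a placeholder, not our actual configuration.

```python
import subprocess

# chunks.tsv has '--input INPUT_CHUNK' / '--output OUTPUT_CHUNK' columns,
# one row per model chunk; dsub fans the rows out as parallel tasks on GCE.
subprocess.run([
    'dsub',
    '--provider', 'google-v2',
    '--project', 'my-gcp-project',
    '--regions', 'us-central1',
    '--logging', 'gs://my-bucket/logs/',
    '--image', 'gcr.io/my-gcp-project/dcc-worker',
    '--tasks', 'chunks.tsv',
    '--command', 'optimize.sh "${INPUT_CHUNK}" "${OUTPUT_CHUNK}"',
], check=True)
```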

I configured all 3D production components and their software configurations to work modularly in a cloud-distributed pipeline that enabled automation for customers.

This included configuring software environments to work with the automated tasks I created in the digital content creation tools used in the pipeline.

I also created the license servers and deployments for all third-party software, with corresponding network configurations that conform to security guidelines.

This also included creating GPU virtual workstations and configuring each VM environment to verify that workflows would run on cloud hardware before pushing to production.

I also created a deployment strategy via Kubernetes using Cloud Orchestrate and shared storage solutions, and helped improve the tools built to scale workloads. These workstations are now used across different projects at Google and have taught me a lot about what's possible with cloud technologies.

 

Parallel cloud processing

Our pipeline is designed to procedurally take a giant CAD scene, decimate it, cut it into thousands of cubes, and import the result into a game engine.

This is conceptually a dependency system that takes data, runs operations, exports to disk, and continues until the recipe is finished.
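Conceptually, the "cut it into cubes" stage looks like the sketch below: compute the grid of cubes covering the scene's bounds and emit one task per cell. Names and the cube size are illustrative.

```python
import itertools
import math

def chunk_cells(bbox_min, bbox_max, cube_size=10.0):
    """Yield the index and world-space bounds of every cube covering the scene."""
    counts = [int(math.ceil((hi - lo) / cube_size))
              for lo, hi in zip(bbox_min, bbox_max)]
    for ijk in itertools.product(*(range(c) for c in counts)):
        lo = tuple(mn + i * cube_size for mn, i in zip(bbox_min, ijk))
        hi = tuple(v + cube_size for v in lo)
        yield ijk, (lo, hi)

# every cell becomes an independent recipe step: cut the geometry inside the
# bounds, export it to disk (USD/FBX), and hand it to the next operation
cells = list(chunk_cells((0, 0, 0), (120, 40, 80)))
```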

Cloud Processing Pipeline Technologies 

 

  • USD: scalable, non-destructive interchange format between stages

  • Unity / Unreal Engine: runtime clients

  • FBX SDK: interchange format used by legacy CAD software and game engines

  • dsub: open-source command-line tool (on GitHub) to run batch computing tasks and workflows in a cloud environment

  • PiXYZ: tessellation and decimation solver

  • Google Cloud Platform: Google Cloud Storage (GCS) and Google Compute Engine (GCE) to drive our cloud processes

  • Maya: cleanup, dispersal, chopping, and combining of mesh data

  • Docker: run operations in container environments on GCP

Creative Direction
Cloud Development
Design
Development
Pipeline
Technical Direction
UX Prototyping
Interaction Design
