A few years ago we started our own VR product for Location-Based Entertainment (LBE). But we were not a game studio: we had no staff game designers, artists, or modellers, and no multi-million-dollar budget to build a full-featured game studio. However, we were a team of experienced software engineers with a goal to create a quality VR product that would excite both our clients and us.
The early successful VR LBE products were based on modern 6DOF (six degrees of freedom) VR technologies and exploited the fact that clients had only tried low-quality 3DOF cardboard VR or other outdated VR arcades (or had not tried VR at all). Frankly, that meant it was easy to impress people once they started playing your game, but harder to advertise it and actually persuade them to try. There was also a requirement to keep the VR experience short - no longer than 30 minutes. The VR hardware (especially the computer backpack) was too heavy for those trying VR for the first time.
These two points (easy to impress, short session length) helped decrease the cost of developing a quality VR LBE product. You didn’t have to be a full-featured game studio and create a long, complex game to deliver a great experience. We think VR LBE still preserves these traits today.
We have experience developing both types of VR LBE products: full-immersion games and active e-sport games. Each type has its own challenges and expectations, but there are of course some common difficulties, especially for a less experienced, not fully staffed game studio. We discuss them below.
In full-immersion games it’s important to create a rich atmosphere using photo-realistic visuals, quality audio effects, and haptic feedback. Ideally, such content should be developed in-house, which requires artists and modellers to create high-quality assets. But we chose to focus on the software and effects side to explore modern VR's potential. We purchased most 3D models from the Unity Asset Store and only hired freelancers for specific parts. This put the quality of the final product at risk, but was far cheaper and faster than starting from the ground up.
One of the main aspects of immersion is high-quality models. Players must be able to look at objects such as guns, interior parts, surfaces, and NPCs at very close range - literally right before their eyes. That means models need enough polygons and should use PBR (physically based rendering). As a result, full-immersion games require powerful computers - or, on a mobile VR platform like the Oculus Quest, non-trivial optimization techniques and huge development effort.
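To illustrate what "enough polygons" means in practice, here is a minimal sketch (our own helper, not part of any engine or asset-store tooling) that counts the vertices and triangles in Wavefront OBJ text, so a purchased model can be checked against a per-scene triangle budget. The function names and the budget value are assumptions you would tune for your target hardware.

```python
def obj_stats(obj_text):
    """Count vertices and triangles in Wavefront OBJ text.

    Faces with more than three vertices are counted as if
    fan-triangulated: an n-gon contributes n - 2 triangles.
    """
    vertices = 0
    triangles = 0
    for line in obj_text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":        # geometric vertex (not "vt"/"vn")
            vertices += 1
        elif parts[0] == "f":      # face: triangle, quad, or n-gon
            triangles += max(len(parts) - 3, 0)
    return vertices, triangles


def fits_budget(obj_text, budget_triangles):
    """True if the model's triangle count is within the given budget."""
    _, tris = obj_stats(obj_text)
    return tris <= budget_triangles
```

For example, a single quad face counts as two triangles, so a cube exported as six quads reports 12 triangles.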
In active e-sport games players move quickly and interact with each other and the game environment at very high speed. The goal is to get the highest score and stay ahead of your competitors. This lowers the requirements for 3D models - they don’t have to look realistic, and you can even get away with low-poly models for some content.
It’s much more important to tune the game balance, dynamics, and play-area geometry. But that doesn’t mean you can just throw a bunch of primitives into the scene and skip 3D models and artists - a smaller level of detail doesn’t entail lower quality. Due to budget limitations, we didn’t hire staff modellers and instead bought 3D models from Synty Studios. We think they produce the best low-poly models, usable even on low-power mobile VR hardware like the Oculus Quest without significant optimization effort. Of course, if the performance cost were lower, we would prefer high-quality photo-realistic assets for e-sports as well.
Whenever you buy 3D models in a public store like the Unity Asset Store, you run the risk of finding the same assets in a competing product, especially if you use something distinctive, like Synty Studios assets. However, we think this is less of an issue in VR LBE games than in PC and console games - VR LBE and the general gaming market have different target audiences. The bigger problem is that in VR we can use only a small subset of the vast model selection in public stores. Is there a way to solve this without hiring 3D artists and modellers? Let’s figure it out.
Let’s imagine you are walking through some atmospheric place and suddenly come up with an idea for a VR quest based on the location you’re in. You capture the location in a video using your smartphone. At home you open the editor of your choice and import the video from the cloud. In a few minutes you have a ready-to-use 3D scene, perfectly capturing the location you were in.
Then you select the half-rotten couch and rotate it upright - an NPC will wait for the players here. You notice there are cranes demolishing another building in the distance. You decide they should be a bit closer, so you select and move them. All the crane movements have been captured as well.
You launch the demo using your mobile VR device. Here you can move around freely and plan future quest scenarios.
Is this achievable today? Obviously not, but we think the technology is getting there - it’s already possible to scan individual objects, buildings, streets, and even whole cities. After some processing, the scanned objects can be used in modern 3D games, VR, 3D printing, and other technical applications. Real-time 3D scanning is used by self-driving cars and robots. Most AAA projects use scanned assets to achieve excellent graphics quality and reduce model-creation time. But the scanned models still have to be processed by modellers.
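One of the simplest automatic processing steps applied to scanned meshes is decimation. As a toy sketch (not what production pipelines actually use - those favour quadric-error methods), here is vertex-clustering decimation in plain Python: vertices are merged by snapping them to a coarse grid, faces that collapse become degenerate and are dropped. The function name and the indexed-triangle input format are our own assumptions.

```python
def cluster_decimate(vertices, faces, cell_size):
    """Vertex-clustering decimation: merge all vertices that fall into
    the same grid cell of side `cell_size`, then rebuild the faces.

    vertices: list of (x, y, z) tuples
    faces:    list of (i, j, k) vertex-index triples
    Returns (new_vertices, new_faces), usually with fewer elements.
    """
    cell_index = {}   # grid cell -> new vertex index
    sums = []         # running coordinate sums per cell
    counts = []       # number of vertices merged into each cell
    remap = []        # old vertex index -> new vertex index
    for x, y, z in vertices:
        key = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        if key not in cell_index:
            cell_index[key] = len(sums)
            sums.append([0.0, 0.0, 0.0])
            counts.append(0)
        i = cell_index[key]
        sums[i][0] += x; sums[i][1] += y; sums[i][2] += z
        counts[i] += 1
        remap.append(i)
    # Each merged vertex becomes the centroid of its cluster.
    new_vertices = [tuple(s / n for s in acc) for acc, n in zip(sums, counts)]
    new_faces, seen = [], set()
    for a, b, c in faces:
        f = (remap[a], remap[b], remap[c])
        if len(set(f)) < 3:
            continue  # face collapsed to an edge or a point
        key = tuple(sorted(f))
        if key in seen:
            continue  # duplicate face after merging
        seen.add(key)
        new_faces.append(f)
    return new_vertices, new_faces
```

A coarser `cell_size` merges more vertices and drops more faces, trading detail for triangle count - exactly the knob a mobile VR target forces you to turn.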
Can we potentially do away with manual modelling? Probably not, even if the scanning technology starts working perfectly - it’s impossible to scan something which only exists in your imagination. Pure 3D scanning is useless for Sci-Fi and fantasy projects and objects that don’t have real world references. But someday we might have GANs (generative adversarial networks) which can generate 3D models and even full scenes just based on sketches.
This article is the first in a series that will try to answer the following questions:
- How do you get a virtual model of a real object or scene? Is it possible to use only a smartphone?
- How do you use the captured model after the scan? Is it possible to use it on mobile hardware?
How to scan reality
Today there are several ways to scan real world objects:
- Ordinary 2D photos (photogrammetry).
- Structured light solutions.
- Time-of-Flight cameras and lasers.
- Light field cameras.
These approaches will be discussed further in the series, with each article focusing on a single approach. The next article, covering photogrammetry, is available here: “A Scanning Dream: using a camera”.