Today we announced our Series C fundraising, which will help us build the next generation of voice-controlled games and entertainment: a multi-platform voice experience that engages millions of players and brings consumers the best emerging capabilities in generative AI and interactivity.
Realizing this vision means solving a novel and complex series of problems and building first-in-the-market capabilities - not just making great, fun games, but expanding what consumers think is possible with voice interaction and AI at the core of our products.
Perfecting Speech Recognition and Low-Latency Response
We believe voice is the most intuitive and human-friendly control interface, and so it is natural that games and entertainment should be accessible with just your voice. Voice interaction is the core of our products, and realizing our vision of dozens of quality titles across multiple home device platforms requires a robust speech recognition capability that works anytime, anywhere, with any player. The journey to perfect this means:
- Blending on-device and network-based speech recognition models to optimize for speed and accuracy across platforms: Alexa devices, TVs, mobile, and more (see the sketch after this list)
- Leveraging multimodal models and fine-tuned Automatic Speech Recognition (ASR) models to support the whole spectrum of human expression, from buying letters in Wheel of Fortune, to singing your heart out on Karaoke Quiz
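To make the blending concrete, here is a minimal TypeScript sketch of how a hybrid recognizer might arbitrate between an on-device model and a cloud model, preferring the fast local result and only waiting briefly for the network when confidence is low. The Recognizer interface, confidence floor, and timeout are illustrative assumptions, not our production API.

```typescript
// Illustrative types: these interfaces are hypothetical, not Volley's production API.
interface Transcript {
  text: string;
  confidence: number; // 0..1
}

interface Recognizer {
  transcribe(audio: ArrayBuffer): Promise<Transcript>;
}

// Hybrid recognition: take the on-device result first for speed, and only consult
// the cloud model when local confidence is low, with a bounded wait so latency
// stays predictable even on slow or flaky networks.
async function recognize(
  audio: ArrayBuffer,
  onDevice: Recognizer,
  cloud: Recognizer,
  opts = { confidenceFloor: 0.85, cloudTimeoutMs: 800 },
): Promise<Transcript> {
  const local = await onDevice.transcribe(audio);
  if (local.confidence >= opts.confidenceFloor) {
    return local; // fast path: the on-device result is good enough
  }

  // Low confidence: race the cloud model against a timeout, keeping the local
  // result as the fallback if the network is slow or unavailable.
  const fallback = new Promise<Transcript>((resolve) =>
    setTimeout(() => resolve(local), opts.cloudTimeoutMs),
  );
  return Promise.race([cloud.transcribe(audio), fallback]);
}
```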
Towards A Multi-Platform Voice Interaction Network
Our vision is to bring our players a flawless voice gaming experience no matter where they are. Our roster of titles on Alexa and smart TVs brings gaming into the heart of the home, and this also extends to mobile. Players' phones ought to be not just gaming platforms in their own right, but also a way to connect and play on TVs and other smart home devices. A connected and seamless cross-platform experience means:
- Solving for seamless voice interaction and game latency even when running on unstable cellular networks
- Sharing networking, model, and business logic across platforms while keeping high-performance, user-interactive code in the native layers (see the sketch after this list)
- Building our core technology infrastructure to enable a "write once, run everywhere" development environment as we expand our games to additional OSs and platforms
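As a rough illustration of the shared-core approach, the TypeScript sketch below keeps game and business logic in one class and pushes platform-specific work behind a thin bridge interface that each platform implements natively. The PlatformBridge and GameSession names, and the toy scoring logic, are hypothetical stand-ins rather than our actual modules.

```typescript
// Hypothetical sketch of a "write once, run everywhere" core with thin platform bridges.

// Each platform (Alexa, smart TV, iOS, Android) supplies its own bridge for the parts
// that must stay native: audio playback, rendering, and platform-specific plumbing.
interface PlatformBridge {
  playAudio(url: string): Promise<void>;
  render(view: { title: string; body: string }): void;
  sendAnalytics(event: string, data: Record<string, unknown>): void;
}

// The game and business logic is written once and runs on every platform.
class GameSession {
  private score = 0;

  constructor(private bridge: PlatformBridge) {}

  async handleCorrectAnswer(transcript: string): Promise<void> {
    // Shared rules engine: the same scoring and flow logic on every device.
    this.score += 100;
    await this.bridge.playAudio("https://example.com/audio/correct.mp3"); // placeholder asset
    this.bridge.render({ title: "Score", body: `${this.score}` });
    this.bridge.sendAnalytics("answer_scored", { transcript, score: this.score });
  }
}
```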
LLMs and AI At The Heart of Gaming
Our vision is that players should experience games with freely conversing, engaging characters and be able to create and experience new works, all within the rules and bounds of a well-designed game and within acceptable latency. Our games will become more interactive and more personalized: recommending games to players based on their preferences, and dynamically generating safe, personalized content such as custom playlists in Song Quiz or custom categories in Jeopardy!. This means that we are:
- Building LLM agents to inject personality, open-world exploration, and natural language understanding across our existing titles and new games, like our AI-hosted '20 Questions', coming soon to Roku TV (see the sketch after this list)
- Working with generative image technologies so users can visualize new worlds they are creating with their voice interactions
- Developing machine learning models trained on voice and audio to interpret our users' prompts regardless of their input device, and to personalize content for players
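For a flavor of what an LLM-hosted game turn could look like, here is a minimal TypeScript sketch of a '20 Questions' host that keeps personality and game rules in a system prompt and treats each player utterance (arriving from ASR) as a conversational turn. The LlmClient interface and prompt wording are illustrative assumptions, not a description of our production agents.

```typescript
// Hypothetical LLM client; a stand-in for whichever model provider is actually used.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

interface LlmClient {
  complete(messages: ChatMessage[]): Promise<string>;
}

// Keep the character's personality and the game's rules in the system prompt so the
// model stays inside the bounds of a well-designed game.
const HOST_SYSTEM_PROMPT = `
You are the witty host of a voice-controlled 20 Questions game.
Answer only "yes", "no", or "sometimes", plus one short playful remark.
Never reveal the secret object until the player guesses it or runs out of questions.
`.trim();

class TwentyQuestionsHost {
  private history: ChatMessage[];

  constructor(private llm: LlmClient, secretObject: string) {
    this.history = [
      { role: "system", content: HOST_SYSTEM_PROMPT },
      { role: "system", content: `The secret object is: ${secretObject}` },
    ];
  }

  // Each player utterance becomes a turn in the conversation; the reply is spoken
  // back to the player through the voice platform.
  async respond(playerUtterance: string): Promise<string> {
    this.history.push({ role: "user", content: playerUtterance });
    const reply = await this.llm.complete(this.history);
    this.history.push({ role: "assistant", content: reply });
    return reply;
  }
}
```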
Helping Us Build the Future of Gaming
In the months and years ahead, we will be scaling our teams in Engineering, Product, and other functions to build this future. If you are inspired by these challenges and want to be at the forefront of a new chapter in gaming and entertainment, we encourage you to check out our open roles and see where you might fit into our growing team. We are determined to build a diverse and collaborative team across all our functions, and we particularly encourage candidates from diverse and non-traditional backgrounds to explore our open roles.
Brian Ng is a Senior Software Engineer at Volley, working on AI game experiences. Alex Ivlev is a Senior Mobile Engineering Manager at Volley, working on our mobile and cross-platform capabilities.