Illuminarium opened at AREA15 this spring with two interactive shows and delightful eats at the in-house Lumin Café & Kitchen. Since then, a new spectacle entitled “SPACE” has debuted along with a brand new nightlife experience, the Ultra-Lounge at Illuminarium, an after-hours event on Friday and Saturday nights. To get some insight into the interactive world of Illuminarium, we talked with the company’s Chief Technology Officer, Brian Allen.
A Look at Illuminarium’s “SPACE” in Las Vegas
Illuminarium is extremely interactive with visitors. Can you tell us how you make that work and how you create such a multi-sensory environment?
Allen: We’re definitely trying to bring as much reality to the space as possible. I think one of the things that make us unique is the ability to layer technology and to be able to hit all of the senses of those who enter the space.
One of the key learnings throughout both opening Las Vegas and operating Atlanta for a little over a year now has been that the interactivity and things like the haptics really leave an impact on guests, and they tend to talk about those things most often. I think it adds the most impact to the show.
We have been extensively trying to add interactivity to all of our shows and to increase the number of interactions per show. I think that’s the way this kind of immersive attraction content is heading. In the future, most things will be generative, most items will be interactive, and most things should be personalized. The way we approach that is ever-evolving: it’s group interaction and design, when you have a space where people have free agency to move around. I think it creates a lot of opportunities for us to guide people using the design language, and a big part of the interaction design is showcasing, or allowing other people to experience it.
It’s all powered by a game engine. We run most of our interactivity on Unreal Engine, so we develop every generative piece that we do in Unreal, but we have the ability to support other game engines, which makes us platform agnostic. So when other creators come in who are more comfortable on different game engines or different platforms, we can accept those as well.
We have a 3D LIDAR system that allows a point cloud to be generated from the space about 20 times a second. That gives us a complete 3D picture of how people are moving throughout the space in real-time, and that feeds all of our interactivity. We’re just scratching the surface of what we can do with all that data.
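Allen doesn’t detail how those point clouds become interaction data, but the frame-to-positions step could be sketched roughly as follows. The grid size and the naive centroid clustering are illustrative assumptions, not Illuminarium’s actual pipeline:

```python
from collections import defaultdict

CELL = 0.5  # meters; floor-grid size for naive clustering (an assumption)

def cluster_visitors(points, cell=CELL):
    """Reduce one LIDAR frame of (x, y) floor points to rough visitor positions.

    Buckets points into coarse floor cells, then collapses each occupied
    cell to its centroid. A production system would use real person
    tracking; this only illustrates the frame -> positions step.
    """
    buckets = defaultdict(list)
    for x, y in points:
        buckets[(int(x // cell), int(y // cell))].append((x, y))
    centroids = []
    for pts in buckets.values():
        cx = sum(p[0] for p in pts) / len(pts)
        cy = sum(p[1] for p in pts) / len(pts)
        centroids.append((cx, cy))
    return centroids

# One synthetic "frame": two tight clusters of floor points, i.e. two visitors
frame = [(1.0, 1.1), (1.05, 1.0), (4.2, 3.6), (4.3, 3.7)]
positions = cluster_visitors(frame)  # two centroids, one per visitor
```

At roughly 20 frames a second, a loop like this would emit a fresh list of visitor positions every 50 ms for the interactive content to consume.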
In addition to the interactivity, that data is critical to us because it informs future content decisions: we can now determine average dwell time and where people spend the most time in the venue. For instance, I know that, at minute 13, there’s an elephant that comes up to the screen, and from the LIDAR data, if most people are staring at that wall at that moment, we know that was a successful shot. And maybe we should save that wall, or put an interactive in that wall, because it’s getting triggered most often. So it’s a dual-purpose system for us, and we’re excited to take that even further in the future.
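The dwell-time analysis Allen describes could be approximated by accumulating visitor positions over many frames into a floor heatmap. This is a minimal sketch under assumed parameters (1 m zones, the ~20 Hz frame rate mentioned above); the real analytics are surely richer:

```python
from collections import Counter

FRAME_DT = 1 / 20  # seconds per frame, matching the ~20 Hz LIDAR rate

def dwell_heatmap(frames, zone=1.0):
    """Accumulate seconds of visitor presence per coarse floor zone.

    `frames` is a sequence of frames, each a list of (x, y) visitor
    positions. The 1 m zone size is an assumption for illustration.
    """
    heat = Counter()
    for positions in frames:
        for x, y in positions:
            heat[(int(x // zone), int(y // zone))] += FRAME_DT
    return heat

# Three frames: the visitor near (1, 1) lingers, the other appears once
frames = [[(1.2, 1.1), (5.0, 2.0)], [(1.3, 1.0)], [(1.1, 1.2)]]
heat = dwell_heatmap(frames)
hotspot = max(heat, key=heat.get)  # zone with the longest dwell time -> (1, 1)
```

Ranking zones this way is what lets you say, for example, that the wall where the elephant appears at minute 13 is where most guests are standing at that moment.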
Because of the limited source material for “SPACE,” was it a more difficult experience to create?
Allen: It was definitely a more rigorous process, I would say, in terms of the rendering and the pipeline; we had to build it. There are certain pipeline tools on the content creation side that have stayed true throughout all shows, but we definitely had to increase the number of tools and change the approach when we were doing “SPACE.” A lot more rendering horsepower was needed on that show.
But I do think it is our best work to date. I think we’ve taken so many learnings from the previous shows and applied them here: things like the interaction, how the chapters flow together, the room-one approach, the pre-show experience and the soundtrack. It’s quite an amazing show. I purposely shielded myself from watching the whole show until I could experience it with some guests. It was definitely an emotional experience. It was quite fantastic.
And can you tell us a bit about the future of Illuminarium?
Allen: We have an incredible pipeline set up for future locations, and we’re very excited to be able to unlock those. We have leases in major cities across the US, including Miami, Chicago and Dallas, so we’re looking at a bright future where we will be able to unlock those locations. We’re also looking into joint venture opportunities both inside the United States and overseas. The ability to bring Illuminarium to Europe, Asia, South America and Latin America is really exciting to us. We’ve had some good conversations since we’ve been able to get back on planes and travel to those places over the last few months.
In terms of content and spectacles, we just released our first fully interactive and fully generative piece of content. We’re testing it out in Atlanta; it’s a piece of content called “Waking Wonderland.” It’s made by Entertainment One, which is a subsidiary of Hasbro; they’re our Canadian partners. It’s fantastic to be able to push content like that. The show runs entirely off the game engine, on Unreal, and, to my knowledge, on the largest Unreal cluster in the world. It’s really exciting to be able to do that from a technological standpoint, and it gives me a ton of hope for the future of spectacles and immersive attractions. At the end of the day, we’re all creating this for the end user; that’s where I get the most joy out of it: seeing the end user shocked or wowed, or just coming out of the experience really happy.