Generative AI might make interoperable game items more viable
Developers in generative AI are eyeing how the tech could help items transfer between games.
If I were to ask your rank-and-file game developer "would it be possible for me to take a hat from Fortnite into Marvel's Spider-Man?", the response I'd probably get is "no, what is wrong with you, that is technically and artistically infeasible."
And they'd have good reason to say so, especially after three years of blockchain-fueled mania built on the idea of creating game assets that could transfer seamlessly from game to game. It's a feature riddled with problems, and one seemingly dreamed up only to give financial value to digital assets.
And for most developers making tie-in content between games (think about how Final Fantasy XV randomly crossed over with Assassin's Creed Origins), it's just easier to remake a given asset from the ground up.
But let's say for some reason you were a developer still trying to crack that problem. Maybe you're not trying to give value to digital goods; maybe you just played Roblox and thought "hey, it's cool that these items can transfer between different experiences." Or maybe you're interested in a network of games where it's fun for players to have items change form to fit the type of game they're playing--like how animated films will have characters change form when jumping between worlds with different animation styles.
What process might make that feature replicable?
At MIT's Gaming Industry Conference, currently underway in Boston, Massachusetts, generative AI founders Cory Li (founder of Spellbrush) and Hilary Mason (co-founder of Hidden Door) made some decently interesting pitches during a panel on the technology for how it could be used to create new game mechanics.
Of course, this pitch—and other AI pitches like it—would only work if AI toolmakers can address some of the big hanging questions about the technology.
Generative AI can be used for more than speeding up game production
Though it was Li who proposed the idea of using generative AI for interoperable game assets, Mason kicked off this branch of the conversation by quizzing her fellow panelists. Most of the discussions on the panel had been about how to speed up conventional game production. But what unique game mechanics could be built around the technology itself?
To probably little surprise, there weren't a lot of ideas put forward (partly because...no one's made a commercially successful game built on generative AI mechanics). Li, however, pointed out that one idea from the blockchain fever dream looked more viable if you contemplated the use of AI tools.
To start, Li pointed out the fundamental problem with all pitches for interoperable game objects: if you take an asset built for one game and transport it into another, it will look bad. Even if you clean up the technical flaws of such an object, it won't match the other game's art direction.
"Most game directors wouldn't allow it because you'd want to bring a voxel-based sword into a photorealistic game and it just wouldn't work," he said.
A generative AI workflow (probably one that doesn't just reduce everything to that Midjourney mush) might help solve that art direction issue.
Taking Li's thought to a functional end, this feels like a feature you'd want to implement if players had legitimate attachment to items--actual intrinsic value that isn't just monetary in nature (Neal Stephenson had some interesting thoughts about this at DICE last month). The transferability itself wouldn't be what makes the item worth importing; it would be whatever quality (like customization or sentimentality) makes the effort worthwhile.
Another panelist, Unleashed Games founder Irena Pereira, mused that such tools could let developers make game assets that function equally well on any hardware platform. Her pitch wasn't too far from what we've heard from developers like Unreal Engine VP of engineering Nick Penwarden, who've talked to us about the need to make game assets that could serve equal purpose in film/TV production and games.
Though improved tools and features have made it easier to make individual assets perform well on different platforms and in different use cases, it's not a completely solved problem.
It is worth noting that there's a double-edged sword to Li's pitch. In a perfect world, a generative AI tool could be used to turn a voxel sword into a photorealistic blade and keep the same gameplay properties. But based on how generative AI tools work right now (by heavily referencing existing, sometimes ethically dubious datasets), the first iterations of this system might need a blander, less distinctive art style in order to allow this functional overlap.
And plenty of critics of the interoperable object dream have already warned that a mushy, bland art style might be an inevitable outcome of this feature.
Are AI developers trying to solve game development problems, or just save money?
Conversations around generative AI have swung in wild directions in the game development world. Engineering-minded devs and creative folks alike have dreamed about how the tools could genuinely speed up cumbersome processes, while critics have warned that relying on this tech could give executives a chance to cut jobs, or cut artists out of game development by dumping all their work into proprietary datasets.
The MIT-hosted panel...didn't really tackle these concerns. The discussion was driven by enthusiasm over how generative AI tools could speed up conventional game production, or let smaller teams build games that usually take 400+ people to create. Pereira at one point lamented how sub-specialization across departments had ballooned.
"You think you're going to be an artist but there's not just artists [in games]," she said, before rattling off the different art and animation roles that have indeed become quite specialized. "There's this fractioning of the roles on the team and you need every single person in that specialty to be able to accomplish something as basic as making a character swing a sword."
For some context, Pereira was also describing how the time it takes to make that sword swing costs a lot of money, and money is not an infinite resource for developers. The focus on saving money--on making better use of capital--was a very prominent part of the discussion.
It was this point that captured something unusual about the enthusiasm to eliminate "time-consuming" jobs in game development. Though some developers probably do dream about spending less time on mundane tasks, many of those roles demand an incredible amount of technique and precise knowledge. An experienced 3D animator does more than render a realistic sword swing; they know the magic trick to making each swing feel just right.
Pereira's comment made me think about the recently released Dungeons and Dragons movie (bear with me for a second, this will make sense, I promise). Co-director John Francis Daley has spent the last week on Twitter showing off incredible behind-the-scenes footage of how the movie relied on real-world practical effects. They're effects that cost money and rely on talented (and unionized!) labor. It might have been cheaper to hand some of these moments off to the VFX team already making characters like the dragon Themberchaud. But choosing to go practical with characters like Jarnathan made the movie stand out.
"OH, JARNATHAN!" @LegacyEffects #DungeonsAndDragonsMovie pic.twitter.com/1vhgczPdyN
— John Francis Daley (@JohnFDaley) April 2, 2023
I'm sure Pereira wasn't completely downplaying the vital role that specialized game development jobs have in the game-making process, but the fixation on speeding up development to save money does miss a key fact: sometimes you need that time to make something players will remember.