A New NFT Card Game Was Designed and Written by AI


Could artificial intelligence ever truly replace the work of a human in a creative environment? Entropy Cards creator and digital artist NA1 (who operates under that alias) thinks not. Even with projects like OpenAI’s DALL-E gaining massive notoriety this year as a certified meme printer, NA1 believes that human curation and input remain a crucial step when working with AI on creative projects. This is particularly key in the NFT art world, where collections can exceed thousands of pieces, with the potential to gross millions of dollars in revenue.

For now, at least.

The entire Entropy Cards project is a testament to that potential. Each trading card in this multi-deck collection features art spun up by an AI from text prompts. But, critically, NA1’s AI doesn’t get its text prompts from humans: they come from a separate AI trained on inputs from existing card games.

A battle of two AI minds

This is where the namesake, “Entropy,” comes into play. The two AIs are pitted against each other, which could easily result in an incomprehensible mess: an entropy of meaning. However, thanks to the careful curation of NA1 and the rest of the Entropy Cards team, these AI battles created a comprehensible collection of trading cards that might even sell at your local hobby shop.

Entropy Cards creator NA1 spoke with nft now about the 10-month-long process of putting the collection together, and about his journey toward using publicly available AI tools as a means of creative expression.

Can you tell us a little bit more about you and your team’s past projects?

ZNO and I both have art backgrounds but of very different sorts. NA1 is a new alias for me, but under another name I’ve made a career in building digital artwork mostly of the physical kind — installations, sculptures, and generally things that relate to the body and the built environment. In that capacity, I work mostly with light and light-generating media like displays, projection, and sculptural lighting.

Since about 2015, I’ve been experimenting with machine learning as a way of mapping one domain into another. I’ve spent years working on experiments in training models for supervised image-to-image translation. Some of that stuff I think will come to fruition in another series later this year.

ZNO did the card designs and all the original symbols in the card art. They were responsible for the comic and the website for this project. Ian [the shopkeep featured on the site] was their creation, as was Ithaca Hobby and Off-Track Betting, as was the interface that makes the site so juicy and fun to use. They’re a comics fan and a CCG [collectible card game] fan, and that stuff really comes through.

One of your website’s about pages states that one day the AI tools you used to develop the project will feel “hilariously 2022.” Can you elaborate on that?

That ‘hilariously 2022’ comment was in part a reference to how quickly image-generation techniques are changing. As an obvious example, I created our artworks with a CLIP+VQGAN implementation that has a very particular look to it — it’s a look we feel references, in a dreamlike way, the style of the sort of early-90s collectible card game artwork we grew up with. I spent months on prompt engineering R&D to get at that style, but the stuff still looks GAN-generated to the trained eye. In my mind, GAN [generative adversarial network] is a medium of a certain time and place.

When it comes to the text output, which was predominantly done by [OpenAI’s] GPT-3 with a little RNN [recurrent neural network] bootstrap at the start, there are a few factors that in retrospect will, I think, place the work in a certain period of history as well.

One is that GPT-3 has its own aesthetic. Its outputs feel quite varied, unpredictable, and surprising today, but the same could be said of DALL-E, and I think it would be shortsighted to assume that in just a few years’ time we won’t look at those images or this text output and think, ‘I can’t believe we used to be amazed by those.’


Do you think AI could replicate the steps you took to keep the project from falling into chaos?

That’s a definite yes.

In fact, that sort of thing is already happening on a certain level under the hood on the image side, as the generator and discriminator of the GAN do their own kind of curation, or as CLIP steers the output to satisfy the text prompt criteria. Later image-generation techniques will almost certainly accept and follow more nuanced and subjective guidance in prompting.

I also did a pretty serious amount of work automating the rejection of text outputs from GPT-3 that were unusable. That was to make sure that the results were at least viable as text, and I could absolutely see somebody training a model that looks more strictly for reasonable use of punctuation and coherent grammar and limited repetition, etc. As it is, GPT-3 writes like E.E. Cummings sometimes.
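
NA1 doesn't detail those rejection rules, but a minimal sketch of the kind of viability filter he describes (basic punctuation, limited repetition, sensible shape) could look like the following; the function name and thresholds are illustrative assumptions, not his actual pipeline.

```python
import re

def looks_usable(text: str, max_repeat_ratio: float = 0.3) -> bool:
    """Rough viability checks: does the text end like a sentence, are
    quotes/parentheses balanced, and is it not looping on itself?
    Thresholds are guesses for illustration only."""
    text = text.strip()
    if not text:
        return False
    # Reject fragments that never terminate a sentence.
    if text[-1] not in ".!?\"'":
        return False
    # Reject unbalanced quotes or parentheses.
    if text.count('"') % 2 != 0 or text.count("(") != text.count(")"):
        return False
    # Reject heavy word-level repetition (GPT-3 stuck in a loop).
    words = re.findall(r"[a-zA-Z']+", text.lower())
    if words and 1 - len(set(words)) / len(words) > max_repeat_ratio:
        return False
    return True

# Invented candidate outputs, purely to show the filter in action.
candidates = [
    "It waits, and it waits, and it waits, and it waits",
    "Draw two cards. If either is a Relic, shuffle this card into your opponent's deck.",
]
usable = [c for c in candidates if looks_usable(c)]  # keeps only the second
```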

We’re seeing a lot of lore in the flavor text of these cards. Can you share your creative process on that?

The only hand I had in the lore was selecting options from GPT-3 that had some continuity from generation to generation.

I didn’t do that universally, because GPT is all too happy to just repeat the same thing with tiny variations if you allow it. But if I saw themes developing (characters, modes of gameplay, meta-narrative, in-jokes, or lore, as you mention) beginning to pop out generation after generation in a deck, I let them through in curation.

GPT really just made all this stuff up, and it was more than happy to elaborate when it saw the material coming back in prompt after prompt. I guess it was kind of like that baseline interview from Blade Runner 2049, where they just keep repeating the same thing back and forth to the replicant and checking if it loses it along the way.

I’m really very excited for the time when we’ve revealed enough of these cards for fans to read whole decks in sequence because each one has its own little world going on and they’re all completely different. It’s almost like the AI was writing serialized tiny fiction or poetry across the generations of these decks.

Can you talk about your partnership with Chain/Saw NFT and their role in development?

Frank from Chain/Saw is an old friend of mine from our experimental/noise music days [in NYC]. Frank reached out to me to discuss helping Chain/Saw with another artist’s work last summer, and I came back over the top and pitched him my own idea, which was not what he had in mind.

Chain/Saw really took a risk on us because we don’t have an audience under these names, and we’re starting from scratch in terms of reaching people. We’re not really their normal type of project — they tend to help established artists execute ideas in the NFT space, so the audience comes baked in. But most of their artists don’t really have the digital capabilities to build the final product, and that’s what Chain/Saw brings to the table.

Do you think AIs will fully replace people working in creative industries?

I think in the creative field, AI will soon be an exponentially more powerful tool than anything we’ve had before, but I don’t think it will entirely replace the judgment of a human in the contexts in which it matters most, and almost definitely not in art.

In part, I think that’s because we, as humans, won’t care what an AI has to say in the same way that we care what a human says. As an example: if an AI were to make an opinionated documentary, all on its own, or to write an essay making a case for something, would you give that argument the same weight as if a person made it? I think we’re a ways out from that. It’d be a novelty for sure, but I think there would also be a strong element of ‘but what does that machine know about X issue?’

That’s even more the case for entirely AI-created art. Already I look at people presenting as art a pairing of a prompt and the naked AI output and think, ‘OK, that’s a nice image, but so what?’ The raw output is meaningless without context, and the context still needs to come from a person with a reason, a point of view, or an intention to communicate something.

Until AIs are sentient, they will only be tools, not creators. Once they’re sentient, the type of things they feel compelled to say might not mean much to us. Their art would most naturally be for other AIs, and anything they made for human consumption would be a kind of translation and probably a dumbing-down of their intent.

Can you tell us more about your curation process? What prompts were fed into the AI to get that first deck going?

I bootstrapped this system by generating the titles of the first-generation cards with an RNN trained on titles from other collectible card games. I did that because ‘cold prompting’ GPT-3 with something like ‘This is a fantasy collectible card game card title generator. TITLE:’ doesn’t come up with much that is usable, even when allowed to churn out thousands of options. It needs several examples to set the tone.
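
NA1's actual prompts aren't published, but the contrast he's drawing (a cold prompt versus one seeded with examples) can be sketched with the legacy, pre-1.0 OpenAI completions client that was current at the time; the example titles and parameter values below are invented placeholders, not material from the project.

```python
import openai  # legacy (pre-1.0) client, contemporary with GPT-3

openai.api_key = "sk-..."  # assumed to be configured elsewhere

# A cold prompt ("TITLE:" with no examples) tends to produce little that is
# usable; a handful of genre examples sets the tone instead. These titles
# are invented placeholders standing in for the RNN-generated first batch.
few_shot_prompt = (
    "Fantasy collectible card game card titles:\n"
    "TITLE: Wyrmscale Aegis\n"
    "TITLE: Chant of the Hollow Choir\n"
    "TITLE: Saltmarsh Revenant\n"
    "TITLE:"
)

response = openai.Completion.create(
    engine="davinci",      # a GPT-3 base model of that era
    prompt=few_shot_prompt,
    max_tokens=12,
    temperature=0.9,       # varied, but still anchored by the examples
    stop=["\n"],           # one title per completion
    n=5,                   # several candidates to curate from
)

titles = [choice.text.strip() for choice in response.choices]
```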

For the prompts fed to GPT-3 to create the other card fields, I began the process with a handful of randomly chosen corresponding fields from other CCGs, with the title of the card being created always appended as the dangling end of the prompt. That served to anchor the content of all the other fields to the title already chosen for the card.

As the generations progressed in a deck, I included all of that deck’s previous generations’ card copy in the long and growing prompt, which is, of course, where the recursion and emergent through-lines came from. As there was more material to use from previous cards in the deck being generated, I used less and less material from other CCGs to prompt.

For all later generations, I used GPT-3 to do the titles as well as the rest, and as I proceeded through the process I continued to start all that recursive prompting from earlier in the deck with just one random sample from other collectible card games. That was necessary to keep the voice anchored in the genre. This also served to break up any neurotic repetitions GPT got stuck in.
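
Taken together, a rough sketch of that recursive prompt assembly might look like the following; the field labels, the number of external samples per generation, and the helper name are assumptions for illustration rather than the project's actual code.

```python
import random

def build_prompt(deck_history, external_examples, new_title, generation):
    """Sketch of the recursive prompting scheme described above: early
    generations lean on sample card text from other CCGs, later ones lean
    on the deck's own previous cards, and the new card's title is left
    dangling at the end for GPT-3 to continue."""
    # Use fewer external samples as the deck's own history grows, but keep
    # at least one to anchor the voice in the genre and break up loops.
    n_external = max(1, 4 - generation)
    samples = random.sample(external_examples, k=min(n_external, len(external_examples)))

    parts = samples + deck_history                 # prior cards' copy, oldest first
    parts.append(f"TITLE: {new_title}\nFLAVOR:")   # dangling end to complete
    return "\n\n".join(parts)
```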

Anyone who has played with GPT-3 will attest that it is all too happy to fantasize, hallucinate, improvise, make sophomoric jokes, or otherwise lose the plot, but if you turn the […] parameters down too much to hem that in, it’ll basically spit out verbatim what you gave it, or regurgitate stuff from the internet.

Can you actually play with these cards? Have any attempts been made in-house?

I would absolutely love to try to play with these, but the thing is — there aren’t any rules.

We’ve discussed building an interface to allow collectors to ‘talk to the crazed AI that made the cards’ (a tie-in to the origin story in our comic) to try to get it to explain the rules to them, and that’s still very much on the table.

We’ve also talked about letting collectors contribute their version of a set of rules to a database so people can compare notes and try out different play styles. For now it’ll have to be like ‘Calvinball’ — you’ll have to make up the rules as you go, and you need to remain very flexible.


Not only are there a ton of ‘table-flipping’ cards that suddenly change the rules of the game, or end the game, but there are a bunch that ask you to express very personal things to one another, or escort other players to the bathroom, or involve people who aren’t playing, etc.

It’d be incredible to have a tournament with that and film the results.

Editor’s note: This interview has been edited for clarity and length.



