Generative Adversarial Neural Networks: Infinite Monkeys and The Great British Bake Off

Adversarial Neural Networks are oddly named since they actually cooperate to make things.



By Freya Rajeshwar, Recognant


If you had an infinite number of monkeys typing at keyboards, could you produce Shakespeare? Yes, eventually. But how would you know once they’d typed Shakespeare? Now there’s the rub.

In this example, the monkeys are what are called Generators in AI, and the English student who checks their work to see if they have written Shakespeare (or anything good) is called a Discriminator. These are the two components of a Generative Adversarial Network.


Cooks in the Kitchen:

I am obsessed with the TV show “The Great British Bake Off.” Let’s say I made it my life goal to win GBBO (with a little help from my AI) and earn that coveted glass cake stand and flower bouquet. We have a whole week to practice for the first round, and so we are going to develop the greatest cake recipe in the history of the world (I don’t do anything half-assed).

We can agree that we intrinsically know what tastes good or awful. You bite into a cake and think to yourself, “That’s really good,” or you spit it out in disgust. Either way, you instantly have an opinion. But can everyone make a cake? Can everyone invent a new flavor or recipe for a cake? That requires more skill. Or does it? I mean, there are a lot of people on the show who are pretty clueless, but even they make it past a few rounds.

Mechanics of the Problem:

Let’s say that for round one of the GBBO competition, our base cake batter must contain Butter, Sugar, Eggs, Flour, Baking Soda, Milk, and Water. You can then add Sour Cream along with Vanilla and Cocoa to change the flavor (let’s call these secondary ingredients). If you add too many secondary ingredients to the base cake batter it changes the texture, so you have limits on the range of what’s possible. For example: 2 teaspoons of Vanilla is fine, but 2 cups is far too much.

Given these constraints, there are a few thousand combinations that could be produced. They wouldn’t all make me a GBBO champion, but I could try to bake them.
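To make that “few thousand” concrete, here is a minimal sketch in Python (mine, not the article’s) that counts the combinations, assuming the three secondary ingredients must together fill exactly 2 cups (96 teaspoons), measured in whole teaspoons:

```python
# Count the possible batters at teaspoon resolution.
# Assumption (mine, not the article's): the three secondary ingredients
# together always total exactly 2 cups = 96 teaspoons.
TOTAL_TSP = 2 * 48

count = 0
for vanilla in range(TOTAL_TSP + 1):
    for cocoa in range(TOTAL_TSP - vanilla + 1):
        sour_cream = TOTAL_TSP - vanilla - cocoa  # whatever is left
        count += 1

print(count)  # 4753 -- "a few thousand," as promised
```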

Generative Adversarial Networks don’t have to be Neural Networks. Any system that pairs a “cook” (called a Generator) with a “taster” (called a Discriminator) is a Generative Adversarial Network. In this example, the cook has a limited range of choices (the aforementioned base ingredients, and 2 cups of “secondary” ingredients), and is going to try to find the “edges” of what is acceptable to optimize for the “best” cake.
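As a minimal sketch of that cook/taster pairing (my own illustration, not anything from the article): a Generator proposes random recipes within the constraints, a Discriminator scores them, and we keep the best. The “ideal” of 2 teaspoons of Vanilla and 36 teaspoons of Cocoa inside the taster is invented purely so the code runs:

```python
import random

TOTAL_TSP = 2 * 48  # 2 cups of secondary ingredients, in teaspoons

def generator():
    """The 'cook': propose a random split of the secondary ingredients."""
    vanilla = random.randint(0, TOTAL_TSP)
    cocoa = random.randint(0, TOTAL_TSP - vanilla)
    sour_cream = TOTAL_TSP - vanilla - cocoa  # whatever is left over
    return {"vanilla": vanilla, "cocoa": cocoa, "sour_cream": sour_cream}

def discriminator(recipe):
    """The 'taster': score a recipe. A stand-in for a real palate --
    it pretends the ideal cake has 2 tsp Vanilla and 36 tsp Cocoa."""
    return -(abs(recipe["vanilla"] - 2) + abs(recipe["cocoa"] - 36))

# Bake 1,000 candidate cakes and keep whichever the taster likes best.
best = max((generator() for _ in range(1000)), key=discriminator)
print(best)
```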

Batter Optimization:

In our cake example, a chocolate cake would have the base ingredients, plus Sour Cream, Vanilla, and Cocoa. We know the base supports 2 cups of secondary ingredients, so the three secondary ingredients (Sour Cream, Vanilla, and Cocoa) can’t total more than that. There always has to be some Sour Cream, or the Baking Soda won’t work. We’d also use teaspoons (1/48 of a cup) as our minimum unit of measure. The minimum unit determines how many tries there could be: if we were to try at a resolution of quarter teaspoons, we’d have many times more combinations to iterate through. I’d be spending a lot more time in the kitchen.

If you have ever tasted a cake with an entire cup of Vanilla Extract, you’d know right away that it’s far too much. If you’ve ever tried a cake with just 2 teaspoons of Cocoa, you wouldn’t even be able to taste it.

In the same way that the “cook” in our analogy (called a Generator or Generative Network) is going to create a cake recipe, the “taster” (called a Discriminator if it is simple, and an Evaluation Network if it is a complex system) will score it.

Reducing Infinity:

The system finds the best result by testing nearly all possible combinations, but once a trend is found to be limiting success, those combinations are “pruned.” In our example, anything with more than 12 teaspoons of Vanilla (1/4 of an 8 oz bottle, https://amzn.to/2KRzYvo) would be eliminated, and anything with less than 1/4 of a cup of Cocoa would also be eliminated. The system would also determine that Vanilla should be tested in 1-teaspoon intervals, but Cocoa in 8-teaspoon intervals, because each “unit” of Cocoa has less incremental impact.

The result is that to find the ideal mix there are 12 possible levels of Vanilla, 11 levels of Cocoa, and whatever is left is Sour Cream. So there are 132 combinations to test (12*11), but this may be reduced as the limits of Vanilla and Cocoa are found -- so that there are probably only about 40 combinations that actually need to be tested.
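A short sketch of that pruned grid search, with the article’s limits hard-coded (the feasibility filter and the counts are mine):

```python
# Enumerate the pruned grid using the article's limits.
TOTAL_TSP = 96                   # 2 cups of secondary ingredients
vanilla_levels = range(1, 13)    # 1..12 tsp in 1-tsp steps (12 levels)
cocoa_levels = range(12, 93, 8)  # from 1/4 cup in 8-tsp steps (11 levels)

grid = [
    (v, c, TOTAL_TSP - v - c)    # (Vanilla, Cocoa, Sour Cream)
    for v in vanilla_levels
    for c in cocoa_levels
    if TOTAL_TSP - v - c > 0     # there must always be some Sour Cream
]
print(len(grid))  # 122 of the 12*11 = 132 grid points survive here;
                  # adaptive pruning would cut this down further
```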

And just like that, suddenly an AI is making me the best baker in Britain.

Learning through Adversity:

In a Generative Adversarial Network, the two sides don’t necessarily have to be AI. An AI could be generating the recipes and humans could be performing the evaluation.

A human could simply bake every possible cake combination and take them all to the judges of the Great British Bake Off to evaluate. In this case, rather than simplifying the problem (pruning out “unacceptable” levels of ingredients), every possible combination would be tested, and the evaluator would be a very annoyed human. The output of that type of process would just be all possible combinations. It would take a lot longer, but ultimately arrive at the same result.

In either scenario, Generative Adversarial Networks are how machine-generated content is generally made, and they work not by being “good” at creation, but by trying lots of combinations and judging them via constant feedback.

Generative Adversarial Networks often work best when the Discriminator and the Generator are not the same type of AI, or possibly when the Discriminator is not AI at all.

A Few Examples of Real-Life Adversarial Networks:

Bots in video games:

Many video games don’t ask what difficulty you want to play at. They simply ratchet the difficulty up and down based on how well you are doing. This keeps your stress level and addiction balanced. In this example, the AI is adjusting the aggressiveness, accuracy, and possibly damage of the bots in the game in response to the player. The Discriminator is likely a simple rubric of “how long is it taking the player to achieve the objective?” and “how often is the player’s health going down?”
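A sketch of what that rubric might look like in code; the thresholds and parameter names are invented purely for illustration:

```python
def adjust_difficulty(difficulty: int, seconds_on_objective: float,
                      health_lost_per_min: float) -> int:
    """A hypothetical Discriminator rubric for bot difficulty (1-10).
    The thresholds are invented for this sketch."""
    if seconds_on_objective < 60 and health_lost_per_min < 5:
        return min(difficulty + 1, 10)  # player is cruising: ratchet up
    if seconds_on_objective > 300 or health_lost_per_min > 25:
        return max(difficulty - 1, 1)   # player is struggling: ease off
    return difficulty                   # stress level is about right

print(adjust_difficulty(5, 45, 2))  # -> 6
```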

Speech Recognition:

If you have ever called your bank and shouted “REPRESENTATIVE” ten times into your phone, hoping to get the computer voice on the other end to patch you through to an actual human, you were likely training an AI. When Siri or Google’s keyboard mishears you and you repeat a variation on your original input, the retry is flagged, and the system learns to “hear” the original input as the repeated input so that it can ultimately get better at speech recognition.
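In code, that feedback loop might look something like this hypothetical sketch (the retry heuristic and all the names are mine; real recognizers are far more involved):

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Utterance:
    audio: bytes      # the raw audio of what the user said
    transcript: str   # what the recognizer thought it heard

# (earlier audio, corrected transcript) pairs flagged for retraining
training_pairs: List[Tuple[bytes, str]] = []

def looks_like_retry(current: str, previous: str) -> bool:
    """Crude retry heuristic (invented for this sketch): the two
    transcripts share most of their words, so the user is probably
    repeating a variation on the same request."""
    a, b = set(current.lower().split()), set(previous.lower().split())
    return len(a & b) >= max(1, min(len(a), len(b)) // 2)

def handle(current: Utterance, previous: Optional[Utterance]) -> None:
    """Flag a retry: teach the system to 'hear' the earlier audio as
    the repeated (presumably corrected) transcript."""
    if previous and looks_like_retry(current.transcript, previous.transcript):
        training_pairs.append((previous.audio, current.transcript))
```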

Forecasting:

Forecasting anything from the weather to the number of attendees at an event to which items will be purchased most uses a Generator to make those forecasts. How well the system did can then be measured by a Discriminator against several metrics for success. That could be accuracy in magnitude, how close the forecast timing was, or any other determiner of how good the system is -- but in all cases this would be a Generative Adversarial Network.
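For instance, a Discriminator for an attendance forecast might be nothing more than an error metric (this sketch and its numbers are mine):

```python
def score_forecast(predicted: float, actual: float) -> float:
    """Discriminator for a forecast Generator: closer to zero is better.
    Here the 'metric for success' is absolute percentage error."""
    return abs(predicted - actual) / actual

# A Generator forecast 480 attendees; 523 showed up.
print(score_forecast(480, 523))  # ~0.082, i.e. about 8% off
```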

Key Takeaways:

Machines make very good revisionist creators. As humans we don’t want to try every combination, but an AI is perfectly happy to loop endlessly, making small tweaks to find every optimization possible. AI is the ultimate perfectionist creator. And while perfection is achieved, there is no creativity -- it is perfection via brute force. It is quite literally monkeys at typewriters, but the efficiency comes from an accompanying Discriminator that says, “That was close, but it doesn’t rhyme,” or “Silly monkey, it’s a play about Hamlet -- not a Ham Sandwich.”

If the Generator simply tries every combination, it is unlikely to ever generate brilliant prose in English (2 million words randomly ordered into a 2,000-word document makes for a lot of random gibberish). If instead the Generator has rules (remember the cake example), it can eliminate many of the iterations. The more variables involved, the more computationally intense generation becomes. This means that an AI for making the winning GBBO cake is likely to be achieved in our lifetime, but an AI replicating Hamlet or Macbeth probably is not.
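Some back-of-the-envelope arithmetic makes the gap vivid (taking the article’s 2 million words as the vocabulary size; the rest is my sketch):

```python
import math

# Cake: a few thousand combinations (see the earlier counting sketch).
cake_combinations = 4753

# Prose: each of 2,000 word slots filled from a 2-million-word vocabulary.
vocabulary, document_length = 2_000_000, 2_000
log10_documents = document_length * math.log10(vocabulary)

print(f"cake search space:  ~10^{math.log10(cake_combinations):.1f}")
print(f"prose search space: ~10^{log10_documents:,.0f}")  # ~10^12,602
```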

 
Bio: Freya Rajeshwar is the Chief Product Officer (CPO) at Recognant. She has almost a decade of experience in Product Management, DevOps, and Analytics. At Recognant, she works on a number of products, including ones to combat human trafficking. Outside of work, she partners with programs to promote Computer Science education for children.

Original. Reposted with permission.
