DeepMind Wants to Reimagine One of the Most Important Algorithms in Machine Learning
In one of the most important papers this year, DeepMind proposed a multi-agent structure to redefine PCA.
I recently started a new newsletter focused on AI education that already has over 50,000 subscribers. TheSequence is a no-BS (meaning no hype, no news, etc.) AI-focused newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers and concepts. Please give it a try by subscribing below:
Principal component analysis (PCA) is one of the key algorithms in any machine learning curriculum. Initially created in the early 1900s, PCA is a fundamental algorithm for understanding data in high-dimensional spaces, which are common in deep learning problems. More than a century after its invention, PCA is such a key part of modern deep learning frameworks that very few question whether there could be a better approach. Just a few days ago, DeepMind published a fascinating paper that looks to redefine PCA as a competitive multi-agent game called EigenGame.
Titled “EigenGame: PCA as a Nash Equilibrium”, the DeepMind work is one of those papers that you can’t resist reading based on the title alone. Redefining PCA sounds ludicrous. And yet, DeepMind’s thesis makes perfect sense the minute you dive deep into it.
In recent years, PCA techniques have hit a bottleneck in large-scale deep learning scenarios. Originally designed for mechanical devices, traditional PCA is formulated as an optimization problem that is hard to scale across large computational clusters. A multi-agent approach to PCA might be able to leverage vast computational resources and produce better optimizations in modern deep learning problems.
To reinvent PCA as a multi-agent game, DeepMind needed some basic rules which, ironically, came from PCA itself. The most important contribution of DeepMind is to reformulate PCA as the problem of finding a Nash equilibrium in a suitable game. The game itself consists of eigenvectors which capture the relevant variance in a dataset. Each player controls an eigenvector and can increase its score by explaining variance within the data. However, the players are penalized if they are too closely aligned with other players. From that perspective, each player balances maximizing the variance it explains against minimizing its alignment with the players ahead of it.
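The utility described above can be sketched in a few lines of numpy. This is an illustrative reading of the paper's setup, not DeepMind's implementation: player `i` controls a unit vector `v_i`, earns the variance it captures under the covariance matrix `M`, and pays a penalty for aligning with the players that come before it.

```python
import numpy as np

def utility(i, V, M):
    """Illustrative EigenGame-style utility for player i.

    V is a (d, k) matrix whose columns are the players' unit vectors;
    M is the (d, d) data covariance matrix.
    """
    vi = V[:, i]
    reward = vi @ M @ vi  # variance explained by player i
    # penalty for aligning with players j < i (earlier in the hierarchy)
    penalty = sum((vi @ M @ V[:, j]) ** 2 / (V[:, j] @ M @ V[:, j])
                  for j in range(i))
    return reward - penalty
```

When the players' vectors are exactly the eigenvectors of `M`, every cross term `v_i' M v_j` vanishes, the penalties disappear, and each player's utility is simply the eigenvalue it owns.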
DeepMind’s EigenGame can achieve a Nash equilibrium if all players play optimally, each trying to maximize its own utility. In the following figure, you can see how EigenGame guides each player from the empty circles to the optimal solution highlighted by the arrows.
Image Credit: DeepMind
The multi-agent nature of EigenGame means that the optimization problem can be distributed across large computation clusters. Each player can compute its own update on an individual device, and the results can then be aggregated through the game dynamics.
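The per-player structure is what makes the distribution possible: each player only needs its own vector, the covariance, and the vectors of the players ahead of it. A minimal single-machine sketch of this dynamic (the step size, iteration count, and data are illustrative choices, not values from the paper) runs each player's gradient ascent step on the unit sphere and recovers the top eigenvectors:

```python
import numpy as np

rng = np.random.default_rng(0)
# toy data with distinct variances so the eigengaps are clear
X = rng.normal(size=(500, 4)) * np.array([3.0, 2.0, 1.0, 0.5])
M = X.T @ X / len(X)                      # covariance matrix

k, d = 3, M.shape[0]
V = rng.normal(size=(d, k))
V /= np.linalg.norm(V, axis=0)            # players start on the unit sphere

for _ in range(2000):
    for i in range(k):                    # each step could run on its own device
        vi = V[:, i]
        grad = 2 * M @ vi                 # ascent on variance explained
        for j in range(i):                # penalty pushes v_i away from earlier players
            vj = V[:, j]
            grad -= 2 * (vi @ M @ vj) / (vj @ M @ vj) * (M @ vj)
        grad -= (grad @ vi) * vi          # project onto the sphere's tangent space
        vi = vi + 0.01 * grad
        V[:, i] = vi / np.linalg.norm(vi) # stay on the unit sphere

# compare with the exact top eigenvectors (signs are arbitrary)
w, U = np.linalg.eigh(M)
top = U[:, np.argsort(w)[::-1][:k]]
print(np.abs(np.sum(V * top, axis=0)))
```

Each printed value is the absolute cosine between a player's vector and the corresponding exact eigenvector, and all of them approach 1 as the game converges.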
Image Credit: DeepMind
EigenGame represents one of the most notable examples of designing machine learning problems as multi-agent game systems. Reformulating PCA as a multi-agent dynamic is fascinating, but the principles certainly extend to many other optimization problems in machine learning.
Original. Reposted with permission.
- Beyond the Nash Equilibrium: DeepMind Clever Strategy to Solve Asymmetric Games
- DeepMind’s MuZero is One of the Most Important Deep Learning Systems Ever Created
- Matrix Decomposition Decoded