Figuring Out the Algorithms of Intelligence
Marvin Minsky, one of the founding fathers of AI, passed away this year. Among his inventions was the confocal microscope, which we used to take this high-resolution picture of a live brain circuit. Something in these cells allows them to automatically identify useful connections and build useful networks out of information.
By Nathan R. Wilson, Ph.D., Nara Logics.
Data science and knowledge discovery are among the most “brain-like” operations that a company performs, and their practitioners have a unique vantage point on the utility of artificial intelligence. With deep learning now upending AI, it is worth exploring how this powerful class of techniques relates to knowledge and understanding, using our own brain as a gold standard for how information is stored for synthesis and insight.
In Search of the Master Algorithm
Is there a general “process” by which data can be turned into knowledge, or a “rule” for learning rules? Most neuroscientists think so, and so do deep learning researchers. They form two search parties hunting for the self-organizing logic that is the magic key for turning data into knowledge. Both agree that there is something special about the nature of information passed through a general structure that dynamically filters for veracity and novelty. Such a possibility makes it feasible to envision a true “brain” for our data, and thus knowledge at the organizational level. What will our data brain look like?
Inspired by Biology – Data Storage Will Start to Reflect the Natural World
The way we store and interact with data is already changing, becoming more like the “connectionist” models that, after decades of falling in and out of favor in machine learning, may at last be here to stay, converging with neuroscience and other dominant models of information processing in the natural world (genetics, ecology and systematics, immunology, etc.). Data in our machine systems are still stored in rows and columns, but it is their relations to other data (relations that increasingly carry weights) that define the value of each quantum. New tools, storage, and programming methodologies are arising that make it possible for data to be readily connected, both through better curation and through recirculating automation. The resulting dynamism looks less like a fixed circuit and more like an organic system.
From an evolutionary perspective, the relational database structure discovered in the 1970s became the early scaffold for structuring and connecting data, a true breakthrough that now underpins data storage in every industry and whose value is only now starting to be fully appreciated. The difference between now and the future is that these connections are still binary (a foreign key either points to another table or it does not), whereas brain structures, including those produced by deep learning, are “associational” – learning the strength of relatedness between stored concepts.
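To make the contrast concrete, here is a minimal sketch (in Python, with entirely hypothetical names and an illustrative update rule, not any particular database's API) of the difference between a binary foreign-key link and a weighted association whose strength grows with co-occurrence:

```python
# Relational style: a foreign key either points to another row or it does not.
orders = {101: {"customer_id": 7, "item": "widget"}}
customers = {7: {"name": "Ada"}}

# Associational style: each link carries a learned strength that is
# reinforced whenever two concepts co-occur (a crude Hebbian-style rule).
associations = {}  # (concept_a, concept_b) -> weight in [0, 1]

def reinforce(a, b, rate=0.1):
    """Strengthen the association between two concepts on co-occurrence."""
    key = tuple(sorted((a, b)))
    w = associations.get(key, 0.0)
    associations[key] = w + rate * (1.0 - w)  # saturating update toward 1.0

for _ in range(5):
    reinforce("customer:Ada", "item:widget")

print(associations)  # the weight grows with repeated co-occurrence
```

The foreign key records only that a relation exists; the association also records how strongly, and that strength can keep adapting as new data arrives.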
In the future, growing data trees of associations will be increasingly fused (like the unified “data lakes” evolving in advanced organizations). Data records will not be duplicated across many different places and fragmented; rather, different places will connect to the same record in different ways. This seems to be how biology has mastered information: your brain maintains a master record for each concept (such as the “Halle Berry” cells in one famous study), and that record is accessed and retrieved in many different ways from completely different brain areas (for example, seeing a picture of Halle Berry through the eyes excites the same cells as hearing the spoken name through the ears). Unified representations, of course, once realized, have advantages for efficiency and maintainability.
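A toy sketch of this idea, with hypothetical indexes and concept names: separate “pathways” resolve their cues to one shared master record rather than duplicating it.

```python
# One master record per concept; multiple modality-specific indexes point
# into it instead of copying it. All names here are illustrative.

concepts = {"halle_berry": {"type": "person", "profession": "actress"}}

# Different "pathways" (visual, auditory) index into the same record.
visual_index = {"face_embedding_042": "halle_berry"}
auditory_index = {"spoken_name_halle_berry": "halle_berry"}

def retrieve(pathway_index, cue):
    """Any pathway resolves its cue to the single shared concept record."""
    return concepts[pathway_index[cue]]

# Both cues excite the very same underlying record (object identity holds).
assert retrieve(visual_index, "face_embedding_042") is retrieve(
    auditory_index, "spoken_name_halle_berry"
)
```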
Role of the Data Scientist: Pathways not Manual Updates
Data science is already evolving to be less about one-off or static reports and more about constructing “living” systems for real-time and recurring insights. The role of the data scientist will increasingly be to establish “brain pathways” through which data can flow, synthesize, and conditionally transform to produce new knowledge. As with irrigation in gardening, it will mean carefully positioning and stabilizing high-pressure automated pipes that bring in and transform whole planes of data. And as with a concert, it will mean orchestrating and balancing the levels of these pipes: orthogonal algorithms with differing informational purposes, just as the brain dynamically balances incoming pathways for maximal information gain.
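As one hedged illustration (the stage names and composition helper are assumptions, not any particular framework's API), such a “pathway” can be sketched as a chain of small, composable stages through which records continuously flow:

```python
# A pathway as a chain of streaming stages rather than a one-off report.
# Stage names and the composition helper are hypothetical.

def clean(records):
    """Filter out records that would poison downstream stages."""
    for r in records:
        if r.get("value") is not None:
            yield r

def enrich(records):
    """Derive a new field from each surviving record."""
    for r in records:
        r["doubled"] = r["value"] * 2
        yield r

def pathway(source, *stages):
    """Compose stages into a single flowing pipe over the source stream."""
    stream = iter(source)
    for stage in stages:
        stream = stage(stream)
    return stream

raw = [{"value": 1}, {"value": None}, {"value": 3}]
for record in pathway(raw, clean, enrich):
    print(record)
```

Balancing the “levels” of several such pipes, in this picture, would amount to weighting how much each parallel pathway contributes to the synthesized result.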
As a result of such work, companies and their knowledge stores will increasingly qualify as intelligent entities, and the accuracy with which they make decisions (perhaps a rudimentary “GI”, or general intelligence, score) will reflect their culture and their methods for indexing and promoting connections between data.
Future Workflows: “Bottom Up” vs. “Top Down”
Managing a business in an increasingly volatile world is clearly becoming less about manually directing or “hard coding” specific initiatives and more about creating the adaptive conditions in which initiative can flourish – a resilience that evolved intelligence discovered early in its inception. Similarly, rather than defining a single high-level “objective” or “objective function” in advance (as with current “top down” methods of data science) and searching for factors that help optimize it, intelligence in the organization will increasingly be constructed in a “bottom up” fashion, where all signals are sensed but only select ones are responded to.
Processing networks will synthesize primary signals into higher-order representations, with an emphasis on concentrating veracity and novelty. Attentional mechanisms will be triggered by the arrival of especially veridical or novel signals, and offline spotlights (“data dreams”) will “probe” for new combinations that could give rise to the same. Most of all, rather than a sequential set of processing stages as in current workflows, it will be about many parallel updates to the “state” of the frame that, over time and with organizational experience, approximate the truth better and better.
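One way to picture this bottom-up gating, as a rough sketch rather than a prescription: every signal updates a running estimate of what is normal, and attention fires only when a signal is sufficiently novel. The statistics and threshold below are illustrative assumptions.

```python
import math

class NoveltyGate:
    """Sense every signal; respond only to sufficiently novel ones."""

    def __init__(self, threshold=2.0):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.threshold = threshold  # attention threshold, in std deviations

    def observe(self, x):
        """Update running mean/variance (Welford) and flag novel signals."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        std = math.sqrt(self.m2 / self.n) if self.n > 1 else 0.0
        return self.n > 1 and std > 0 and abs(x - self.mean) > self.threshold * std

gate = NoveltyGate()
for signal in [10, 11, 10, 9, 10, 42]:  # 42 should capture attention
    if gate.observe(signal):
        print("attend to:", signal)
```

The routine signals pass through unremarked; the outlier alone triggers the attentional response.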
Beware the Black Box – the Chance for Curation
The “mixed blessing” reality is that we are about to be immersed in a much more complex world, awash with automated agents that are constantly adjusting our data like rogue Rosie robots. At the heart of this system needs to be a methodology for direct human interplay and oversight. Machines will make mistakes. Individual dependencies will need to be added or adjusted. Systems where the human cannot be the final adjudicator will be bypassed, while systems that allow direct and facile input from experts will expand and advance.
The natural implementation will be one where massive streams of data pass through the nexus and interact, but a human can observe what is going on, see what the connections mean, and perform targeted updates. The data library will be a teeming garden, but one that we will continue to trim and fertilize to maximize the health of our knowledge and cultivate truly unique and beautiful insights.
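A simple sketch of such human adjudication (all names and the pinning rule are hypothetical): automated agents adjust connection weights freely, but a curator's decision pins an edge so that machines can no longer override it.

```python
# Machine updates flow freely until a human curator pins a connection,
# making the human the final adjudicator. Names are illustrative.

connections = {("aspirin", "headache"): {"weight": 0.8, "pinned": False}}

def machine_update(edge, new_weight):
    """Automated agents adjust weights, unless a human has pinned the edge."""
    entry = connections.setdefault(edge, {"weight": 0.0, "pinned": False})
    if not entry["pinned"]:
        entry["weight"] = new_weight

def curator_override(edge, weight):
    """A human expert sets and pins a connection; machines may not change it."""
    connections[edge] = {"weight": weight, "pinned": True}

machine_update(("aspirin", "headache"), 0.3)    # allowed: edge is unpinned
curator_override(("aspirin", "headache"), 0.9)  # expert adjudication
machine_update(("aspirin", "headache"), 0.1)    # ignored: edge is pinned
print(connections)
```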
Bio: Nathan R. Wilson, Ph.D., is a scientist and entrepreneur focused on actualizing powerful new models of brain-based computation. After years at MIT working towards the mathematical logic of neural circuits, Nathan co-founded Nara Logics, a Cambridge, MA artificial intelligence company developing “synaptic intelligence” that automatically finds and refines connections across data for recommendations and decisions within enterprises. Nathan holds many patents in AI and his research has been featured in Nature, Science, PNAS, and the MIT Press.