Putting Together A Full-Blooded AI Maturity Model

Here is a proposed “7A” model that is useful enough to capture the core of what AI offers without falsely implying there is a static body of best practices in this area.

Some people seem to regard artificial intelligence (AI) as a secular faith.

That’s misguided. AI is primarily a software development and engineering discipline. It’s also a longstanding research focus in computer science, the cognitive sciences, and other fields. And, of course, it’s a popular obsession of science fiction authors and other creative artists. But AI is not a religion and it does not have a holy scripture that sanctifies some approaches while denigrating others.

That’s why I grow uncomfortable when tech industry conversations turn to “true AI,” as in this recent Bernard Marr article. These philosophical musings don’t hold much value for AI practitioners who are tuning algorithms to analyze real-time video feeds, recognize faces in a crowd, or guide an autonomous vehicle toward its appointed destination. AI developers primarily want their handiwork to achieve intended results. They are not concerned with whether it approximates some platonic faculty called “intelligence.” And they generally don’t care whether it conforms to someone’s arbitrary gospel of what constitutes by-the-book AI.

AI practitioners don’t need utopian and metaphysical visions of the technology’s manifest destiny. What they do need is a solid maturity model to guide how they use AI to realize business value. Such a model would help enterprise architects focus their AI initiatives. It should provide a useful framework for identifying AI’s potential strategic business impact, assessing an organization’s current AI capabilities, and prioritizing investments in the AI technologies, skills, and processes needed to boost readiness and achieve desired outcomes. It would also help practitioners determine what’s in and out of scope for AI. That latter benefit is absolutely essential in an era when seemingly everything is being thrown into this bucket.

Considering the rapid pace of innovation in this field, it’s not clear where practitioners would find a maturity model that doesn’t inadvertently lock them into older ways of doing AI and thereby bias them against experimenting with newer approaches. Though there may never be any “true AI” in this fast-changing arena, there will always be cutting-edge approaches that early adopters will want to explore as soon as they’re available.

So how can we build a maturity model that’s useful enough to capture the core of what AI offers without falsely implying there’s a static body of best practices in this area?

In crafting an AI maturity model, one approach would be to adopt the “pragmatic AI” framework of “building blocks,” such as the one proposed by Mike Gualtieri of Forrester Research. Conceivably, a planning framework such as this, which organizes AI-enabling technologies into functional layers, could be evolved by adding new “building block” technologies to various layers while refining the scope of what’s already there.

From Gualtieri’s presentation, however, it’s not clear how this layering would translate into a phasing of AI initiatives in real-world contexts. Also, his current framework seems to put the cart before the AI horse, implementation-wise, by placing “deep learning” in level 1 and “machine learning” in the subsequent level 2. Considering that deep learning is in fact a more sophisticated, multilayered form of machine learning, it would make more sense for Gualtieri to reverse their order in his AI maturity model.

Another approach to building an AI maturity model might be to align the model’s layering with the broad capabilities to be achieved from the technology’s predominant real-world applications. Such an approach wouldn’t tie the layers to specific technological enablers, but rather to the abstract benefits to be derived from various established and emerging AI approaches.

In that regard, I recommend that you consider the capabilities framework I introduced in this recent Dataversity article. Though it doesn’t allude to any phasing of AI capabilities in the context of an enterprise architecture, it does provide a high-level framework for identifying the extent to which your current technologies, skills, and processes enable delivery of AI capabilities in any or all of the following areas:

  • Anthropomorphism: Are you building AI apps that can emulate natural human conversation to such a fidelity that they can drive avatars and impersonate flesh-and-blood individuals?
  • Automation: Do your AI apps enable automation of cognitive processes to such a degree that the need for manual attention, judgment, and supervision is greatly reduced or eliminated entirely?
  • Acceleration: Does your AI accelerate cognitive processes far beyond what humans would be able to achieve unassisted?
  • Anticipation: Can your AI anticipate human intentions and reactions to a greater degree through continual iteration of predictive models from fresh training data?
  • Adaptivity: Is your AI able to adapt its cognitive models to fresh data, to interactions with humans, and to changing contexts in order to hone its cognitive skills to a finer degree?
  • Assistivity: Does your AI bring cognitive intelligence into everyday decision-support and other applications through cognitive chatbots and other virtual intelligent assistants?
  • Augmentation: Is your AI able to augment users’ organic powers of cognition, reasoning, natural language processing, predictive analysis, and pattern recognition?

The advantages of this maturity layering are several. First, you can add new capabilities to this list as AI’s use cases and enabling technologies evolve in the real world. Also, it doesn’t imply that you need to deliver all of these capabilities on every AI initiative. And it doesn’t tie any of these capabilities to any particular current or future enabling technology.
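To make this concrete, the seven capabilities above could be turned into a simple self-assessment rubric. The sketch below is illustrative only: the 0-to-5 scale, the investment threshold, and the sample scores are hypothetical assumptions, not part of any standard maturity framework.

```python
# Hypothetical self-assessment rubric for the seven "A" capabilities.
# The scale (0-5) and sample scores are invented for illustration.
CAPABILITIES = [
    "Anthropomorphism", "Automation", "Acceleration", "Anticipation",
    "Adaptivity", "Assistivity", "Augmentation",
]

def maturity_profile(scores):
    """Map each capability to a 0-5 score; unscored capabilities default to 0."""
    return {cap: scores.get(cap, 0) for cap in CAPABILITIES}

def weakest_areas(profile, threshold=2):
    """Capabilities scoring below the threshold are candidates for investment."""
    return [cap for cap, score in profile.items() if score < threshold]

profile = maturity_profile({"Automation": 4, "Assistivity": 3, "Adaptivity": 1})
print(weakest_areas(profile))
```

Because new capabilities can simply be appended to the list, a rubric like this evolves the same way the layering itself does.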

Yet another approach for an AI maturity model would be to assess your ability to implement any or all of the high-level AI methodological “schools” that I discussed in this article from last year:

  • Connectionist: Does your AI practice leverage machine learning, deep learning, and other algorithmic approaches that rely on creating artificial neurons and connecting them in feedforward networks with backpropagation and adaptive weights?
  • Symbolist: Does your AI practice work from existing knowledge patterns, using inverse deduction to fill in gaps by starting with some premises and conclusions and working backwards to acquire missing knowledge through analysis of existing data sets?
  • Evolutionary: Does your AI practice model data-driven analytics on existing ecological patterns, applying to data an analogy of natural selection as it operates on genomes in nature?
  • Bayesian: Does your AI practice model probabilistic patterns using statistical inference to take hypotheses, apply “a priori” reasoning, and then update those hypotheses as more data is seen?
  • Analogizer: Does your AI practice investigate existing proximity patterns by matching data elements to each other, using the “nearest neighbor” principle to give results similar to those of neural network models?

One of the advantages of this maturity layering is that it provides a framework for identifying the established AI methodological paradigms that you could combine in various ways within any project. Your AI staffing, platform, and tool strategies would need to support the “schools” in which you’ve grounded your AI practices.
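To ground at least one of these schools in running code, here is a minimal “analogizer”-style nearest-neighbor classifier, a sketch in plain Python; the toy data points and labels are invented for illustration.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify a query point by majority vote among its k nearest neighbors,
    using Euclidean distance -- the "nearest neighbor" principle of the
    analogizer school."""
    by_distance = sorted(train, key=lambda item: math.dist(item[0], query))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D data set: two small clusters with invented labels.
train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"), ((0.2, 0.1), "a"),
         ((1.0, 1.0), "b"), ((0.9, 1.1), "b"), ((1.1, 0.9), "b")]

print(knn_predict(train, (0.15, 0.15)))  # neighbors are all "a"
print(knn_predict(train, (1.05, 0.95)))  # neighbors are all "b"
```

Note that nothing here is tied to a particular vendor platform, which is exactly why a schools-based maturity layer ages well as tooling changes.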

Conceivably, you could deepen your AI maturity model under any of the building blocks, capabilities, or “schools” discussed above, or in any aspect of AI development, deployment, and management that I haven’t yet touched on in this post. There are many areas in which you might build out the levels of your AI maturity model further.

What I’ve just spelled out is guidance for building a comprehensive AI maturity model with all the bells and whistles. However, you shouldn’t equate such a model with the illusory vision of “true AI.” A comprehensive maturity model should simply help your organization build out an AI roadmap to unlock added layers of value on top of your baseline investments in the technology.

You may never need to implement all these capabilities and methodologies in your particular AI practice. Your organization’s implementation is no less “true” if it’s just at the starting point in every facet of this or any other AI maturity model.