Where have we been, where are we now, and where are we headed?

[Image: KYield UML diagram, 2009]

Where have we been?

I’ve written this in first-person format so the reader can see it through my eyes from the trenches, including observations, conclusions, and questions. Looking back over time, it helps me to revisit milestone articles I’ve written and read.

Although my first published paper on our AI R&D was in 2002, I’ll start by looking back at a futuristic scenario on the American healthcare system published in 2010. A decade is a nice round number, and 2010 was also the year my old friend and former business partner Russell Borland passed away unexpectedly. Russell was a close friend from the early 1980s on and had been involved with our journey at KYield since its inception in the mid-1990s, so losing him was a shock.

I emailed the healthcare scenario to Vint Cerf, who asked me if he could share it — the paper was on the web, so of course I said yes. The next thing I knew, enormous numbers of downloads were occurring (don’t underestimate Vint’s network, or Google’s). We stopped counting at several million views from healthcare institutions all over the world, and that’s just on our site (others have republished the paper on the web without permission).

Healthcare has been excruciatingly slow to deploy even minor machine learning applications, but the massive industry cluster is finally making progress. Unfortunately, the majority of focus is still limited to R&D, insurance, and billing, when we need to focus on systemic reform and patient empowerment. I don’t believe I’ve ever shared this in public, but the main character in the scenario was loosely patterned after my father, who died in 2007 from complications of diabetes after a long illness.

2012 was the first year I became convinced that our KYield OS was technically viable, due to a data test on one of the largest financial networks through a mutual vendor. The 2012–2014 period was still early in the commercialization process, primarily limited to supercomputers and OEM, not distributed as I had envisioned from day one.

It has been a wild ride for the industry and markets in the nearly six years since my article about fear of AI was featured at Wired. Although widely read, the article apparently had little impact, as stories hyping an AI apocalypse have appeared regularly ever since, undoubtedly due to the volume of clicks they attract. 2014 was the same year Google acquired DeepMind for a reported $500+ million. The amount was surprising to many given the stage of the company, no known customers or revenue, and a mission more aligned with basic science than business. Indeed, speculation about friction between DeepMind’s research-oriented mission and the associated losses at Alphabet has been increasing, supported by some evidence.

Another major investment occurred later in 2014 that took many by surprise, this time in the form of a syndicated corporate investment of $103.5 million in Sentient, led by Tata. A few months earlier I had received an unsolicited phone call from the same unit at Tata, which was the first of many from a variety of companies and industries. These deals were noteworthy within the trend of sharply increased investment in AI companies: $1.147 billion was invested in 2013, and by 2018 venture capital (VC) investment in AI had reached $9.334 billion. Investment slowed in 2019, particularly in China, which briefly surpassed the U.S. in VC, though as is often the case a few very large investments can distort averages.

In February 2015, DeepMind published a notable paper in Nature titled “Human-level control through deep reinforcement learning”. Senior management teams were beginning to take notice, particularly in areas deemed vulnerable to disruption.

In 2015 I wrote an article on recent trends in AI algorithms as part of a series for an applied AI column at Computerworld, with the participation of Jürgen Schmidhuber, Yoshua Bengio, and Sepp Hochreiter, among others. As has been the case throughout his career, Jürgen was focused on achieving superintelligence by empowering machines to learn to learn (metalearning). His efforts have since included becoming a co-founder and chief scientist of NNAISENSE. Sepp was getting back into algorithmics and was working on reinforcement learning, which would manifest as RUDDER (Return Decomposition for Delayed Rewards). Yoshua was working on unsupervised learning and reinforcement learning, which would lead to co-founding Element AI. Element has raised $257 million since its inception, including from Microsoft, the Gouvernement du Québec, and McKinsey & Company, among many others. Investment has certainly expanded enormously, but how much of it is sustainable? The jury is still out.

The most popular article with CEOs of large organizations I’ve published in recent years was in 2017, on the need for a new type of operating system enhanced with AI to survive when facing aggressive competitors (adapt or be displaced — your choice). Many of the Fortune 100 companies have decided to build custom AI systems internally from scratch. In several that I’ve invested time with, redundancy and waste falls in the 20–40% range and is increasing, which raises serious questions for directors of the world’s leading organizations.

Most of these efforts appear to be old-fashioned turf protection in action, not prudent decision making toward the mission of the organization. Whether anyone wants to admit it or not, the majority of functions within the enterprise are universal. Since our systems technology allows us to tailor automatically to specific needs, many of the benefits of custom systems development can be achieved while also enjoying much of the value offered by commoditization, in a single system — an important innovation not yet fully understood even in the top tier. It took decades to begin to overcome the IT commoditization paradox, which has transferred massive wealth globally at the expense of the nations that invested in the R&D and invented the technologies.

Where are we now?

Twenty AI companies have raised unicorn-sized funding rounds in the past 12 months. Autonomous vehicles continue to see very high levels of investment and testing; the total is now reported to be over $100 billion of committed capital across 40+ companies. Toyota was among the first to react to Waymo with a $1 billion investment a few months after my first letter to their CEO, a commitment that rapidly grew to $2 billion and is now at $50 billion. Each company learned a similar lesson: the first part of the process is the easiest and least expensive, but the further one goes toward full autonomy, the more difficult and expensive the process becomes. A few have also discovered that the amount of redundancy increases as well. Most automakers have wisely decided to partner by investing in smaller companies focused on nothing else. Those companies can provide a more optimal environment and culture to get the job done, free from large-company turf battles and other conflicts, and can provide more effective financial incentives through stock options, which are rare in mature companies. In my experience and observation, these types of efforts are best accomplished by small elite teams free from other burdens or responsibilities, typically at a fraction of the cost.

Another intensive area of investment and adoption is of course voice bots, or digital assistants. Like autonomous cars, the first investments were made decades ago in highly experimental efforts that were primitive by today’s standards. Siri was among the most publicized survivors, enjoying significant DARPA investment in R&D over a long period, yet it still did not have a mature product when the company was sold to Apple. We built and operated a small digital assistant project in our lab during the late 1990s called “Lookout!” — a personalized scout of sorts. Cutting edge at the time, Lookout! was severely limited by component technology. Twenty years later, Amazon’s Alexa surpassed 100 million units sold, including partners and OEM — an impressive number. More interesting to me is how Amazon has integrated Alexa into the rest of its networked business empire, which has raised serious concerns about privacy and security, even in the Washington Post, which is owned by Jeff Bezos.

Autonomous warfare is one of the most concerning developments in recent years. Paul Scharre wrote a good book on the topic titled Army of None, which is well worth your time. Paul has literally been in the trenches of warfare for decades and has studied the issue carefully, including its ethics. Although I share Paul’s concerns about autonomous weapons, most of my expertise is focused on enterprise-wide AI systems and the implications thereof. China’s efforts to integrate the two are troubling, to say the least. Last January I warned of the real possibility that the U.S. could face a billion armed drones. Although the Army has developed good defenses against small numbers, if anyone has developed an effective system to stop a billion drones, I haven’t seen it. A more likely risk is large numbers of drones from terrorist groups, though that could still be in the thousands — a high priority for military labs.

One area of certainty is autonomous cyberwarfare and industrial espionage, which have increasingly occurred at scale for many years. Another certainty is that this technology is sufficiently powerful to rapidly change the global security calculus. Our adversaries are well aware of it, investing heavily, and moving more rapidly than the U.S. in some areas — particularly in competitive enterprise-wide AI systems. The old bureaucracies in the West are now more our weakest link than our greatest strength.

In terms of applied AI in government and business, although much progress has been made, the majority of adoption is still yet to come. The latest McKinsey survey on AI adoption is consistent with our experience, revealing a 25% increase in adoption under the broad umbrella, but the gap between leaders and laggards continues to grow, particularly in enterprise-wide systems.

From 2015 to the end of 2018, the vast majority of efforts in AI were focused on machine learning (ML) and deep learning (DL). Analysts were advising managers to start with small projects, learn slowly, and then expand incrementally. Unfortunately, while small ML/DL projects have been successful for harvesting low-hanging fruit, the leaders in AI who spend vast sums on consulting were apparently receiving much different advice — they were going bold, very bold. Unlike traditional software projects, competitive AI systems must be very well designed from inception. Large AI systems have more in common with designing a power plant or a spaceship than with small software projects. Small ML projects don’t magically morph into AI systems. For many companies in the second and third tiers, starting small was terrible advice, as the leaders in AI systems did just the opposite. However, it’s also true that a few went bold and missed big.

In 2019 we finally saw the mainstream turn, as the world began to notice that AI leaders were pulling rapidly ahead of everyone else. Market leaders in every industry are now investing in ML and DL. 53% of global data and analytics decision makers claim to have implemented some sort of AI system. Most of these are ML projects, not AI systems. Very few have adopted anything close to our KYield OS, which is an enterprise-wide AI OS in distributed form.

Robotic Process Automation (RPA) has been growing very rapidly (63% in 2018), particularly when compared to enterprise software as a whole, which is saturated. RPA is a bit like ‘automation for dummies’ — basic technology, low risk for customers, and rapidly scalable by repeating small steps on widely installed systems such as logistics. Several of the RPA companies are attempting to morph into AI companies with excessive investment. We’ll see how it goes, but call me a skeptic. Intelligent Process Automation (IPA) is the new new thing — sort of the smarter sibling of RPA that goes along for the ride, learns as it goes, and calculates challenges for the others. IPA essentially describes in generic form some of the specific functionality we’ve had in our KYield OS all along. The only thing new about IPA is that analyst and VC firms have adopted the acronym.
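To make that distinction concrete, here is a minimal, hypothetical sketch of my own (all names, fields, and thresholds are illustrative, not from any actual product): classic RPA repeats a fixed rule across records, while an IPA step layers a trained model onto the same workflow so ambiguous cases can be handled, and improved, over time.

```python
# Hypothetical sketch of RPA vs. IPA; field names and thresholds are illustrative.

def rpa_route_invoice(invoice: dict) -> str:
    """Classic RPA: a hard-coded rule repeated across records.
    Fast to deploy and low risk, but brittle at the edges."""
    return "auto_approve" if invoice["amount"] < 500 else "manual_review"

def ipa_route_invoice(invoice: dict, model) -> str:
    """IPA: the same workflow step, but a trained classifier (any
    scikit-learn-style model) decides the cases the fixed rule can't,
    and its decisions can improve as feedback accumulates."""
    if invoice["amount"] < 500:
        return "auto_approve"
    features = [[invoice["amount"], invoice["vendor_risk_score"]]]
    return "auto_approve" if model.predict(features)[0] == 0 else "manual_review"
```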

In science, we’ve seen significant progress in the last few years. One of my personal favorites in algorithms is the improvement in evolutionary algorithms, such as Ken Stanley’s work in neuroevolution (combining neural networks with evolutionary techniques). I met Ken at the Santa Fe Institute (SFI) several years ago when he and Joel Lehman were writing their provocative book Why Greatness Cannot Be Planned, which reveals that society’s obsession with goals is more counterproductive than we realize. As is often the case in practice, we are our own enemies, and would be better off favoring serendipity rather than marching blindly toward objectives that in hindsight may have been misguided. I didn’t realize it at the time, but the authors had partnered in a startup that was later acquired by Uber, providing the foundation for the AI investment at Uber AI Labs (along with Jeff Clune).

I remember sitting out on the back patio at SFI with Ken and Jeff after a presentation by Jeff, primarily listening as they discussed their work — one of those serendipitous moments neuroevolution attempts to exploit. Oddly enough, this was the precise table and chair I had sat in a couple of years earlier during an extensive discussion of the financial crisis with a leading economist (a very good one at that). A nice example of their work can be found in Nature: “Designing neural networks through neuroevolution”. I’ve followed neuroevolution more closely in recent years, becoming a proponent of the methods employed even though they can be computationally expensive and aren’t suitable for many applications. We’ve integrated select methods from their published work into our KYield OS. One must be careful with evolutionary algorithmics in critical systems, as they are by definition unpredictable. Nonetheless, the methods can be powerful for discovering unknown unknowns.
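For readers curious what neuroevolution looks like in its simplest form, here is a toy sketch of my own (not Ken and Joel’s NEAT algorithm, which also evolves network topology): the weights of a tiny fixed network are improved by mutation and selection rather than gradient descent, here on the classic XOR problem.

```python
import numpy as np

# Toy neuroevolution: evolve the 9 weights of a fixed 2-2-1 network on XOR
# by mutation and selection -- no gradients involved.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def forward(w, x):
    W1, b1 = w[:4].reshape(2, 2), w[4:6]   # input -> hidden
    W2, b2 = w[6:8], w[8]                  # hidden -> output
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output

def error(w):
    return np.mean((forward(w, X) - y) ** 2)

pop = rng.normal(0.0, 1.0, size=(50, 9))   # random initial population
for generation in range(300):
    elite = pop[np.argsort([error(w) for w in pop])[:10]]  # keep the 10 fittest
    children = elite[rng.integers(0, 10, size=40)] + rng.normal(0.0, 0.3, size=(40, 9))
    pop = np.vstack([elite, children])     # elitism plus mutated offspring

best = min(pop, key=error)
print(np.round(forward(best, X), 2))  # should approach [0, 1, 1, 0]
```

The unpredictability I mentioned is visible even here: the search discovers whatever weights happen to work, not the solution a designer would have specified.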

Another area of interest that falls outside of traditional machine learning, yet has enormous implications for artificial intelligence, is of course quantum computing. Google recently surprised the world by unveiling an experiment that achieved “quantum supremacy”. The experiment was run on a fully programmable 54-qubit processor named “Sycamore”, and the team reports applying a new type of control knob that turns off neighboring qubits. The results were stunning: the quantum computer performed in 200 seconds a computation estimated to require 10,000 years on the world’s fastest supercomputer. I should note that quantum and classical computing have different strengths and weaknesses, so it isn’t necessarily fair to compare the two; it depends on the specific type of computation. Still, it is an impressive breakthrough, assuming the report is accurate. It just so happens that Google’s supremacy announcement arrived a couple of months after I unveiled our synthetic genius machine, which is an ideal application for quantum computing.
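For the technically curious, the supremacy task was sampling the outputs of random quantum circuits. A toy version of that task can be written in a few lines with Google’s open-source Cirq library; this is my illustrative sketch, not Google’s benchmark code, and at 4 simulated qubits it is trivial for a laptop, whereas Sycamore’s 53 working qubits and deep circuits put the same task out of classical reach.

```python
import random
import cirq

# Toy random-circuit sampling in the spirit of the supremacy experiment:
# layers of random single-qubit gates interleaved with entangling gates,
# followed by measurement. Trivial at 4 qubits; intractable at ~53.
random.seed(42)
qubits = [cirq.GridQubit(0, i) for i in range(4)]
single_qubit_gates = [cirq.X**0.5, cirq.Y**0.5, cirq.Z**0.5]

circuit = cirq.Circuit()
for _ in range(8):  # 8 "cycles"
    circuit.append(random.choice(single_qubit_gates)(q) for q in qubits)
    circuit.append(cirq.CZ(a, b) for a, b in zip(qubits[::2], qubits[1::2]))
circuit.append(cirq.measure(*qubits, key="m"))

# Sample the (classically simulated) output distribution.
result = cirq.Simulator().run(circuit, repetitions=1000)
print(result.histogram(key="m"))
```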

Where are we headed?

Depending on whom one talks to or which article one chooses to read, we could be either entering an AI winter in 2020 or a Cambrian explosion. While it is very clear that quite a few companies are overfunded and overvalued, which has created many individual micro-bubbles, I am not expecting an AI winter. Perhaps a late spring blizzard, but nothing severe. Nor do I see a Cambrian explosion, other than in exceptional individual products and systems. It’s much more likely we’ll see a continuation of the same pattern, with a few surprises along the way.

My reasoning is that while decision makers in large organizations who control much of the distribution in the economy can be tortuously conservative in adopting new technology — to the point of recklessness — few I’ve known are fools. Although AI in the form of superintelligence has yet to be demonstrated, specific functionality in defined areas is surpassing humans and delivering very attractive ROI that can’t be achieved otherwise, particularly when used in hybrid form to enhance human work. Moreover, competitors who have invested heavily in the technology are experiencing significant success and rapidly expanding their lead, which greatly increases risk for many incumbents. The fear of being left at the altar of history is the same motivation that drove adoption in previous generational advances in technology. My old friend Les Vadasz has shared good stories about this dynamic in the early days of Intel. In that regard, I see the PC revolution of the 1970s and ’80s as very similar to the AI revolution today. Organizations that fail to adopt wisely will rapidly fall behind and suffer the consequences.

I expect to see a continuation of occasional breakthroughs, competition for clicks in media (and the associated hype), follow-on funding by investment syndicates protecting earlier investments, and a fair amount of early-stage investment. We will continue to see deflation of overhyped individual companies. Excessive investment and hype lead to market dysfunction, predatory capital practices, enormous waste, and disillusionment among customers and investors, none of which is good for anyone other than the unethical few who practice pump-and-dump schemes. As someone who has been through more than his share of bubble expansions and deflations, I welcome rational exuberance, steady improvement in adoption, enhanced productivity, crisis prevention, and increased security. I also celebrate genuine breakthroughs, even from competitors, as I appreciate how difficult they are and what they can do for society.

If we take Forrester’s AI predictions for 2020, which seem rational, the actual impacts within a few years will still be enormous and very widespread: “25% of the Fortune 500 will add AI building blocks to their Robotic Process Automation (RPA) efforts to create hundreds of new Intelligent process automation (IPA) use cases.”

Let’s assume this 25% in 2020 includes a few healthcare companies with tens of millions of customers — that could easily result in a 10% improvement in healthcare outcomes within a couple of years, which would be an enormous positive impact. The question then becomes which portion of those benefits patients will see, and what, if any, will be passed on to improve healthcare economics. The answer depends in part on legislation and in part on business modeling — will new competing models emerge?

Now consider similar improvements in productivity for physicians and nurses, in pharma R&D (beginning to be realized), and in health insurance, and we begin to make serious gains in transforming the healthcare system into something approaching sustainability. No hype is needed here — a continuation of this trend would represent hundreds of billions of dollars within five years, impacting millions of lives.

Banking is one of the larger investors in AI systems, but we obviously still have a long way to go. Just last week my wife noticed a charge on our credit card that appeared fraudulent. It was from a partner of a cybersecurity firm I had used a decade ago in a packaged offering. The amount was small and so didn’t raise a red flag. Further investigation revealed that the company had been charging us regularly for the entire decade, even though they admitted the license was for one year and should have been canceled. The PC was recycled nine years ago, so it obviously hasn’t been an active account — no software updates, etc. To their credit, the bank we use for that card is covering the fraudulent charge, but neither of us caught it. Although this major bank has strong fraud prevention, it obviously isn’t good enough. I can clearly see how to prevent this type of fraud with a relatively simple ML application, which would generate a large ROI at scale. Much work remains to be done in banking, retail, industrials, tourism, and transportation — any and all sectors, take your pick.
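As a rough illustration of what I mean, even a simple rule, let alone a learned model, would have caught our case: flag recurring charges from a merchant whose associated license or account is no longer active. All names, fields, and amounts below are hypothetical.

```python
from collections import defaultdict
from datetime import date

# Hypothetical data: a dozen small recurring charges from a vendor whose
# one-year license expired long ago -- the pattern that slipped past us.
transactions = [
    {"merchant": "SecureSoftCo", "amount": 4.99, "date": date(2019, m, 1)}
    for m in range(1, 13)
]
license_expiry = {"SecureSoftCo": date(2011, 6, 30)}

def flag_zombie_subscriptions(transactions, license_expiry, min_repeats=3):
    """Flag recurring charges from merchants whose license/account has lapsed."""
    by_merchant = defaultdict(list)
    for t in transactions:
        by_merchant[t["merchant"]].append(t)
    flags = []
    for merchant, txns in by_merchant.items():
        expiry = license_expiry.get(merchant)
        if expiry is None:
            continue
        post_expiry = [t for t in txns if t["date"] > expiry]
        if len(post_expiry) >= min_repeats:
            total = round(sum(t["amount"] for t in post_expiry), 2)
            flags.append((merchant, len(post_expiry), total))
    return flags

print(flag_zombie_subscriptions(transactions, license_expiry))
# -> [('SecureSoftCo', 12, 59.88)]
```

A production system would learn these patterns from data rather than hand-coding them, but the point stands: the signal was there all along.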

Consider a similar 10% improvement in cybersecurity and the ensuing positive economic impact. Now consider an annually compounded productivity improvement of 10% (or even 5%) over a decade across the global economy. The associated outcomes are well into the trillions of USD, not to mention large numbers of lives saved in medicine, transportation, and prevention of all types of accidents that would not have occurred otherwise. This is precisely what the heavily indebted West needs from an economic perspective, and it is readily doable today with current technology. It just needs to be implemented rapidly and prudently with well-designed systems (not one-off projects or by creating more silos).
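The compounding arithmetic behind that claim is simple to verify (a purely illustrative two-line check):

```python
# Cumulative effect of modest annual productivity gains over a decade.
for rate in (0.05, 0.10):
    print(f"{rate:.0%}/yr for 10 years -> {(1 + rate) ** 10 - 1:.0%} cumulative gain")
# 5%/yr for 10 years -> 63% cumulative gain
# 10%/yr for 10 years -> 159% cumulative gain
```

Applied to a global economy measured in the tens of trillions of dollars, even the 5% path lands well into the trillions.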

If we take IDC’s forecast, which is slightly more optimistic than Forrester’s: “by 2024, AI will be integral to every part of the business, resulting in 25% of the overall spend on AI solutions as ‘Outcomes-as-a-service’ that drive innovation at scale and superior business value.” This sounds as if it may have been inspired by the HumCat program we pioneered a few years ago, in which we offer to install a more powerful program on top of our KYield OS for the prevention of human-caused catastrophes, with the option to package it with insurance and financing. We then take a bonus based on a fraction of the actual savings to customers. Although we were a bit ahead of the market, the HumCat should be adopted today in high-risk organizations, which includes most large organizations. HumCat represents the highest possible ROI for incumbent organizations.

I was recently interviewing prospective board members for KYield and had several conversations that shed light on slow adoption across the enterprise. Long-term public company directors concluded that we were way ahead of the market in our work. While that is true in forward-looking R&D, my response was: “Actually, the KYield OS has been technically viable for years, and it’s now competitive within our small peer group, which includes a few of the most successful companies in the world. We aren’t ahead; your companies are behind, which is the purpose of the KYield OS — to enable your companies to compete and survive.”

If systems like our KYield OS are not adopted, or if organizations wait for laggards to reverse engineer them and deliver them as a commodity simultaneously worldwide, thus providing no competitive advantage, it would be a terrible mistake for organizations and society as a whole. While no one can be certain, we may well need these systems to survive as organizations and as a species at some point in the future, and we have no way of knowing when that might be. So we need to get on with it. This is fundamental leadership, whether in the public or private sector.

What to expect by 2025

1. AI-enhanced human workflow across all installed networks. We have a serious global problem with flat productivity combined with historically high debt levels. We finally have the means to begin to address the challenge. Recent safety features in transportation are a good example: one of the benefits of large investment in autonomous vehicles is much smarter cars and trucks, which are much safer overall, regardless of whether full autonomy scales. The economic and health benefits that come with smart transportation are enormous. The same concept holds for any type of human work — industrial, medical, technical, research, government, professional, consumer, etc. Safety first and foremost, particularly when it comes wrapped in an attractive ROI, financial and human.

2. Medicine. We are just beginning to witness the early stage of what will undoubtedly be a revolution in smart medicine. Artificial limbs, smart drugs, nano-level diagnostics and treatments, surgery, and prevention. This revolution is currently accelerating and I fully expect it to continue. Eric Topol does a nice job of outlining his vision from a physician’s perspective in his recent book, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again.

3. The combination of AI and quantum computing. It now appears likely that applications like our synthetic genius machine will be viable within five years at the more advanced levels that require quantum computing. While this application may not achieve artificial general intelligence, it could perhaps provide something better in the form of accurate simulations of the most brilliant people in history. This system could potentially help humankind with all of our major challenges. We are already observing signs of acceleration in R&D with the assistance of algorithmics, but it is not yet well orchestrated toward anything near optimal. The question is whether entrenched institutions will continue to defend the status quo or adopt, and if they defend, what types of new models will emerge. Personally, I am most motivated by accelerating R&D across disciplines.

4. Dramatically increased combinations of real and synthetic data. The supermajority of focus over the previous decade in AI was centered on applications that required massive amounts of data (DL in particular). Data volume will become less important in many applications as we move forward, driven in part by security and privacy concerns. This will result in new business models that are more respectful of individual human rights, privacy, and security.
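A minimal illustration of the synthetic data idea (real generators are far more sophisticated, e.g., GAN-, agent-, or simulation-based): fit a distribution to a small sample of real records, then train on draws from the fitted model, so the raw records never have to leave the premises.

```python
import numpy as np

# Minimal synthetic-data sketch: fit a multivariate Gaussian to a small
# "real" sample, then generate an arbitrarily large synthetic sample that
# preserves aggregate structure without exposing any individual record.
rng = np.random.default_rng(7)
real = rng.multivariate_normal(mean=[50.0, 3.2], cov=[[25.0, 2.0], [2.0, 0.5]], size=200)

mu, cov = real.mean(axis=0), np.cov(real, rowvar=False)
synthetic = rng.multivariate_normal(mu, cov, size=10_000)

print("real means:     ", np.round(mu, 2))
print("synthetic means:", np.round(synthetic.mean(axis=0), 2))
```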

5. Autonomous cybersecurity. We have of course long enjoyed autonomous antivirus software. Autonomous network security is evolving rapidly, but autonomous attacks may be evolving faster. Autonomous cybersecurity is an absolute necessity and should be among our highest priorities in national and corporate security. This is an area of personal interest, and I expect autonomous cybersecurity to be deployed across networks within five years.

Conclusion

To the extent hyperbole is ever justified, which is questionable, we no longer need to overhype AI for it to scale throughout society. The IBM Watson experience should have provided a convincing lesson. Hype, overinvestment, and predatory capital practices should be discouraged by everyone, including customers and investors. However, we also need to finance large systems in a sustainable manner, and we need to reduce overreliance on big tech. A significant gap exists in the U.S. today on both counts, making us very vulnerable to being leapfrogged by adversaries.

Although the business and investment opportunities are certainly sufficient to satisfy any private goals, we are also suffering from a competency gap among investors outside of a small group of competitors. Expert competency is very important in analyzing and understanding different types of systems, and frankly it’s rarely found. Most technical experts lack depth in business, while business experts lack the technical competency to understand these systems sufficiently to make wise decisions. Very few seem to be sufficiently competent in both to be able to recognize poor advice.

It requires an extraordinary personal commitment over a long period to be in a position to make wise decisions on critical AI systems, a commitment that of course no busy executive can make, regardless of importance; they must keep the lights on. They must therefore rely on advice from others, and that’s where conflicts raise their ugly heads, internally and externally. Entrenched bureaucracies typically view AI both as a threat to their institutions and as an opportunity to expand empires, during a time when we are faced with high debt levels and massive unfunded liabilities. Stein’s Law may be severely challenged in the age of infinite quantitative easing, but using the threat of China to expand U.S. bureaucracies is self-destructive nonetheless. The problem of competency extends to national security, where the bureaucracy suffers greatly from misplaced ideology and internal turf battles.

What we seem to be experiencing is an extended period in which the qualified people who have done the work are propping up unqualified individuals in bureaucracies, who then manipulate the narrative toward their personal or unit interests. To no one’s surprise, turnover is high in such situations. We’ve witnessed several management teams in high-risk organizations change over the past few years due to issues of technical competency, and I expect that trend to continue. Whether we are able to work with a particular organization or not, competency and integrity are easy to recognize when dealing with enterprise-wide AI systems.

Since no obvious alternative to competency exists, we can expect the talent war to continue — not just in science and engineering, where it is all but certain at the top tier, but also in strategic leadership at the top of organizations, including CEOs and directors. In any event, the next few years will almost certainly be fascinating. Fasten your seat belts and try to enjoy the ride. Expect some turbulence along the way.

Mark Montgomery is the founder & CEO of KYield, Inc.

2 thoughts on “Artificial Intelligence in 2020”

  1. Wow, this article is incredibly insightful and provides so many points I can relate to from my time in a Fortune 100 company attempting to implement AI (or any disruptive technology) to benefit our cloud/IT systems. The challenge for many companies has to do with incentive misalignment, the unfamiliar nature of newer disruptive technologies, and the hypercompetitiveness inherent in large tech companies. It’s easier to say “We are doing AI” and get the proverbial career cookie than it is to actually make a meaningful improvement to a business outcome with AI. The former provides a quick career boost; the latter takes significantly more energy and commitment.

  2. Hi Kyle – thanks. Yes, as CIOs are fond of saying, “we aren’t incentivized to…” (take risk, become more competitive, or even save the company). Last I noticed the trend had mellowed, but for many years the CIO role was a bit like flipping houses or tech companies – get in, make good money, and then move on to the next victim. More recently it’s been CEOs. So clearly one of the big challenges we and others face is gaining sufficient intel on each, and/or hopefully getting to know customer prospects well enough, to be able to make a rational judgment. Many organizations move as a herd – few are leaders in adopting next-gen tech. For enterprise-wide systems like our KYield OS, it’s a long-term commitment for all concerned.
