(This article was featured at Wired – 2014)

Fear of AI vs. the Ethics and Art of Creative Destruction

While it may be an interesting question whether the seasons are changing in artificial intelligence (AI), or to what extent the entertainment industry is herding pop culture, it may not have much to do with future reality. Given the recent attention AI has received, and its unique potential for misunderstanding, I thought a brief story from the trenches in the Land of Enchantment might shed some light.

The topic of AI recently came up at the Santa Fe Institute (SFI) during a seminar by Hamid Benbrahim on research in financial markets. Several senior scientists chimed in during Hamid's talk, representing computer science (CS), physics (two of them), neuroscience, biology, and philosophy, as well as several practitioners with relevant experience. SFI is celebrating its 30th anniversary this year as a pioneer in complexity research, where these very types of topics are explored, attracting leading thinkers worldwide.

Following the talk I continued to discuss financial reforms and technology with Daniel C. Dennett, who is an external professor at SFI. While best known as an author and philosopher, Professor Dennett is also Co-Director of the Center for Cognitive Studies at Tufts University, with extensive published work in CS and AI. He shared a personal case that provides historical, and perhaps futuristic, context, involving a well-known computer scientist at a leading lab during the commercialization era of the World Wide Web. The scientist was apparently concerned about the potential negative impact on authors of the exponentially increasing mass of content, and I suspect also feared the network effect in certain types of consumer services, which quickly results in winner-takes-all dominance.

Professor Dennett apparently attempted to reassure his colleague by pointing out that his concerns, while understandable, were likely unjustified over the mid-term, as humans have a consistent history of adapting to technological change, as well as of adapting technology to fill needs. In this case, Dennett envisioned the rise of specialty services that would find, filter, and presumably broker, in some fashion, the needs of reader and author. Traditional publishing might change even more radically than we have since observed, but services would arise, and people and business models would adapt.

One reason complexity attracts leading thinkers in science and business is the potential benefit across all areas of life and the economy. The patterns and methods discovered in one field are increasingly applied to others, in no small part due to collaboration, data sharing, and analytics. David Wolpert, for example, stated that his reason for joining SFI part-time from LANL was a desire to work in more than one discipline simultaneously. Many others have reported similar motivations, citing both the potential impact of sharing knowledge between disciplines and the inherent challenge. I can certainly relate from my own work in applied complex adaptive systems, which at times seems as if God or Nature were teasing the ego of human intellect. Working with highly complex systems tends to be a humbling experience.

That is not to say, however, that humans are primitive or without power to alter our destiny. Our species did not come to dominate Earth through ignorance or lack of skills, for better or worse. We are blessed with the ability to intentionally craft tools and systems not just for attention-getting nefariousness, but for solving problems, and, yes, for being compensated for doing so. Achieving improvement increasingly requires designs that reduce the undesirable impacts of complexity, which tend to accumulate as increased risk, cost, and difficulty.

Few informed observers claim that technological change is pain-free: disruptions and displacements occur, organizations do fail, and individuals do lose jobs, particularly in cultures that resist macro change rather than proactively adapt to changing conditions. That is, after all, the nature of creative destruction. Physics, regulations, and markets may allow us to control some aspects of technology, manage processes in others, and hopefully introduce simplicity, ease of use, and efficiency, but there is no escaping the tyranny of complexity, for even if society attempted to ban complexity, nature would not comply, nor would humans, if history is any guide. The risk of catastrophic events from biological and human-engineered threats would remain regardless. The challenge is to optimize the messy process to the best of our ability with elegant and effective solutions, preventing extreme volatility and catastrophic events while, as some of us intend, leading toward a more sustainable, healthy planet.

[Image: 2012 Kyield Enterprise UML Diagram - Human Skull]

The dynamics involved in tech-led disruption are well understood to be generally beneficial to greater society, to macroeconomics, and to employment. Continual improvements with small disruptions are much less destructive and more beneficial than the violent events that have occurred throughout history in reaction to extreme chronic imbalances. Diversification, competition, and churn are not only healthy, but essential to progress and ultimately to survival. However, the messy task is made far more costly and painful than necessary, including to those most affected, when entrenched cultures resist what they should be embracing. Over time, all manner of protectionist methods are employed to defend against change, essential disruption, or the erosion of power, eventually including manipulation of the political process, which often has toxic and corrosive effects. As I write this, the subheading of a story in The Wall Street Journal reads as follows:

 “Initiatives intended to help restrain soaring college costs are facing resistance from schools and from a bipartisan bloc of lawmakers looking to protect institutions in their districts.”

Reading this article reminded me of an interview with Ángel Cabrera, whom I had the pleasure of getting to know when he was president of the Thunderbird School of Global Management, and who is now in the same role at George Mason University. His view, as I recall, was that the reforms necessary in education were unlikely to come from within and would require external disruptive competition. Whatever my role at the time, my experience has been similar. A majority of cultures fiercely resist change, typically agreeing only to reforms that benefit the interests of narrow groups, with little concern for collective impact or macro needs. Yet society often looks to entrenched institutions for expertise, leadership, and decision-making power, despite obvious conflicts of interest, creating quite a dilemma for serious thinkers and doers. As structural barriers grow over time, it becomes almost impossible to introduce new technology and systems, regardless of need or merit. Any such scenario runs directly counter to sound governance policy, or what is understood to produce positive outcomes.

Consider, then, recent research demonstrating that resistance to change and patterns of human habit are driven in part by chemicals in the brain. We are left with the uncomfortable awareness that some cultures are almost certainly, and increasingly knowingly, exploiting fear and addiction to protect personal power and financial benefits. Those benefits are often unsustainable, and eventually more harmful than tech-enabled adaptation to the very special interests they are charged with serving, not to mention to the rest of society, which would clearly benefit. This would seem to elevate the motivation for change into a civic duty to support those who appear to be offering the best emerging solutions to our greatest problems.

This situation of entrenched interests conflicting with the greater good provides the motivation for many of those involved with basic and applied R&D, innovation, and business building. Though most commonly associated with the culture of Silicon Valley, the force for rational reform and innovation has in fact become quite global in recent years, although resistance to even the most obviously essential changes is still at times shockingly stubborn and effective.

Given these observations, combined with the awareness that the survival of any organization or species requires adaptation to constantly changing conditions, one can perhaps see why I asked the following questions during various phases of our R&D:

Why not intentionally embrace continuous improvement and adaptation?

Why not tailor data consumption and analytics to the specific needs of each entity? (A brief sketch of this idea follows the list.)

Why not prevent readily preventable crises?

Why not accelerate discoveries and attribute human capital more accurately and justly?

Why not rate, incentivize, and monetize mission-oriented knowledge?
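
To make the second question slightly more concrete, here is a minimal sketch, in Python, of what tailoring a data stream to the declared needs of a single entity could look like. Everything in it, including the names EntityProfile, score, and tailor_stream, is an illustrative assumption of mine; it does not describe Kyield's architecture or any actual product.

    # Hypothetical sketch only: per-entity data tailoring as profile-driven
    # scoring and filtering. Not a description of any real system.
    from dataclasses import dataclass, field

    @dataclass
    class EntityProfile:
        """The declared needs of one entity (a person, team, or organization)."""
        name: str
        interests: dict = field(default_factory=dict)  # topic -> weight (0..1)
        threshold: float = 0.5  # minimum relevance score to surface an item

    def score(profile, item_topics):
        """Relevance of one data item to one entity: weighted topic overlap."""
        return sum(profile.interests.get(topic, 0.0) * strength
                   for topic, strength in item_topics.items())

    def tailor_stream(profile, stream):
        """Yield only the items that clear this entity's relevance threshold."""
        for item in stream:
            s = score(profile, item["topics"])
            if s >= profile.threshold:
                yield {**item, "relevance": round(s, 3)}

    if __name__ == "__main__":
        cfo = EntityProfile("CFO", interests={"risk": 0.9, "markets": 0.7})
        stream = [
            {"title": "Counterparty risk is rising", "topics": {"risk": 0.8}},
            {"title": "Office relocation memo", "topics": {"facilities": 0.9}},
        ]
        for item in tailor_stream(cfo, stream):
            print(item["title"], "->", item["relevance"])
        # Prints only the risk item (relevance 0.72); the memo is filtered out.

The point of the sketch is simply the shape of the idea: each entity carries its own profile, and data is scored and filtered against that profile rather than being broadcast uniformly to everyone.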

The story from my conversation with Dan Dennett at SFI was timely and appropriate to this topic, as philosophy not only deserves a seat at the table with AI, but has also contributed many of the building blocks that make the technology possible, such as mathematics and data structures.

The primary message I want to convey is that we all have a choice and a responsibility as agents for positive change, and our actions impact the future, especially where AI systems are concerned. For example, given that AI has the capacity to significantly accelerate scientific discovery, improve health outcomes, and reduce crises, I have long believed that ethics requires us to deploy the technology. However, given that we are also well aware that high unemployment levels are inhumane, carry considerable moral hazard, and raise the risk of civil unrest, AI should be deployed surgically and with great care. I do not support wide deployment of AI for the primary purpose of replacing human workers. Rather, I have focused my R&D efforts on optimizing human capital and learning in the near term. To the best of my awareness, this is not only the most ethical path forward for AI systems, but also good business strategy, as I think the majority of decision makers in organizations are of a similar mind on the issue.

In closing, from the perspective of an early advisor to very successful tech companies, rather than that of the inventor and founder of an AI system, I'd like to support the concerns of others. While we need to be cautious about spreading undue fear, it has become clear to me that some of the more informed warnings are not unjustified. Some highly competitive cultures, particularly in IT engineering, have demonstrated strongly anti-human behavior, including companies I am close to that, I think, would quite probably not restrain their own actions on the basis of ethics or macro social needs, regardless of the evidence presented to them. In this regard they are no different from the protectionist cultures they would replace, and at least as dangerous. I strongly disagree with such extreme philosophies. I believe technology should be tapped to serve humans and other species, with exceptions reserved for contained areas such as defense and space research, where humans are at risk, or for areas such as surgery, where machine precision is in some cases superior to that of humans and therefore of service.

Many AI applications and systems are now sufficiently mature for adoption, the potential value and functionality are clearly unprecedented, and competitive pressures in most sectors are such that failing to engage with emerging AI could well determine organizational fate in the not-too-distant future. The question, then, is not whether to deploy AI, or increasingly even when, but rather how, which, and with whom. About fifteen years ago, during an intense learning curve, I published a note in our network for global thought leaders observing that the philosophy of the architect is embedded in the code; it just often requires a qualified eye to see it. This is where problems in the adoption of emerging technology often arise, as the few who are qualified include a fair percentage of biased and conflicted individuals who do not necessarily place a high priority on the best interests of the customer.

My advice to decision makers and chief influencers is to engage in AI, but choose your consultants, vendors, and partners very carefully.