I submitted this op-ed to several leading publications after writing it on January 4th; all either declined or failed to respond. See my note below, thanks. — MM
Over the past dozen years the U.S. has experienced a series of dangerous and costly systemic failures throughout our security and regulatory framework. The unfettered technology bubble, the missed opportunities to prevent 9/11 that led to two ongoing wars, the tragic response to Katrina, the largest financial crisis in history, the Fort Hood massacre, and the ‘underwear bomber’ incident on Christmas Day all share one commonality.
In each of these cases, U.S. government agencies had collected data that, had it been recognized and acted upon within the window of time circumstances allowed, would very likely have prevented or substantially mitigated the event. In case after case, repeated warnings from recognized experts, internal and external, were ignored or suppressed.
Concurrent with this series of historic failures, advances in the multi-disciplinary field of knowledge systems have dramatically improved our ability to predict and prevent crises. In the specialized area of computer science generally known as semantics, digital files are embedded with pre-defined meaning and processed in an automated or semi-automated manner that can reduce or eliminate common human failures, regardless of cause.
This technology is successfully deployed today in other large-scale data environments where human error, conflicted decision making, lack of interoperability, and misinterpretation of data have long been associated with systemic failures. When combined with rich metadata in each digital file describing the interrelationships of topics, organizations, and people, these types of human-caused systemic failures simply need not occur.
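The kind of embedded cross-referencing described above can be sketched in a few lines of Python. To be clear, the record structure, field names, and documents below are invented purely for illustration and do not reflect Kyield's design or any real system:

```python
# Illustrative sketch only: each "document" carries machine-readable metadata
# linking it to topics, organizations, and people, so related items can be
# surfaced automatically instead of depending on a human to connect them.

def related_documents(doc_id, docs):
    """Return IDs of documents sharing at least one topic, organization,
    or person with the given document."""
    source = docs[doc_id]
    related = []
    for other_id, other in docs.items():
        if other_id == doc_id:
            continue
        for key in ("topics", "organizations", "people"):
            if set(source[key]) & set(other[key]):
                related.append(other_id)
                break  # one shared attribute is enough to relate them
    return related

docs = {
    "memo-1": {"topics": ["flight training"], "organizations": ["Office A"],
               "people": ["subject X"]},
    "memo-2": {"topics": ["flight training"], "organizations": ["Office B"],
               "people": []},
    "memo-3": {"topics": ["hurricane prep"], "organizations": ["Office C"],
               "people": []},
}

print(related_documents("memo-1", docs))  # -> ['memo-2']
```

A production system would of course use standardized vocabularies and far richer relationship types, but even this toy version shows how shared metadata lets connections surface mechanically rather than by chance.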
For example, had a state-of-the-art semantic architecture been deployed prior to 9/11, the Phoenix Memo would have contained sufficient embedded intelligence to automatically elevate the red-flag warning not just within one agency, where internal conflicts are common, but also to notify pre-selected decision makers in partner agencies, with built-in tracking to ensure accountability and instant audit reporting, significantly increasing the probability of preventing two wars.
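The escalation logic described here can be illustrated with a simple rule-based sketch. The agency names, severity scale, and threshold are all hypothetical, chosen only to show the principle of automatic cross-agency notification with an audit trail:

```python
# Hedged sketch: rule-based escalation of a flagged document beyond its home
# agency, with every notification recorded for later audit. Nothing here
# reflects any real agency workflow.

PARTNER_AGENCIES = {"Agency A": ["Agency B", "Agency C"]}

def escalate(alert, audit_log):
    """Notify the home agency and, if the flag is severe enough,
    pre-selected decision makers in partner agencies; log each step."""
    notified = [alert["agency"]]
    if alert["severity"] >= 2:  # severity 2+ crosses agency boundaries
        notified += PARTNER_AGENCIES.get(alert["agency"], [])
    for agency in notified:
        audit_log.append({"alert": alert["id"], "notified": agency})
    return notified

log = []
alert = {"id": "memo-1", "agency": "Agency A", "severity": 3}
print(escalate(alert, log))  # -> ['Agency A', 'Agency B', 'Agency C']
print(len(log))              # -> 3
```

The essential point is that routing and accountability become properties of the system rather than of any one official's judgment or turf instincts.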
In the Fort Hood massacre, a logical semantic system would have required an alert to the base commander and appropriate security personnel about Maj. Nidal Malik Hasan, who apparently displayed red-flag warnings on a continuing basis. In the most recent incident on Christmas Day, a properly designed system would have profiled not just yet another youth succumbing to militant religious extremism, but the quality and relationship of the information source, which would then have automatically placed Umar Farouk Abdulmutallab on the no-fly list.
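Weighting a tip by the credibility and relationship of its source, as suggested above, can be sketched as a simple scoring rule. The weights and threshold below are invented for illustration; any real system would calibrate these empirically:

```python
# Illustrative sketch: score a tip by combining the estimated reliability of
# the source (0 to 1) with a weight for the source's relationship to the
# subject, then trigger an action when the score crosses a threshold.

RELATIONSHIP_WEIGHT = {"parent": 0.9, "acquaintance": 0.5, "anonymous": 0.2}

def tip_score(source_reliability, relationship):
    """Combine reliability with a relationship weight (default 0.1)."""
    return source_reliability * RELATIONSHIP_WEIGHT.get(relationship, 0.1)

def watchlist_action(score, threshold=0.5):
    return "add to no-fly list" if score >= threshold else "log for review"

score = tip_score(0.8, "parent")  # e.g. a father reporting his own son
print(round(score, 2))            # -> 0.72
print(watchlist_action(score))    # -> add to no-fly list
```

In the Christmas Day case, the source was the subject's own father, a prominent banker; a system that scored sources this way would have weighted that warning far above routine chatter.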
Within the financial regulatory arena, a properly designed system for banking regulators around the world would have automatically linked the incoming data from university and independent researchers that clearly displayed dangerously spiking discrepancies between earnings and mortgage levels in multiple regional markets, thereby making a multi-trillion-dollar financial crisis in the making difficult, if not impossible, to ignore or to later deny knowledge of.
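The discrepancy check described above amounts to a simple anomaly screen. The figures and the ratio threshold below are invented for illustration; a real regulator would feed in actual time series from researchers and lenders:

```python
# Hedged sketch: flag regional markets where mortgage debt has spiked
# relative to earnings, so the warning cannot be quietly ignored.

def flag_regions(data, max_ratio=4.0):
    """Flag regions whose mortgage-to-earnings ratio exceeds max_ratio."""
    flagged = []
    for region, (earnings, mortgage) in data.items():
        ratio = mortgage / earnings
        if ratio > max_ratio:
            flagged.append((region, round(ratio, 1)))
    return flagged

regional_data = {  # region: (median earnings, median mortgage)
    "Region 1": (50_000, 180_000),
    "Region 2": (55_000, 330_000),
    "Region 3": (48_000, 260_000),
}

print(flag_regions(regional_data))  # -> [('Region 2', 6.0), ('Region 3', 5.4)]
```

Once such flags are embedded in the shared data itself, with the audit trail described earlier, later claims that "no one saw it coming" become untenable.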
In our highly complex, interrelated, and rapidly changing world, these types of crises are simply too dangerous not to prevent, which is why so many countries around the world have targeted semantic systems research and development as a high priority.
In order to achieve the higher level of functionality required to prevent these types of crises, leadership must first acknowledge that misalignments of interest exist throughout their organizations, not just in sharing data, but also in the design and adoption of enterprise systems. More than a decade of attempting to improve knowledge systems in federal agencies has repeatedly demonstrated that effective solutions cannot emerge from the institutions implicated in the systemic failures. Despite tens of billions of dollars invested over the past decade, U.S. government agencies remain a decade behind the state of the art in knowledge systems.
To grasp the breadth and scale of both the need for and the potential of improved knowledge systems, consider that during the past decade the U.S. national debt more than doubled, net job creation stood near zero, and U.S. stock markets delivered their worst performance in 200 years, handing investors a negative return.
The correlation between this series of systemic failures and the precipitous decline of the American economy could not be more evident. No greater priority exists, for all other goals depend in varying degrees on successfully overcoming this challenge.
We are wasting precious time.
Mark Montgomery is founder and CEO of Kyield, a holistic enterprise system with a mission to enhance innovation and productivity in the digital work environment while mitigating or eliminating human-caused crises. A former business consultant and venture capitalist, Mr. Montgomery has been involved with knowledge systems research, development, and testing for 15 years. He is inventor of the patent-pending design for a semantic enterprise system licensed by Kyield.
The publications that declined this op-ed include the Washington Post, New York Times, Wall Street Journal, and the Financial Times. I also sent the piece to all of the major television networks, which evidently struggled to find experts within their normal guest lists who understand the issues; to date, while several have visited our web site, none have replied.
One of the few individuals who has replied is also perhaps the best informed. Peter Baker’s article in the NYT, ‘Obama’s War Over Terror’, is the most revealing I’ve read to date on recent internal dynamics in the U.S. government. Near the end of this long article Baker quotes John Brennan: “A lot of the knuckleheads I’ve been listening to out there on the network shows don’t know what they’re talking about.” In this regard I am in complete agreement with Mr. Brennan. The coverage of counterterrorism in the mass media has quite often been embarrassing, and frankly damaging, for a misinformed democracy cannot correct course.
4 thoughts on “Systemic failures, by design”
That is interesting but perhaps too general. I would like to know more about the successes (and failures) of systems which already use this technology.
How do they deal with false positives and noise, especially with deliberate attempts to mislead the analysis?
Are these systems sets of rules which can be validated, or more like neural networks?
Looking forward to learning more from you,
This was an op-ed targeting policy makers at the White House, Congress, and agency heads, not engineers in semantic technologies, who are generally already aware of the benefits. As with any new technology, we obviously can only show the potential of Kyield prior to adoption. The point that I’ve been making for a decade is that the U.S. government should be deploying holistic semantic systems to prevent these types of crises, but has not. Most of the leaders are now at least listening.
My job and intent here is to educate policy makers and those who influence policy, not competitors. We do provide an unusual amount of public information on our web site, including white papers that we regularly hear are too technical for a general audience, far more, for example, than industry leaders provide.
Also, this piece has been published at truthout.org, where it was able to reach more policy makers, and has been accepted by http://www.erepublic.com/ for upcoming editions of their magazines for public agency leaders: CIO, Emergency Management, and Governance.
Thanks for your comment and interest. — MM