Since I have now responded to a related USPTO election/restriction requirement, I feel a bit more freedom in sharing additional thoughts on the underlying theory that has served as the foundation of Kyield:

Yield Management of Knowledge: The process by which individuals and organizations manage the quality and quantity of information consumption, storage, and retrieval in order to optimize knowledge yield. – (me)

(Please see this related post prior to attempting to categorize)
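The definition is deliberately qualitative, but purely as a toy illustration (the quality scores and threshold below are arbitrary placeholders of my own, not part of the theory), knowledge yield can be pictured as the fraction of consumed information that clears a quality bar and is retained in a form that can actually be retrieved:

```python
# Toy illustration only: treat "knowledge yield" as the fraction of consumed
# items that clear a quality bar and are retained in a retrievable store.
# The scores and threshold are arbitrary placeholders, not part of the theory.

def knowledge_yield(items, quality_threshold=0.7):
    """Return (retained_items, yield_ratio) for a stream of scored items."""
    retained = [item for item in items if item["quality"] >= quality_threshold]
    ratio = len(retained) / len(items) if items else 0.0
    return retained, ratio

# Example: ten items consumed, only the higher-quality ones stored for retrieval.
stream = [{"source": f"feed-{i}", "quality": i / 10} for i in range(1, 11)]
store, y = knowledge_yield(stream)
print(f"Consumed {len(stream)} items, retained {len(store)}, yield = {y:.0%}")
```

Nothing about the theory reduces to a single ratio, of course; the sketch is only meant to anchor the terms quality, quantity, and yield as they are used throughout this post.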

Background: The cyber lab experiment

The theory emerged gradually over several years of hyper-intensive information management in my small lab on our property in northern Arizona (we are currently based in Santa Fe after a year in the Bay Area). The experimental network, called GWIN (Global Web Interactive Network), was designed after several years of previous high-intensity work in our incubator, which in turn followed a decade in consulting that I also drew upon. The GWIN product was unique and intentionally designed to test the bleeding edge of what was then possible in computer and social sciences. We were particularly interested in filtering various forms of digitized intelligence worldwide as quality sources came online, converting it to useful knowledge, weaving it through academic disciplines, and then mixing it with professional networking.

The network was open to anyone, but it soon became something of an online version of the World Economic Forum (photo), with quite a few of the same institutions and people, although our humble network, even in nascent form, was broader, deeper, and larger, with less elitism, and therefore more effective in some ways.

Our first computer lab and office

I was quite proud that membership was based primarily on interest, effort, and intellectual contributions, not on social status, guilds, political views, market power, or wealth, even though those were the norm among our membership.

My late partner and friend Russell Borland and I learned a great deal from GWIN, as did many of our members and those who followed our work closely. The first thing one should understand is that while we worked with various teams of remote programmers to build products, and served thousands of people worldwide daily who provided about half of the content, I operated the lab solo onsite. Given the volume, work hours, lab efficiencies, and short commute, I was likely consuming as much data personally as any other human, which is fundamental to the construct of the theory: how the brain functions in dealing with overload, how humans and computers interact, and what tools, languages, and architectures are needed in order to optimize knowledge yield.

Need to emphasize data quality, not quantity

The vast majority of solutions for improved decision making in the networked era have been computing versions of HazMat crews attempting to clean up the toxic waste resulting from information overload. Reliance on the advertising model for the consumer Web created a system design that essentially requires lower quality in a populist manner, aided and abetted by search and then social networking. While the advertising model is certainly appropriate for many forms of low-cost entertainment, for the serious learning and important decision making envisioned in yield management of knowledge, an ounce of prevention in the form of logically structured data is potentially worth far more than a ton of cure.

It became obvious very early in our lab (1996) that the world needed a much more intelligently structured Web and Internet, for consumers as well as the enterprise. In studying search engines closely at the earliest stages, we saw that they were by necessity applying brute computing force and clever algorithms, and exploiting content providers, in an attempt to deal with the unprecedented explosion of data and noise while providing the returns investors needed for such risk. What we really needed, of course, was logically structured data controlled by data owners and providers, which would then (and only then) provide the opportunity for knowledge yield. Further, the languages used for the structure must be non-proprietary, given the overwhelming global market power the network effect would otherwise hand to the winner.
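As a rough sketch of the difference (the record fields below are hypothetical illustrations of my own, not the GWIN design or any proposed schema), compare a brute-force keyword scan over raw text with a query against records the data owner has already described:

```python
# Hypothetical sketch: brute-force keyword matching over unstructured text
# versus querying records the data owner has already structured.
# The fields ("author", "subject", "region", "license") are illustrative only.

unstructured = [
    "Notes on water policy in Arizona, posted 1998, all rights reserved.",
    "Arizona travel diary -- mostly photos of water holes.",
]

structured = [
    {"author": "researcher-a", "subject": "water policy", "region": "Arizona",
     "date": "1998-06-01", "license": "owner-controlled"},
    {"author": "traveler-b", "subject": "travel diary", "region": "Arizona",
     "date": "1998-07-15", "license": "owner-controlled"},
]

# Brute force: every record containing the keyword matches, relevant or not.
keyword_hits = [doc for doc in unstructured if "water" in doc.lower()]

# Structured query: intent is expressed against fields the owner defined.
policy_hits = [rec for rec in structured
               if rec["subject"] == "water policy" and rec["region"] == "Arizona"]

print(len(keyword_hits), "keyword hits vs", len(policy_hits), "precise hit(s)")
```

The point is not the code but the ownership and precision: when structure is supplied at the source, retrieval becomes a matter of expressing intent rather than cleaning up noise after the fact.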

Need for independent standards

In the enterprise market, proprietary languages can and do thrive internally, but the integration required to share data with essential external partners is similar to the brute force applied in search: crisis clean-up rather than prevention, complete with disempowerment of the customers who create and own the data. Most organizations increasingly rely on shared data, whether for regulatory purposes or partnerships, even when it is private and encrypted, so proprietary data languages are not well aligned with the enterprise in an increasingly networked, global economy.

Finally, there are fundamental and unavoidable conflicts between large public companies that dominate markets with proprietary languages, their fiduciary duty, and the minimal sustainable requirements of our globally networked economy. A few examples of these conflicts can be clearly observed today in the failure to deal effectively with network security, personal privacy, protection of intellectual property, and information overload. Evidence of the challenge can also be observed (and felt by millions of people) in economics, where the policies of multinationals favor the largest emerging markets due to market access. Of course, the lack of functioning governance of what has become an essential global medium empowers these phenomena.

It is my personal position that the intellectual competition should be intentionally focused on optimal use of the data to achieve the mission of the customer (whether individual consumer or enterprise), not on protectionism, and that vendors should be caretakers of data on behalf of data owners, which requires a different economic model than the free, ad-supported model on the consumer Web.

So in order to realize the goal of the theory, we really needed a much more intelligent and highly structured Internet and Web, based on independent languages in much the same manner as the underlying protocols (TCP/IP and HTTP), and not supported by advertising alone.

I am speaking here of data communication in global networks, not individual applications. If we had the former, we would not need to worry about the latter, at least in the context of network dynamics.
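To make that concrete with a minimal sketch (the record and its fields are my own illustration, and JSON merely stands in for any openly specified, non-proprietary language), the value of an independent format is that any party on the network can read what travels over the wire without depending on one vendor's code:

```python
# Minimal sketch: the same owner-controlled record serialized to an openly
# specified format (JSON here, standing in for any non-proprietary language)
# can be parsed by any party on the network without one vendor's software.
import json

record = {
    "owner": "data-owner-1",            # hypothetical identifiers
    "subject": "quarterly filing",
    "shared_with": ["regulator", "partner-xyz"],
    "encrypted": True,
}

wire_format = json.dumps(record)    # what travels across the network
received = json.loads(wire_format)  # any independent implementation can read it

assert received == record
print(wire_format)
```

The transport and the applications on either end can vary freely; what matters is that the language of the data itself is not owned by any single vendor.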

A word of caution on standards

A ‘Semantic Web’, which should make this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. – Tim Berners-Lee, 1999

One of the most significant risks with independent universal standards is unintended consequences. While the stated vision of most involved is more efficiency, transparency, empowerment of individuals and organizations, less bureaucracy, and lower costs, the nature of universal languages favors organizations that control data. One of the primary challenges in Web 1.0 and 2.0 has been data silos and private-sector exploitation of data owned by others, which is largely driven by the revenue model. The primary challenge of Web 3.0 and 4.0 could be increased government control at the expense of individual liberty and private-sector jobs, or perhaps worse: a public/private duopoly or oligopoly. From the perspective of an entrepreneur attempting to create jobs, I see such risk increasing daily.

Introducing Mawthos

Louis V. Gerstner, Jr. was perhaps most responsible for moving software towards a service model in his turnaround of IBM in the mid-1990s, which was a brilliant business strategy for IBM at that time (we exchanged letters on the topic in that era), but it has not been terribly successful as a model for creative destruction; rather, it has primarily seemed to exchange one extortion model (proprietary code) for another (a combination of proprietary code, consulting, and open source). Unlike a giant turnaround, we were focused on more revolutionary models that provided value where none existed previously, so our first effort was an ASP (Application Service Provider), a model which emerged in the mid-1990s. In 2001, this paper by the SIIA is credited with defining and popularizing SaaS (Software as a Service), which has more recently evolved into an ‘on demand’ subscription model that often consists of bare-bones software apps like those developed for smartphones.

While I have been a big proponent of a cultural shift in software towards service, I have rarely been a proponent of the results sold under the banner of service in the software industry, recognizing a shift in promotions and revenue modeling, not culture. In reviewing this article I recalled many public and private discussions through the years debating the misalignment of interests between vendors and the missions of customers, so I thought I would introduce yet another acronym: Mawthos (Mission accomplished with the help of software). It is a slight jab at our acronym-fatigued environment, while attempting to describe a more appropriate posture and role for enterprise software, and the philosophy necessary to realize the theory of Yield Management of Knowledge.
