2011-08-08

Machine intelligence: the earthmoving equipment of the information age, and the future of meaningful lives


Everybody today is overwhelmed by information

What we need more than anything else today is heavy-lifting equipment for the mountains of information debris piling up around us every day.

Most people in the developed world are overwhelmed right now by the information firehose, and few are living lives in which the balance of their time reflects their deepest priorities.

Machine intelligence is the earthmoving equipment of the information age

The primary reason we need to build machine intelligence in the information age is that it will help us magnify our information-processing capabilities by orders of magnitude, the way that physical machines like cars/planes/bulldozers/cranes have long magnified our physical strength by orders of magnitude.

Fundamentally, I see continuing symbiotic augmentation of human intelligence -- with human biological intelligence in the driver's seat -- as a far more plausible future than the one envisioned by "Singularity Extremists" (and I say this despite being a graduate of Singularity University myself).

Technology so far has always served humanity as a toolset -- as an extension of our own intelligence and physical capabilities. Technology hasn't yet started serving itself, because it hasn't yet become sentient. Personally, though, I disagree with many futurists that the "coming technological singularity" will be a distinct point in time: an exponential curve doesn't actually have an uptick, cusp or "knee of the curve" -- it has exactly the same overall shape regardless of the vertical scale you stretch it to. I also disagree that machines will eventually become more intelligent than humans (after which, the argument goes, we would render ourselves redundant, because artificial intelligence would be able to innovate faster than we can). My view is that technology will always serve as a toolset for human intelligence, regardless of how "smart" it begins to act or seem, and regardless of how powerful it becomes, just as a car or plane is a large and powerful machine that is ultimately a prosthetic extension of the body -- and therefore of the human brain controlling it.

We will always "keep up" with any intelligence we create, because the sum of (our brain + a machine brain) will always be more advanced than just the machine brain alone.

The real reason we need AI: we need a networked I/O interface for our brain

The most important roles that so-called "AGIs" (Artificial General Intelligences) will serve in the future are:
  1. To deliver only the highest-quality, most-important, most distilled/focused information possible through the bandwidth-limited channels of our physical senses, and in particular through our visual channel, which has the highest bandwidth of all our senses [input], and
  2. To empower us to transform enormous quantities of external information in meaningful ways through a few simple and easy-to-learn actions [output].
This will give us the power tools to actively reshape the piles of information debris in our lives so that we can each build something beautiful, and so that we can live lives that are true to our deepest values and priorities.

We must build intelligent machines for the future of humanity

We must build these intelligent information prostheses, because it is far too easy to get distracted and overwhelmed today. If we want to continue living meaningful lives while still swimming in information, then without such tools we will reach the end of our lives and wonder why we didn't spend more time on the things that mattered most.

2011-08-06

The commoditization of technology, and when to open the source

The point of open source is not to kill competition, it's to enable innovation at a higher level

I have exclusively used Free (as in freedom) and open source technologies for more than 15 years, and have contributed to a number of open source projects. However, virtually every person in the developed world now uses open source technologies every single day, usually without even knowing it. We carry open source cellphone software in our pocket if we use Android, we use open source browser technology if we browse the Web with Chrome or Firefox, and at the very least, we request pages from open source webservers running on top of open source operating systems when we view the majority of pages on the Web.


At face value, it seems counter-intuitive that a company could survive if it were to open source its products or technologies: if a product is freely available in source form, competitors could take it and compete without doing their own development work, or even sell the originating company's products themselves. Consumers could obtain and use the products for free, by building them from source or by using versions built and shared by other consumers.


However, at least from an economic perspective, the point of open source is not to kill competition; it is to enable competition and innovation to take place at a higher level of abstraction than was previously possible.


Frontier science eventually transitions from public to private development


In the early days of space exploration, you couldn’t just buy an off-the-shelf rocket engine; only governments with multi-billion dollar budgets could design and build rockets that could put an object into orbit, much less a man on the moon. However in 2004 a private company, Scaled Composites, won the $10M Ansari X-Prize for putting a civilian in space twice within a two-week period. The US government has subsequently awarded multiple heavy-launch contracts to another private company, Elon Musk's SpaceX, to replace its own aging heavy lift technologies, and recently canceled its own costly Constellation program that was intended to replace the shuttle. The US Government is still debating what technology should replace the shuttle and its launch system, but it is becoming increasingly clear that we have reached an era where the US Government is no longer able to compete with commercial offerings. This is a complete inversion of the situation in the early days of the space race, when there was no way the private sector could afford to compete with the financial resources that the government was able to throw at open-ended space research.




Commoditization is an inevitable process for all technology


Every industry goes through this transition towards the commoditization of technology, which is the stage at which a technology is commercially available in off-the-shelf form, and/or when the knowledge or parts required to build the technology are freely available and anybody with sufficient skill in the art can build it if they expend sufficient effort. When a technology has become commoditized, it becomes the newest "greatest common denominator" for the industry.


The process of commoditization has already happened in the PC hardware space (with the development of generic PCs that took the bottom out of the mainframe and minicomputer markets and made computers accessible to all), it has long been happening in the operating system space (with Linux and BSD), it continues to happen in the web technologies space (with browsers, web servers, web toolkits, the evolution of browser extensions into web standards, etc.), and is currently happening at an alarming rate in the mobile space (with Android in particular).  There are many examples of this in every industry.



 

Price is usually the first thing to go; a little later, it becomes clear that technologies that are free but closed-source -- and the communities they serve -- stand to benefit substantially from opening the source.

Commoditization of technology enables innovation at a higher level


The interesting thing about the commoditization of technology is that it affords all players a new solid foundation of common-denominator technology to build upon, and innovation can then move up to the next level of abstraction. Nobody builds their own web server anymore -- it doesn’t make sense to reinvent that wheel when there are several incredibly powerful, commoditized open source options; everybody just uses (and innovates on top of) Apache/Cherokee etc. In the mobile space, the availability and commoditization of remarkably rich app development platforms such as Android has led to a Cambrian explosion of diversity in mobile applications that was simply not possible before powerful and featureful mobile operating systems became a commodity on the majority of cellphones.


Those companies that do not innovate on top of commoditized technologies will perish


Most companies are very familiar with the maxim "innovate or perish."  However, many companies also fear that the commoditization of technology, combined with the open sourcing of commoditized technology, is a recipe for destroying business models and even entire industries. It is understandable that this is a big concern, especially in small markets. Some companies respond by diverting too much energy into holding onto the IP they already have, doing whatever they can to resist the forces that would commoditize those technologies. Progress in that market ends up mired. Ironically, this lets the company keep operating within its comfort zone -- but it is not a sustainable model, and it is certainly not a growth model.


Nobody wins in the long term if a company is attempting to suppress innovation in order to survive. History has shown that companies that understand that commoditization is an inevitable process, disruptive though it is, i.e. companies that learn to adapt nimbly to the changing tech landscape and begin building one level above the current level of abstraction, are the very companies that will survive and become the next generation of market leaders. Those companies that cease to innovate, or cease to move their level of innovation above the level of commodity, are the very companies that inevitably perish due to disruption from below. (The defining work in this area is of course The Innovator's Dilemma by Clayton Christensen.)




Companies are born, grow, age and die


There is nothing arbitrary about when companies go through the different stages of innovating, resisting innovation, and then perishing -- in fact, it turns out that these stages are completely predictable. Geoffrey West of the Santa Fe Institute describes this process briefly but eloquently in his recent TED Talk:

  • It is possible to measure something similar to the "metabolic rate" for companies, and, if analyzed this way, all companies are born, grow, age and die just like biological organisms. 
  • Companies demonstrate a sigmoidal growth curve overall (just like any other organism), with slow initial growth, followed by a growth spurt during the company's "teenage years", followed by a decline in growth (senescence), followed by eventual collapse and death. 
  • As companies grow, they benefit from the added efficiency of economies of scale -- efficiencies brought about by the introduction of bureaucracy and levels of administration. 
  • However, every company eventually enters a stage where its metabolism slows down [the company ceases to innovate]; it soon cannot sustain growth, collapses and dies.
[Image: Geoffrey West]

Companies typically produce their best innovations in their "prime years", before their growth slows. In their later years, they become bogged down by the very bureaucracy that granted them economies of scale in their "young adult years". Old companies have so much corporate inertia and bureaucracy that they cease to innovate and collapse under their own weight. Restated: it is in the early years of a company's life that it is able to produce technologies with the potential to become commodities; in its older years, a company often fights commoditization (i.e. ceases to innovate) and eventually loses that battle.


Today, commodity technology inevitably ends up open sourced or ceases to be relevant. It doesn't take a huge jump in logic to realize that if you fight open source, you are in a real sense fighting innovation, and it's a good sign that you have entered the life stage of corporate senescence. Old companies typically try to stifle the innovativeness of their competitors through patent litigation, rather than moving up a level and innovating on top of the new common, commoditized platform (which is doubly ironic, because patents are supposed to encourage innovation rather than stifle it). Such companies lack creativity, skill, nimbleness and innovativeness of their own, and will soon die.


It is worth noting that even though biological organisms and companies all follow the same growth curve, the lifespan of organisms and of companies depends upon their size. Thus, even though death is inevitable, it would make sense that corporate lifespan can be increased by doing whatever is possible to ensure that a company is better able to scale.


How to know when a technology has become commoditized


It's usually clear when a technology has become a commodity. One or more of the following scenarios will hold true:

  • The secret of how to build the technology is out of the bag (it's no longer protected by virtue of being a trade secret)
  • The technology has become easy to duplicate: your competitors are all starting to build the same technology
  • The technology is available for free (designs for or an implementation for the technology are widely available)
  • Alternative implementations of the technology are beginning to surface that either compete head-on with your technology, or promise their own different but competitive feature sets and strengths
  • The technology is regarded as a component part that other more interesting things can be built from
  • The industry is beginning to find ways to work around the need for your technology.
[Image: Bottled and canned air, for sale in China for up to $860 for a jar of air from the French countryside]

It makes a lot of sense for everybody to use and contribute to a common shared source code base once a technology has become commoditized. The right way to look at this is not "we're making our competitor's life easier", but rather, "we're making the world a better place for everybody, and enabling a whole new generation of innovative products that can be built upon this platform".  There's almost always plenty of room up there at the next level for everybody.


Embracing open source doesn't require abandoning your business model, it requires rethinking it


There is a lot of discussion about when or even whether a technology should be open sourced. Open source business models typically revolve around expertise for hire (e.g. support contracts) or value-added repackaging of open source products. However several points should be noted:

  1. Software that is partially or completely open source can still be sold, sometimes with additional premium features in the commercial version (depending on actual licensing). "Open source" or "Free (as in freedom) software" does not have to mean "free (as in price) software".
  2. Opening the source code of a piece of software can actually help a business in unexpected ways by revitalizing the entire software ecosystem that the company operates within.
  3. Ideally, open source software also attracts contributions by others in the community and sometimes even brings in contributions from competitors.  "Win-win" is always a healthier mentality than "dog eat dog".
Companies need to learn to respond when competitors commoditize or open source technologies in areas where they are trying to stay competitive, and should not keep trying to sell something that is not clearly better than an open or free alternative. For example, the operating system space was commoditized over a decade ago, but Microsoft still derives a large percentage of its income from sales of Windows as a base operating system, even though profit margins are quickly eroding as computing moves online and people use a plethora of devices running open source operating systems to access the Web.

Related reminder: If you're not cannibalizing your own business model from below with the next big thing, then somebody else is. (See how Amazon cannibalized their own book sales with the Kindle for a good example of how to approach an eroding business model.)


A nonexhaustive list of situations when opening the source of a product may be helpful:

  • When the software’s general availability is crucial to the basic vitality of the ecosystem in which a company operates. (e.g. Google created and released their own browser as open source, upping the ante for everybody in terms of both features and speed, and as a result of this competition, all other major browsers became significantly more powerful and featureful within a couple of years. In the end, whatever browser people end up choosing, even if it isn't Chrome, Google still wins.) 
  • When a piece of software has ceased to give a company a competitive edge but would be immensely useful to others. 
  • When a company feels it can still stay ahead by innovating on top of the open sourced code, but there is a clear need in the community for the code and it makes sense to share it. 
  • When a piece of software is being retired but could still prove useful to someone. (a.k.a. “Abandonware”)

2011-08-04

On hierarchical learning and building a brain

The brain is a hierarchical learning system, and knowledge representation is inherently hierarchical

Many systems in the human brain are structured hierarchically, with feedback loops between the levels of hierarchy. Hierarchically structuring a system creates a far more compact and flexible recognition engine, controller or model than alternatives. Knowledge and learning are inherently hierarchical, with generalizations as higher-level constructs and specifics as lower-level constructs. Assimilating new knowledge often requires breaking old hierarchies (undergoing a paradigm shift) and restructuring existing knowledge in terms of new generalizations, so that the new knowledge can be properly incorporated. The hierarchical structuring of reasoning may not be surprising given that the wiring of the brain itself is highly hierarchical in structure.

[source]

Strengths of hierarchical learning

Hierarchical reasoning systems can be immensely dextrous, just as an articulated arm is dextrous: each joint in an arm has only one or two degrees of freedom, but these degrees of freedom compound, overall yielding an immensely rich range of motion. In a hierarchical reasoning system, global, mid-level and local reasoning engines collaborate in feedback with each other to converge on a single hypothesis or plan of action that is consistent at multiple levels of abstraction. Each level only needs to absorb a small amount of noise, yet the overall system can be immensely resilient due to the synergistic combining of degrees of freedom. Note that the feedback between the layers of hierarchy can be either excitatory or inhibitory, sensitizing or desensitizing other levels towards certain hypotheses based on the current best-guess set of hypotheses at the current level. (This is effectively a manifestation of Bayesian priors.)
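To make the top-down/bottom-up interplay concrete, here is a minimal sketch in Python (a toy of my own -- the hypotheses and all the numbers are purely illustrative) of a higher level's expectation acting as a Bayesian prior over a lower level's hypotheses:

    import numpy as np

    # Toy two-level inference: a higher level biases a lower level's hypotheses.
    # Hypothesis names and all numbers are illustrative only.

    hypotheses = ["cat", "dog", "car"]

    # Bottom-up evidence from the lower level (likelihoods from raw features):
    likelihood = np.array([0.30, 0.35, 0.35])

    # Top-down expectation from the higher level ("we are indoors"), acting as a
    # prior; values above 1/3 are excitatory, values below 1/3 are inhibitory.
    prior = np.array([0.45, 0.45, 0.10])

    # Posterior is proportional to prior * likelihood (Bayes' rule), renormalized.
    posterior = prior * likelihood
    posterior /= posterior.sum()

    for h, p in zip(hypotheses, posterior):
        print(f"{h}: {p:.2f}")
    # The ambiguous bottom-up signal is resolved by top-down context:
    # "car" is strongly suppressed and "dog" narrowly wins.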

Also, consistent false positives and false negatives at individual levels can actually help improve performance of a hierarchical reasoning system, because each level can learn to work with systematic quirks of higher and lower levels. A hierarchical learning system is more powerful than the sum of its parts, because error correction is innate.

Note that modularity and hierarchy are ubiquitous across all of biology, at all levels of complexity, and this appears to be a direct result of trying to minimize communication cost. Biology is particularly good at producing a massive fan-out of emergent complexity at every layer of hierarchy, and then packaging up that complexity inside a module, and presenting only a small (but rich) "API" to the outer layers of complexity. This can be observed in the modularity of proteins, organelles, cells, organs and organisms.

Learning is a process of information compression

It is interesting to note that hierarchical learning is related to progressive encoding, as found in JPEG and other image compression algorithms, where an image is encoded at low resolution first and then progressively refined by adding the detail that remains as the difference between the lower-order approximation and the original image. Progressive encoding isn't just useful for letting you see the overall content of an image before the complete image has loaded -- it also increases compression ratios by decreasing local dynamic range.
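Here is a small sketch of the coarse-plus-residual idea on a 1-D signal (a simplification of my own; real progressive JPEG works on DCT coefficient scans, but the effect on local dynamic range is the same):

    import numpy as np

    # Toy progressive (coarse-plus-residual) encoding of a 1-D signal.

    rng = np.random.default_rng(0)
    signal = np.cumsum(rng.normal(size=1024))       # a smooth-ish random walk

    # Pass 1: coarse approximation (block means, i.e. heavy downsampling).
    coarse = signal.reshape(-1, 16).mean(axis=1)    # 64 coarse values
    reconstruction = np.repeat(coarse, 16)          # upsample back to full length

    # Pass 2: the residual detail that remains after the coarse pass.
    residual = signal - reconstruction

    print("dynamic range of original:", signal.max() - signal.min())
    print("dynamic range of residual:", residual.max() - residual.min())
    # The residual spans a far smaller range, so it needs fewer bits per
    # sample -- which is where the extra compression comes from.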

In fact, in general, learning is a process of information compression -- Grandmaster chess players never evaluate all possible moves, they compress a large number of move sequences and board positions into a much smaller number of higher-order concepts, patterns and strategies -- so it would make sense that learning is innately hierarchical.

Error correction improves accuracy of inference

As noted, error correction is innate in hierarchical systems. In fact, the principles of error correction, as defined in information theory, can be directly applied to machine learning. In my own tests, adding error correction to the output codes of a handwriting recognition system can decrease error rates by a factor of three, with no other changes to the system.

However, adding error correction to a system typically implies adding redundancy to decrease entropy, by increasing the minimum Hamming distance between codewords etc. This principle lies in direct tension with the fact that learning is a process of information compression, because information compression, as we know it, typically deals with removing redundancy.
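As a concrete illustration, here is a toy version of error-correcting output codes (the codebook is invented for this sketch, not taken from my handwriting experiments): redundant codewords plus nearest-codeword decoding in Hamming distance absorb bit errors made by the individual classifiers.

    import numpy as np

    # Toy error-correcting output codes (ECOC): each class gets a redundant
    # binary codeword, and a noisy vector of per-bit classifier outputs is
    # decoded to the nearest codeword.

    codebook = {
        "A": np.array([0, 0, 0, 0, 0, 0, 0]),
        "B": np.array([1, 1, 1, 1, 0, 0, 0]),
        "C": np.array([0, 0, 1, 1, 1, 1, 0]),
        "D": np.array([1, 1, 0, 0, 1, 1, 1]),
    }
    # Minimum pairwise Hamming distance is 4, so any single-bit error still
    # decodes to the correct class.

    def decode(bits):
        """Return the class whose codeword is nearest in Hamming distance."""
        return min(codebook, key=lambda c: int(np.sum(codebook[c] != bits)))

    # The per-bit classifiers intended "C" (0011110) but one bit came out wrong:
    noisy = np.array([0, 1, 1, 1, 1, 1, 0])
    print(decode(noisy))   # -> C; the redundancy absorbs the single-bit error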

A further conundrum is presented by the fact that traditional sequence compression, where redundancy is minimized and entropy maximized, dramatically increases the brittleness of a data stream: flipping bits in a zipped file is much more likely to render the original file unreadable than flipping bits in the uncompressed file. However, biology seems to welcome "data corruption", as evidenced by how resilient the genome is to mutation (mutation actually helps species adapt over time), and as evidenced by how well the brain works with uncertainty.
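The brittleness of conventional sequence compression is easy to demonstrate (a quick illustration of my own, using Python's zlib):

    import zlib

    # Flipping one bit in raw text garbles a single character; flipping one bit
    # in the zlib-compressed version typically destroys the whole stream.

    raw = ("the quick brown fox jumps over the lazy dog. " * 50).encode()

    def flip_bit(data, byte_index, bit=0):
        corrupted = bytearray(data)
        corrupted[byte_index] ^= 1 << bit
        return bytes(corrupted)

    print(flip_bit(raw, 100)[90:110])     # one glitched character, rest intact

    compressed = zlib.compress(raw)
    try:
        out = zlib.decompress(flip_bit(compressed, len(compressed) // 2))
        print("decompressed, but corrupted:", out != raw)
    except zlib.error as err:
        print("decompression failed:", err)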

The CS approach to information compression increases brittleness

The most interesting theoretical approach to unifying the two apparently opposing forces of error correction and information compression is Kolmogorov complexity or "algorithmic information theory", which states that it may require significantly less space to describe a program that generates a given sequence or structure than is needed to directly represent it (or sequence-compress it). Algorithmic compression may be used by the brain to dramatically increase compression ratios of structure, making room for redundancy. (It is certain that algorithmic compression is used in the genome, because there are 30,000 immensely complex cells in your body for every base pair of DNA in your genome.)
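Here is a toy contrast between sequence compression and an algorithmic description (my own example, with an arbitrary choice of sequence -- an intuition pump for Kolmogorov complexity, not a measurement of it):

    import zlib

    # A structured sequence can be described by a tiny generating program even
    # when the literal sequence -- and its sequence-compressed form -- is much
    # larger.

    program = "''.join(str(i * i) for i in range(100000))"   # ~45-character recipe
    sequence = eval(program).encode()

    print("literal sequence:   ", len(sequence), "bytes")
    print("zlib-compressed:    ", len(zlib.compress(sequence, 9)), "bytes")
    print("generating program: ", len(program), "bytes")
    # zlib removes statistical redundancy, but the algorithmic description of
    # the structure is smaller by orders of magnitude.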

The criticality of feedback in the learning process

Feedback (and the ability to respond to feedback by updating a model or by changing future behavior) is the single most critical element of a learning system -- in fact, without feedback, it is impossible to learn anything. However, the brain consists of probably trillions of nested feedback loops, and the emergent behavior of a system incorporating even just a few linked feedback loops can be hard to characterize. It is critical to understand how the vast number of interacting feedback loops in the brain work together harmoniously at different scales if we hope to build a brain-like system. We have a lot of work to do to understand the global and local behavioral rules in the brain that together lead to its emergent properties.

The use of feedback in machine learning has so far mostly been limited to minimizing training error (through backpropagation or an analogous procedure). Feedback loops over time in recurrent networks are also immensely powerful, though, and these can be trained using backpropagation through time. Recurrent networks and other time-series models can be used for temporal prediction (see the "Memory-prediction framework" article on Wikipedia). Prediction is a fundamental property of intelligence: the brain is constantly simulating the world around it on a subconscious level, comparing the observed to the expected, then updating its predictive model to minimize the error and taking corrective action when the unexpected happens. Temporal prediction is a core concept in Jeff Hawkins' book On Intelligence; however, Jeff's HTM framework is far too rigid and discrete in its implementation of these ideas to be generally applicable to substantial real-world problems.
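To illustrate the bare predict-compare-update loop, here is a deliberately minimal sketch of my own: an online least-mean-squares predictor of the next sample. It is neither HTM nor a recurrent network, but it shows the same cycle of simulating, comparing and correcting:

    import numpy as np

    # Predict the next sample from the last k samples, compare with what
    # actually arrives, and nudge the internal model to reduce the error.

    rng = np.random.default_rng(1)
    t = np.arange(2000)
    signal = np.sin(0.1 * t) + 0.05 * rng.normal(size=t.size)

    k, lr = 4, 0.01
    weights = np.zeros(k)
    squared_errors = []

    for i in range(k, signal.size):
        context = signal[i - k:i]
        prediction = weights @ context        # predict what should happen next
        error = signal[i] - prediction        # compare with what actually happened
        weights += lr * error * context       # correct the internal model
        squared_errors.append(error ** 2)

    print("mean squared error, first 100 steps:", np.mean(squared_errors[:100]))
    print("mean squared error, last 100 steps: ", np.mean(squared_errors[-100:]))
    # Prediction error collapses as the model internalizes the temporal structure.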

How to build a brain

Building a computational structure with multiscale, feedback-based, predictive properties similar to those in the brain is critical to creating machine intelligence that will be useful in the human realm. Until we figure out how to do this, we're stuck with machine learning amounting to nothing more than fitting arbitrary function approximators.

Jeff Hawkins' HTM framework looks a lot more like a big Boolean logic network than the soft, fuzzy Bayesian belief network present in the brain. The basic ideas behind HTM are sound, but we need to replace HTM's regular, binarized, absolute-coordinate grid system with something more amorphous, reconfigurable and fuzzy, and we need to propagate Bayesian beliefs rather than binary signals. Building such a system so that it has the desired behavior will be a hard engineering challenge, but the resulting system should be, ironically, much closer to the principles Jeff describes in his own book.

Most importantly, however, we will have built something that functions a lot more like the human brain than most existing machine learning algorithms -- something that, through having a cognitive "impedance" correctly matched to the human brain, will more naturally interface with and extend our own intelligence.

[source]