23/02/2009

What Happened to Neural Networking?

NOTE: This post is one of my rants. It is not based on serious data analysis. Instead, it is an impression I have had for a while. If you think I am wrong, let me know. If you think I am right, let me know--that would make my day! ;)
-------------------------

Back in the 1990s artificial neural networking was everywhere. The number of conferences, journal articles and grants devoted to its exploration was phenomenal. Then, suddenly, it seems, everyone moved on. Now, the rage is social networks.

I am not saying neural networks have been completely dropped. There is still some exciting stuff going on. But they just never got integrated into the mainstream of complexity science methods the way one would expect.

And yet, neural networking is a major line of thinking in complexity science. As shown in my map of complexity, it grows directly out of cybernetics and artificial intelligence (specifically, distributed artificial intelligence).

One particular area that has yet to be fully appreciated by complexity scientists is Kohonen's self-organizing map--known as the SOM.

The SOM represents the latest advance in what can be called "qualitative computing."

By this term I mean that the SOM is ready-made for finding nonobvious patterns in very large, complex numerical databases. However, unlike statistics, the SOM is not driven by traditional hypotheses; it is not governed by the linear model; it searches for patterns of difference rather than aggregate norms and trends; it focuses on the relationships among conceptual indicators rather than the most powerful single variables; and, most important, while "intelligent," it is actually dumb: the SOM does not tell you why it arrived at the results it gives you. There are no t-tests of significance to tell you what you found.

Instead, the SOM's output is open-ended, visual, and intuitive. To make sense of the nonobvious patterns and trends found, the researcher must apply traditional qualitative techniques--including coding, memo writing, and theoretical sampling. The qualitative orientation of the SOM does not mean one does not use statistics or formal mathematical modeling. I use these techniques all the time with it.

But, it does mean that the SOM is both computational and qualitative--a rare thing in method.

The SOM can do all of this because it is essentially a data reduction technique: while preserving the essential structure of a system, it reduces its dimensionality to a 2-dimensional grid, onto which it projects the nuanced relationships among a set of factors. One combs this grid and the underlying factor structure to determine the dominant ways a data set clusters and the factors responsible for that clustering.
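To make the idea concrete, here is a minimal sketch of that projection in Python with numpy. This is illustrative toy code, not Kohonen's Matlab package: it makes up a small data set of two clusters, trains a 6x6 grid of weight vectors, and then maps each sample to its best-matching unit (BMU) on the grid, which is where the clustering becomes visible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (an assumption for illustration): 200 samples, 3 variables,
# drawn from two distinct clusters.
data = np.vstack([
    rng.normal(loc=0.0, scale=0.3, size=(100, 3)),
    rng.normal(loc=2.0, scale=0.3, size=(100, 3)),
])

rows, cols, dim = 6, 6, data.shape[1]    # the 2-dimensional grid
weights = rng.random((rows, cols, dim))  # one weight vector per grid node

# Grid coordinates of every node, used for neighborhood distances.
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                            indexing="ij"), axis=-1).astype(float)

def bmu(x):
    """Best-matching unit: the grid cell whose weights are closest to x."""
    d = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)

n_iter = 2000
sigma0, lr0 = 3.0, 0.5
for t in range(n_iter):
    x = data[rng.integers(len(data))]
    # Neighborhood radius and learning rate decay over time.
    sigma = sigma0 * np.exp(-t / n_iter)
    lr = lr0 * np.exp(-t / n_iter)
    win = np.array(bmu(x), dtype=float)
    # Gaussian neighborhood around the winning node on the grid.
    h = np.exp(-np.sum((grid - win) ** 2, axis=2) / (2 * sigma ** 2))
    # Pull every node's weights toward x, weighted by the neighborhood.
    weights += lr * h[:, :, None] * (x - weights)

# Project every sample onto the grid; "combing the grid" then means
# looking at which cells the samples land on and inspecting the weight
# vectors of those cells.
cells = [bmu(x) for x in data]
```

After training, the two toy clusters land on separate regions of the grid, and inspecting the weight vectors of those regions tells you which factors drive the separation. That inspection step is exactly where the qualitative work described above comes in.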

Familiar applications of the SOM include facial pattern recognition, analysis of disease trends, tumor detection, and primitive learning in robots and smart machines (see Kohonen 2001).

So, why aren't complexity scientists, particularly those in the social sciences, using the SOM? I do not know. Perhaps there is just so much going on that we have not reached an integration point. A method is explored, applied, developed and then everyone moves on to the next big method. Complexity science has not reached the point where multiple methods are combined to create a toolkit.

The other reason I think the SOM is not widely used, particularly amongst social scientists, is the geek factor involved. For example, I run Kohonen's freeware package--the SOM Toolbox--in Matlab. If you cannot program your own neural net, or you are not comfortable with Matlab or other high-geek-factor programs, making use of this method can be a bit overwhelming. That, more than anything, is probably the unspoken reason neural nets and the SOM have not made a major splash in the social sciences: they are not easy to use.

They also do not fit the traditional paradigm of being numerical and quantitative. Social scientists have an emotional breakdown when a method cannot be classified as qualitative or quantitative. Worse, if a numerical method does not have a t-test or some exact statistical way of determining the significance of its results, they just lose it! :)

Anyway, it just seems the SOM can be used to advance complexity science. For example, it can be used to explore how people cluster in a social network; it can be used to create conceptual maps of complex systems; it can be used with agent-based modeling to improve the intelligence of agents; etc.

Again, I am not saying that the above types of work are not being done. I'm just saying that it seems more could be done.

What do you think?
