What Happened to Neural Networking?

NOTE: This post is one of my rants. It is not based on serious data analysis. Instead, it is an impression I have had for a while. If you think I am wrong, let me know. If you think I am right, let me know--that would make my day! ;)

Back in the 1990s, artificial neural networking was everywhere. The number of conferences, journal articles, and grants devoted to its exploration was phenomenal. Then, suddenly it seems, everyone moved on. Now the rage is social networks.

I am not saying neural networks have been completely dropped. There is still some exciting stuff going on. But the field just never got integrated into the mainstream of complexity science methods the way one would think.

And yet, neural networking is a major line of thinking in complexity science. As shown in my map of complexity, it grows out of cybernetics and artificial intelligence (specifically, distributed artificial intelligence).

One particular area that has yet to be fully appreciated by complexity scientists is Kohonen's self-organizing map--known as the SOM.

The SOM represents the latest advance in what can be called "qualitative computing."

By this term I mean that the SOM is ready-made for finding nonobvious patterns in very large, complex numerical databases. However, unlike statistics, the SOM is not driven by traditional hypotheses; it is not governed by the linear model; it searches for patterns of difference rather than aggregate norms and trends; it focuses on the relationships among conceptual indicators rather than the most powerful single variables; and, most important, while “intelligent,” it is actually dumb: the SOM does not tell you why it arrived at the results it gives you. There are no t-tests of significance to tell you what you found.

Instead, the SOM's output is open-ended, visual, and intuitive. To make sense of the nonobvious patterns and trends it finds, the researcher must apply traditional qualitative techniques--including coding, memo writing, and theoretical sampling. The SOM's qualitative orientation does not mean one forgoes statistics or formal mathematical modeling; I use these techniques with it all the time.

But, it does mean that the SOM is both computational and qualitative--a rare thing in method.

The SOM can do all of this because it is essentially a data reduction technique: it reduces a system's high-dimensional complexity to a 2-dimensional grid while preserving the nuanced relationships among its factors, which it projects onto that grid. One combs this grid and the underlying factor structure to determine the dominant ways a data set clusters and the set of factors responsible for that clustering.
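
To make the idea concrete, here is a minimal sketch of the algorithm in plain numpy--my own toy illustration, not a real package. Each input is matched to the grid cell whose "codebook" vector is closest (the best matching unit), and that cell and its grid neighbors are nudged toward the input, so similar inputs end up in nearby cells:

```python
import numpy as np

def train_som(data, grid_h=10, grid_w=10, epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Train a toy self-organizing map on `data` (n_samples x n_features)."""
    rng = np.random.default_rng(seed)
    n, d = data.shape
    weights = rng.random((grid_h, grid_w, d))           # one codebook vector per grid cell
    yy, xx = np.mgrid[0:grid_h, 0:grid_w]
    coords = np.stack([yy, xx], axis=-1).astype(float)  # grid coordinates of each cell
    t, t_max = 0, epochs * n
    for _ in range(epochs):
        for x in data[rng.permutation(n)]:
            frac = t / t_max
            lr = lr0 * (1 - frac)                       # learning rate decays toward 0
            sigma = sigma0 * (1 - frac) + 0.5           # neighborhood radius shrinks
            # best matching unit: the grid cell whose codebook vector is closest to x
            bmu = np.unravel_index(np.argmin(np.linalg.norm(weights - x, axis=-1)),
                                   (grid_h, grid_w))
            # Gaussian neighborhood on the grid, centered on the BMU
            grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            h = np.exp(-grid_d2 / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
            t += 1
    return weights

def project(data, weights):
    """Map each sample to the grid cell of its best matching unit."""
    flat = weights.reshape(-1, weights.shape[-1])
    idx = np.argmin(np.linalg.norm(data[:, None, :] - flat[None], axis=-1), axis=1)
    return np.stack(np.unravel_index(idx, weights.shape[:2]), axis=1)

# Toy data: two well-separated clusters in 3 dimensions.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.1, (50, 3)), rng.normal(1.0, 0.1, (50, 3))])
w = train_som(data)
cells = project(data, w)  # each row is the (row, col) grid cell a sample lands in
```

"Combing the grid" then amounts to looking at which samples share cells or regions of the map, and inspecting the codebook vectors underneath them to see which factors drive the clustering.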

Familiar applications of the SOM include facial pattern recognition, analysis of disease trends, tumor detection, and primitive learning in robots and smart machines (see Kohonen 2001).

So, why aren't complexity scientists, particularly those in the social sciences, using the SOM? I do not know. Perhaps there is just so much going on that we have not reached an integration point: a method is explored, applied, developed, and then everyone moves on to the next big method. Complexity science has not reached the point where multiple methods are combined to create a toolkit.

The other reason I think the SOM is not widely used, particularly amongst social scientists, is the geek factor involved. For example, I run the free SOM Toolbox from Kohonen's group in Matlab. If you cannot program your own neural net, or you are not comfortable with Matlab or other programs with a high geek factor, making use of this method can be a bit overwhelming. That, more than anything, is probably the unspoken reason neural nets and the SOM have not made a major splash in the social sciences. They are not overly easy to use.

They also do not fit the traditional paradigm of being numerical and quantitative. Social scientists have an emotional breakdown when a method cannot be classified as qualitative or quantitative. Worse, if a numerical method does not have a t-test or some exact statistical way of determining the significance of its results, they just lose it! :)

Anyway, it just seems the SOM can be used to advance complexity science. For example, it can be used to explore how people cluster in a social network; it can be used to create conceptual maps of complex systems; it can be used with agent-based modeling to improve the intelligence of agents; and so on.

Again, I am not saying that the above types of work are not being done. I'm just saying that it seems more could be done.

What do you think?


  1. I've used all three of these (neural networks--by which I mean multi-layer perceptrons--the SOM, and social networks). I think neural networks have fallen off the face of the planet because it is really hard to interpret and understand *why* they get the solution they do. When you apply them to data you will get an answer, and sometimes the model will be blindingly accurate, but I don't know if it provides any useful understanding of the system studied.

    SOM, on the other hand, is beautiful. It feels very intuitive to me, but the problem is that it's still somewhat of an art form (and it can still suffer from the same problems as multi-layer perceptrons): exactly how to tune the weighting function, what lattice or network to use to overlay the data, how to assess a good fit, and so on. I'm using it to mine some of my thesis data for trends, but I don't plan on stopping there. Other statistical tests come afterwards.

    And social networks (network science, I would say)--there's no comparison between network science and neural networking. Network science isn't going anywhere, and I am not too afraid to admit that I expect network analysis to become as common a tool in the social science toolbox as OLS (that is, as data collection techniques get better and cheaper). But maybe I'm just caught up in all the rage. =P

  2. I do not think you are caught up in the rage. I agree and think you are correct about their integration into mainstream software. You are also right that the SOM is somewhat of an art form. I think that is why more needs to be done to show how to develop the SOM statistically. For example, I think there is much promise in the research exploring the relationship between k-means cluster analysis and the SOM.
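
To sketch that connection in plain numpy (my own toy illustration, not drawn from any particular paper): when the SOM's neighborhood function shrinks until it covers only the winning unit, its update rule is exactly the online k-means centroid update.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 2))
k = 4
centroids = data[rng.choice(len(data), k, replace=False)].copy()

# Online k-means: move only the winning centroid toward each sample.
# This is the SOM update rule with a neighborhood function that is 1 at
# the best matching unit and 0 everywhere else.
lr = 0.1
for x in data:
    win = np.argmin(np.linalg.norm(centroids - x, axis=1))
    centroids[win] += lr * (x - centroids[win])
```

In that light, the SOM is k-means plus a topology on the cluster centers, which is one place a statistical treatment of the method could start.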

  3. Neural nets haven't died out; it's just that no giant gains have been made that the media wishes to publicize. In fact, I'd wager that all that awesome press in the 90s gave us the jumpstart in nanotechnology, due to the enormous gains in fuzzy logic that will eventually allow us to build computational swarms. No longer will we need a single powerful chip; we will have dozens of stupid chips that can give us the (statistically) right computational answer far quicker than any single, primitive behemoth chip ever could.

  4. Thanks for that point! Any recommended readings?

  5. Brian, I think you are absolutely right in addressing the relationship between k-means and the SOM. Coincidentally, we have a recent article on this very matter.

    Teuvo Kohonen, Ilari T. Nieminen, and Timo Honkela. On the quantization error in SOM vs. VQ: A critical and systematic study. In Proceedings of WSOM'09, pages 133–144, 2009.
    (see http://www.springerlink.com/content/m141023548048156/?p=7341dfab0453413b961dbe832605deeb&pi=2)