Unusual time and place: Wednesday, 12:05 pm, in 1610 Engineering.
Abstract: Many of our most troubling long-range problems - trade balances, sustainability, AIDS, genetic defects, mental health, computer viruses - center on certain systems of extraordinary complexity - economies, ecologies, immune systems, embryos, nervous systems, computer networks. These systems, called complex adaptive systems, are above all ``social'' systems: Their global behavior stems from interactions among large numbers of agents, agents that adapt and learn in response to the actions of other agents in the system.
Because these systems are continually in flux, they are not readily studied using the traditional mathematics of equilibria and steady states, and they cannot be isolated for easy experimentation. Computer-based models provide for organized experimentation, allowing a search for patterns and general principles that can be formalized and then verified in real systems. This lecture describes the basic elements of such models, and discusses some of the relevant mathematics.
A videotape of the lecture is on reserve at the desk in the Physics Library (4220 Chamberlin Hall), and can be checked out overnight. ``The video quality is good, but the first few minutes of audio are missing.'' (Sprott.)
Abstract: Thomas Kuhn's message in The Structure of Scientific Revolutions was that science lurches forward in abrupt ``paradigm shifts''. Is this really so? Once upon a time we used to quote Linnaeus, ``Nature doesn't jump!'' - and then came Catastrophe Theory. We also used to think that science was a search for order - and then came Chaos. Finally, just the day before yesterday, ``Complexity'' wasn't a subject, but a state of mind, best left to biologists and computer buffs. Now ``Complexity Institutes'' are sprouting like mushrooms after the rain. In this talk we'll try to separate the paradigms from the hype, and present a review of some of the ``hard'' science advances that are likely to have broad significance and be of lasting value.
Abstract: Ecological communities represent a complex web of interactions among many species. These interactions present a challenge to both theoretical and empirical ecologists trying to predict how communities will change in response to environmental perturbations. My talk will address three questions:
Abstract: Insect societies self-organize and show emergent behavior, and appear to be examples of complex systems. The behavioral acts performed by individual workers are simple and limited in number, yet the behavior produced by the whole colony can be amazingly complex. The task confronting insect sociobiologists is to derive the complex patterns of behavior seen at the colony level from a knowledge of the interactions among colony members. Because it is regular, predictable, and easily observed and measured, nest construction is an ideal form of group behavior on which to attempt this analysis. This is especially true for Polybia occidentalis, a tropical social wasp my students and I have been studying for several years. In this seminar I'll describe our progress toward understanding how the components of nest construction behavior are organized and regulated.
Unusual time: 12:30
Abstract: Social structures affect many human interactions. We study the emergence of behaviors in an artificial ecology in which evolved players use expected payoffs to select partners for games of prisoner's dilemma. Each player has a finite state machine specifying its prisoner's dilemma strategy, and a genetic algorithm is used to evolve new players. Full cooperation is the most common behavior which evolves in our simulations, but many other behaviors, which rely upon interesting social network structures, also evolve. (This talk will be a new and improved version of the talk given last semester in computer sciences. New material will be presented.)
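As a rough sketch of the kind of ecology described above (not the speakers' code; memory-one strategies stand in here for their finite state machines, and all parameter values are placeholders of my own), the following Python toy has players choose partners by estimated payoff, play iterated prisoner's dilemma, and evolve by a simple genetic algorithm:

    import random

    PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
              ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

    def random_genome():
        # first move, then responses to the opponent's last move C or D
        return {'first': random.choice('CD'),
                'C': random.choice('CD'),
                'D': random.choice('CD')}

    def play(g1, g2, rounds=20):
        m1, m2 = g1['first'], g2['first']
        s1 = s2 = 0
        for _ in range(rounds):
            p1, p2 = PAYOFF[(m1, m2)]
            s1 += p1; s2 += p2
            m1, m2 = g1[m2], g2[m1]
        return s1 / rounds, s2 / rounds

    def run(pop_size=20, generations=30, games_per_gen=200):
        pop = [random_genome() for _ in range(pop_size)]
        for _ in range(generations):
            score = [0.0] * pop_size
            est = [[3.0] * pop_size for _ in range(pop_size)]  # optimistic payoff estimates
            for _ in range(games_per_gen):
                i = random.randrange(pop_size)
                # choose the partner with the best estimated payoff (never self)
                j = max((k for k in range(pop_size) if k != i), key=lambda k: est[i][k])
                si, sj = play(pop[i], pop[j])
                est[i][j] = 0.7 * est[i][j] + 0.3 * si    # running payoff estimate
                est[j][i] = 0.7 * est[j][i] + 0.3 * sj
                score[i] += si; score[j] += sj
            # genetic algorithm: keep the better half, refill with mutated copies
            ranked = sorted(range(pop_size), key=lambda k: score[k], reverse=True)
            survivors = [pop[k] for k in ranked[:pop_size // 2]]
            children = []
            for parent in survivors:
                child = dict(parent)
                child[random.choice(['first', 'C', 'D'])] = random.choice('CD')  # point mutation
                children.append(child)
            pop = survivors + children
        return pop

    if __name__ == '__main__':
        final = run()
        print(sum(1 for g in final if g == {'first': 'C', 'C': 'C', 'D': 'D'}),
              'tit-for-tat-like players in the final population')

Which behaviors dominate depends strongly on how partner choice and reproduction are set up; the sketch only shows the overall loop, not the social network structures discussed in the talk.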
Abstract: From the dawn of science until just a few years ago the phenomenon of chaos was largely unknown. Now chaos is seen everywhere. Is chaos the exception or the rule? I will describe numerical experiments that assess the prevalence of chaos. Millions of equations are solved and the solutions catalogued. A portion of these solutions are chaotic and produce strange attractors - fractal objects of great beauty and mathematical interest. Properties of the solutions will be described. The technique is extended to complex systems (artificial neural nets) containing many identical parts interacting by simple rules, and it will be shown that such systems are nearly always weakly chaotic.
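In the spirit of the numerical survey described above (details of the actual experiments may differ), the following sketch catalogues randomly chosen two-dimensional quadratic maps, classifying a map as chaotic when the largest Lyapunov exponent, estimated by tracking a nearby orbit, is positive:

    import math
    import random

    def quadratic_map(c):
        def f(x, y):
            return (c[0] + c[1]*x + c[2]*x*x + c[3]*x*y + c[4]*y + c[5]*y*y,
                    c[6] + c[7]*x + c[8]*x*x + c[9]*x*y + c[10]*y + c[11]*y*y)
        return f

    def largest_lyapunov(f, x=0.05, y=0.05, steps=2000, d0=1e-8):
        xe, ye = x + d0, y
        total = 0.0
        for _ in range(steps):
            x, y = f(x, y)
            xe, ye = f(xe, ye)
            if abs(x) > 1e6 or abs(y) > 1e6:
                return None                      # orbit escaped: no bounded attractor
            d = math.hypot(xe - x, ye - y)
            if d == 0:
                return None
            total += math.log(d / d0)
            # renormalize the separation back to d0 along the same direction
            xe = x + (xe - x) * d0 / d
            ye = y + (ye - y) * d0 / d
        return total / steps

    if __name__ == '__main__':
        random.seed(1)
        chaotic = bounded = 0
        trials = 2000
        for _ in range(trials):
            f = quadratic_map([random.uniform(-1.2, 1.2) for _ in range(12)])
            lam = largest_lyapunov(f)
            if lam is not None:
                bounded += 1
                if lam > 0.005:
                    chaotic += 1
        print(f'{bounded} bounded orbits, {chaotic} chaotic, out of {trials} random maps')

The chaotic cases are the ones whose orbits trace out strange attractors; the fraction found depends on the coefficient range and the thresholds used.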
Unusual day, time and place: Wednesday, March 22nd, 12:05, the anatomy conference room, 341 Bardeen Hall.
Abstract: During non-REM sleep, and during absence seizures, widespread synchronous oscillations in electrical activity are seen in the brain. The thalamus is believed to act as a generator for these rhythms. A recently developed in vitro preparation, an isolated ``slice'' of ferret thalamus, shows spontaneous rhythmic behaviors like those seen in vivo, and with interesting wave-like behavior. Current hypotheses for the neural basis of these rhythms will be outlined, and computational neural network models that simulate the rhythms will be presented. The nonlinear dynamics of single cells, and of the coupling between cells, will be described for these Hodgkin-Huxley-like models. In this system, coupling via synaptic inhibition plays a key role in rhythmogenesis.
Abstract: This talk examines the economic consequences of a fad or bandwagon effect in consumers' preferences. Agents trade in a global market but change their preferences over time in response to the consumption decisions of other agents located nearby. The models embed standard economic theory in a nonlinear dynamical systems framework. The results show clustering of preferences and consumption on an individual level and a characteristic evolution of price and average preferences on an economy-wide level.
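A toy sketch of such a bandwagon effect (the market mechanism and parameters below are my own simplifications, not the speaker's model): agents on a ring buy one of two goods in a global market, but each period shift their preference toward what their immediate neighbors just consumed, so local imitation produces clusters and drives the aggregate price.

    import random

    N = 200            # agents arranged on a ring
    STEPS = 300
    ALPHA = 0.3        # weight placed on neighbors' consumption when updating preferences

    pref = [random.random() for _ in range(N)]   # preference for good A, in [0, 1]
    price = 1.0
    history = []

    for t in range(STEPS):
        # consumption: buy good A if preference exceeds its relative price
        buys_a = [1 if pref[i] > price / 2.0 else 0 for i in range(N)]
        demand = sum(buys_a) / N
        # the global price adjusts toward demand for good A
        price += 0.1 * (2.0 * demand - price)
        # each agent nudges its preference toward what its two neighbors consumed
        new_pref = []
        for i in range(N):
            neighbors = (buys_a[(i - 1) % N] + buys_a[(i + 1) % N]) / 2.0
            new_pref.append((1 - ALPHA) * pref[i] + ALPHA * neighbors)
        pref = new_pref
        history.append((round(price, 3), round(demand, 3)))

    print(history[::50])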
Abstract: In this project an artificial stock market provides an environment in which to study the behavior of many artificially intelligent agents trying to forecast the future behavior of a traded asset paying a random dividend. The objective is to understand some of the phenomena that can arise from the interactions of learning algorithms brought together in a simple stock market trading environment. Traders using Holland's classifier systems build up sets of simple rules to forecast future stock market price behavior. Successful rules are strengthened and used more frequently, while less successful rules are replaced with new rules created by a genetic algorithm. The relationship between this model and traditional modeling of financial markets in economics will be discussed.
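A much reduced stand-in for that rule-learning loop (the market indicators and numbers here are invented for illustration, not taken from the project): each rule matches a binary market condition, issues an up/down forecast, is strengthened when it forecasts correctly, and weak rules are periodically replaced by mutated copies of strong ones.

    import random

    N_RULES, N_BITS = 30, 4          # condition bits, e.g. "price above moving average?"

    def random_rule():
        # condition: 0, 1 or '#' (don't care) per market bit; forecast up (+1) or down (-1)
        return {'cond': [random.choice([0, 1, '#']) for _ in range(N_BITS)],
                'forecast': random.choice([+1, -1]),
                'strength': 1.0}

    def matches(rule, state):
        return all(c == '#' or c == s for c, s in zip(rule['cond'], state))

    rules = [random_rule() for _ in range(N_RULES)]
    price = 100.0
    for t in range(2000):
        state = [random.randint(0, 1) for _ in range(N_BITS)]   # stand-in market indicators
        move = +1 if random.random() < 0.5 + 0.3 * (state[0] - 0.5) else -1  # toy price move
        price += move
        for r in (r for r in rules if matches(r, state)):
            # strengthen rules that forecast the realized move, weaken the rest
            r['strength'] += 0.1 if r['forecast'] == move else -0.1
        if t % 200 == 0:
            # genetic algorithm step: replace the weakest rules with mutated strong ones
            rules.sort(key=lambda r: r['strength'], reverse=True)
            for i in range(N_RULES - 5, N_RULES):
                parent = random.choice(rules[:5])
                child = {'cond': list(parent['cond']),
                         'forecast': parent['forecast'],
                         'strength': 1.0}
                child['cond'][random.randrange(N_BITS)] = random.choice([0, 1, '#'])
                rules[i] = child

    print('strongest rule:', max(rules, key=lambda r: r['strength']))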
Abstract: Possibly the simplest sort of feature in a dynamical system is a fixed point, sometimes called a ``steady state.'' Stable fixed points play an important role in the study of linear systems, and unstable fixed points underlie much nonlinear, chaotic behavior. Recently, much experimental and theoretical work has investigated how unstable fixed points can be exploited to control chaotic and other nonlinear behavior, and it has been suggested that such techniques may be applicable to controlling cardiac rhythms and epilepsy. I will discuss techniques for identifying unstable fixed points in experimental data, and show applications to fibrillating hearts and to the subthreshold dynamics of the squid axon.
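One standard way to estimate such a fixed point from scalar data, sketched below under assumptions of my own (the "experimental" data are just the chaotic logistic map), is to build the return map from x[n] to x[n+1], fit a local linear map near candidate points, and read off the fixed point x* = b/(1-a); a fitted slope with |a| > 1 indicates instability.

    import numpy as np

    # surrogate data: the logistic map in its chaotic regime
    x = np.empty(5000)
    x[0] = 0.3
    for n in range(4999):
        x[n + 1] = 3.9 * x[n] * (1.0 - x[n])

    def local_fixed_point(series, center, width=0.05):
        xn, xn1 = series[:-1], series[1:]
        mask = np.abs(xn - center) < width          # points whose x[n] lies near the candidate
        if mask.sum() < 10:
            return None
        a, b = np.polyfit(xn[mask], xn1[mask], 1)   # local linear fit x[n+1] ~ a*x[n] + b
        return b / (1.0 - a), a

    # scan candidate locations and keep those that map (approximately) to themselves
    for c in np.linspace(0.05, 0.95, 19):
        result = local_fixed_point(x, c)
        if result is None:
            continue
        xstar, slope = result
        if abs(xstar - c) < 0.02:
            print(f'fixed point near {xstar:.3f}, local slope {slope:.2f} '
                  f'({"unstable" if abs(slope) > 1 else "stable"})')

For the logistic map this recovers the unstable fixed point near 0.744 with slope about -1.9; applying the same idea to noisy physiological data requires the more careful statistics discussed in the talk.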
Abstract: Neural networks offer an appealing approach to concept learning because they are applicable to a large class of problems, and because they have demonstrated good generalization performance on a number of difficult real-world tasks. A limitation of neural networks, however, is that the concept representations they form are nearly impenetrable to human understanding. To address this limitation, we have been developing algorithms for extracting comprehensible, symbolic representations from trained neural networks. I will first discuss why it is important to be able to understand the concept representations formed by neural networks, and then describe our approach to this task. We have developed a novel method that involves viewing the rule-extraction task as a separate learning problem in which the target concept is the network itself. In addition to learning from training examples, our method exploits the property that networks can be queried.
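A compact sketch of that general idea, treating the trained network as a teacher that can be queried (this is only an illustration of the principle, not the speaker's actual algorithm; the toy concept and scikit-learn models are my own choices):

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)

    # a toy target concept and a network trained on it
    X = rng.uniform(-1, 1, size=(500, 2))
    y = ((X[:, 0] > 0.2) & (X[:, 1] < 0.5)).astype(int)
    net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X, y)

    # query the network on freshly sampled inputs; the network, not the original
    # training data, supplies the labels for the extraction step
    queries = rng.uniform(-1, 1, size=(5000, 2))
    answers = net.predict(queries)

    # fit a shallow, human-readable tree to the network's answers
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(queries, answers)
    print(export_text(tree, feature_names=['x0', 'x1']))
    print('agreement with network:', (tree.predict(queries) == answers).mean())

The ability to generate as many labeled queries as needed is what distinguishes this setting from ordinary learning from a fixed set of training examples.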
Abstract: For some 40+ years the speaker developed and taught undergraduate and graduate courses in electrical engineering; some 50+ courses in all. Among these, and of particular interest, were a series of seven courses in automatic feedback control systems theory, and a series of five courses in advanced linear and nonlinear circuit theory. In each of these two subject areas stability theory occupies a central role. In each case the speaker made a special effort to seek out, read, and utilize all published writings on stability theory. In recent years, the so-called Lyapunov stability theory has come to be of interest and use to electrical engineers especially concerned with the theory and design of various nonlinear systems utilized in engineering practice.
In his talk the speaker will trace, in chronological order, the history of the development of Lyapunov stability theory - which is of interest to those concerned with ``chaos'' theory.
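As a point of reference (a standard textbook statement, not material drawn from the talk): Lyapunov's direct method certifies stability of an equilibrium $x = 0$ of $\dot{x} = f(x)$ by exhibiting a function $V$ with
$$V(0) = 0, \qquad V(x) > 0 \ \text{for } x \neq 0, \qquad \dot{V}(x) = \nabla V(x) \cdot f(x) \le 0,$$
which implies stability (asymptotic stability when $\dot{V}(x) < 0$ for $x \neq 0$). For the damped pendulum $\ddot{\theta} + c\dot{\theta} + \sin\theta = 0$ with $c > 0$, the energy $V = \tfrac{1}{2}\dot{\theta}^2 + (1 - \cos\theta)$ gives $\dot{V} = -c\,\dot{\theta}^2 \le 0$, establishing stability of the rest state without solving the equation.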
Abstract: When trying to forecast the future behavior of real-world systems, two of the key problems are overfitting (particularly serious for noisy processes) and regime switching (the underlying process changes its characteristics). In this talk we show how Gaussian mixture models with nonlinear experts point to solutions to these problems. In connectionist terms, the architecture consists of several experts and a gating net. The nonlinear experts put out the conditional mean (as usual), but each expert also has its own adaptive width. The gating net puts out an input-dependent probability for each expert. There is a supervised component in learning, to predict the next value(s), and an unsupervised component, to discover the (hidden) regimes. We report a number of results:
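For readers unfamiliar with this architecture, here is a bare-bones numpy sketch of the forward computation only (no training loop; network sizes, parameter values, and widths below are placeholders of my own): a gating net turns the input into mixture weights, each expert produces a conditional mean, and each expert's own width yields an input-dependent Gaussian mixture density and a point forecast.

    import numpy as np

    rng = np.random.default_rng(0)
    D, K, H = 3, 2, 5          # input dimension, number of experts, hidden units per expert

    # each expert: a small tanh network for the conditional mean, plus its own width
    W1 = rng.normal(size=(K, H, D)); b1 = rng.normal(size=(K, H))
    W2 = rng.normal(size=(K, H));    b2 = rng.normal(size=K)
    sigma = np.array([0.2, 1.0])     # adaptive widths (fixed placeholders here)

    # gating net: linear scores passed through a softmax, so the probabilities depend on x
    Wg = rng.normal(size=(K, D));    bg = rng.normal(size=K)

    def forecast(x, y_grid):
        mu = np.array([W2[k] @ np.tanh(W1[k] @ x + b1[k]) + b2[k] for k in range(K)])
        scores = Wg @ x + bg
        g = np.exp(scores - scores.max()); g /= g.sum()          # gating probabilities
        # mixture density over a grid of candidate y values
        dens = sum(g[k] * np.exp(-(y_grid - mu[k])**2 / (2 * sigma[k]**2))
                   / np.sqrt(2 * np.pi * sigma[k]**2) for k in range(K))
        point = g @ mu                                            # mixture-mean point forecast
        return point, dens

    x = rng.normal(size=D)
    point, dens = forecast(x, np.linspace(-5, 5, 11))
    print('point forecast:', round(float(point), 3))
    print('density on grid:', np.round(dens, 3))

In training, the supervised part fits the experts' means and widths to the observed next values, while the gating probabilities play the role of (soft) regime assignments learned without labels.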