George Mobus' Research Interests

Home Page
Selected Publications
Adaptive Agents Lab

My textbook, Principles of Systems Science, Springer (Nov. 2014), is now available for ordering.

My personal blog,
Question Everything

A must-see tutorial on Critical Thinking at YouTube

See a demo of my robotics class
Requires the iTunes player or QuickTime.

My presentation on Biophysical Economics at the Pacific Science Center's Science Café in Tacoma

Multi-disciplinary Research Areas

Though my day job remains in Computer Science (especially teaching CS courses), my range of research interests is quite broad. While it is not immediately obvious from the topic names below, there is actually a thread of continuity that weaves among these areas. I am currently engaged in active research in several of them, with semi-active projects in the background in the others. Starting with the currently most active:

Biophysical Economics and Energy Systems

Systems Science and Energy

Systems science is a collection of highly interrelated subjects that, taken together, form a kind of meta-science, or a general science of science! Systems science includes subjects such as cybernetics, information theory, complexity theory, the universal theory of evolution (including sub-topics like emergence), network theory, and many more. What is unique about systems science is that its concepts apply broadly to every other science discipline. Indeed, many of the traditional disciplines have developed sub-branches named "systems ...", where the ellipsis can be filled in with names like 'biology', 'sociology', 'psychology', 'ecology', etc.

My textbook with co-author Michael Kalton, Principles of Systems Science, Springer (Nov. 2014), has just been published and will ship in November 2014. The book integrates the major aspects of systems science and serves as a general introduction both to the field as a whole and to its various sub-fields.

Presently I am deeply involved in the application of systems science to energy systems in particular. This ranges from building a computer model of alternative energy sustainability to working on the development of a new modeling language that makes it easier to express energy and material flows in dynamic systems (see below). Last fall quarter I took my sabbatical at SUNY-ESF (State University of New York, College of Environmental Science and Forestry) in Syracuse, NY, working with a group of scientists led by Professor Charles Hall, who have developed the extremely important concept of energy return on energy investment (EROEI). Hall and several others, coming from fields as diverse as systems ecology and business management, have begun to develop a whole new way to study economic phenomena. It builds upon the earlier work in Ecological Economics, which attempts to embed the more reliable aspects of economic theory within the framework of whole-systems ecology. The new approach is called Biophysical Economics.

Biophysical Economics

Ecological economics does attend to energy matters, but not as a main feature. Biophysical economics puts much more focus on energy and its flow through economic systems in order to better understand the nature of wealth production, growth, and resource depletion. Energy is extremely important because, for the modern industrial economies, over 80% of our energy comes from fossil fuels, and in the case of oil, the "kingpin" energy source, we have reached the peak of global production. The consequences of this phenomenon are just now beginning to be understood from a scientific perspective.

My research in this arena involves the development of a biophysical model of something I call an abstract economy. The main feature of the model is that it deals very explicitly with the physics of extracting energy from a fixed, finite reserve of fuel and the increasing energy cost of doing so. The EROEI associated with oil production has been in steady decline for the past one hundred years due to the increasing costs of finding and pumping more oil from exotic locations (e.g., continental shelves). Far more energy is used up producing the infrastructure for obtaining this harder-to-reach oil. Today, for every BTU of oil obtained from these locations, we use up 1/20th of a BTU of previously pumped oil. Oil pumped from shallow wells in Pennsylvania or West Texas nearly 100 years ago required only about 1/100th of a BTU for each BTU obtained.
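The ratios above can be expressed as a tiny calculation. This is an illustrative sketch using the figures quoted in the text; the function names are mine and the values are the rough illustrative numbers, not measured data:

```python
def eroei(energy_out, energy_in):
    """Energy return on energy investment: BTUs obtained per BTU invested."""
    return energy_out / energy_in

def net_energy(energy_out, energy_in):
    """Energy left over for the rest of the economy after the energy cost."""
    return energy_out - energy_in

# Early shallow wells: ~1/100 BTU invested per BTU obtained
early = eroei(1.0, 1.0 / 100)
# Harder-to-reach oil today: ~1/20 BTU invested per BTU obtained
today = eroei(1.0, 1.0 / 20)

print(f"EROEI then: {early:.0f}, now: {today:.0f}")
print(f"Net energy per BTU today: {net_energy(1.0, 1.0 / 20):.2f} BTU")
```

The point of the declining ratio is that the same gross extraction delivers ever less net energy to the rest of the economy.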

To date, my model has made it clear that the net energy flow (the gross energy less the energy cost), which is the energy used by the rest of the economy, has been in decline for the past thirty years or so. It is not hard to link this decline in net energy with many of the economic phenomena that have plagued our world over that time. We even have a way of explaining globalization as a response to declining net energy! Several papers are in progress that will explicate these phenomena.

Evolutionary, Cognitive Neuro-Psychology

Over the last few years I have rekindled one of my first loves in science — neurobiology — to explore the nature of real intelligence. This follows naturally from my work on autonomous agents (below) but has led me into a rather interesting realm that would not be obvious to those who have come to cognitive science strictly from the field of artificial intelligence.

The Search for Sapience — The Cognitive Basis of Wisdom

When one looks at the situation with energy and other global problems that seem to be coming to a head in recent times, one has to ask: "If humans are so clever, why have they failed to act in such a way that ensures a sustainable future?" Why, in other words, are humans generally so foolish?

Advances in understanding how the brain works have exploded over the last several decades, particularly in the first decade of the 21st century. Many disciplines are converging on the workings of the human mind. From psychology we continue to refine probes of behavior and decision making/problem solving. From neurobiology, especially with the advent of dynamic imaging techniques, we have begun to map control functions to specific areas of the brain. And from Evo-Devo (evolution and development taken together) we are beginning to understand how the modern human brain came into existence and how it helped Homo sapiens emerge as the dominant hominid as well as a symbol-manipulating (language and signs) sentience. These are extraordinarily exciting times in brain research.

But a major puzzle has emerged as we come to understand intelligence, creativity, and affect (emotions and feelings) in their evolutionary and contemporary contexts. Humans, even the most intelligent ones, often fail to make good judgments. Age helps, but is no guarantee that people will make wise choices. Many psychologists have begun to explore the psychological basis of wisdom and have defined a recognizable construct which they can probe with tests similar to those used to probe intelligence. It is now reasonably clear that wisdom is related to intelligence, but it is not just more intelligence. It involves reflective judgment and a wealth of tacit life-knowledge. Wise people choose what to turn their intelligence and creativity toward, what to learn, and which knowledge is useful in solving complex life and social problems. Wisdom involves superior moral judgments that benefit the largest numbers for the longest times.

At the same time, neurobiologists are determining the capabilities of the prefrontal cortex in its role of providing the so-called executive functions that guide the reasoning and problem-solving abilities of the mind. Recently attention has turned to the role of the extreme frontal pole, a patch of tissue right behind the eyes(!), in processing judgment. My interest is in determining whether this processing, which I have labeled sapience in order to distinguish the cognitive aspect from the performance and knowledge-base aspects of wisdom, is indeed the basis of what we recognize as wisdom.

A major question, from an evolutionary perspective, is this: If the human brain has evolved a higher form of judgment, geared presumably to the life problems faced by early Holocene hominids, is that capability sufficient to guide us in our present circumstances? This is a sociological question as much as an evolutionary or psychological one. Is modern Homo sapiens wise enough to make good choices on global and multi-generational scales?

Recent political events across the world, along with overpopulation and over-consumption of non-renewable resources, seem to reflect that we, as a species, have not learned much from history. They suggest that the answer to that question is no.

This research, which is both personal and academic, is integrative and geared to finding an answer to that question. Given the rate at which we, as a species, seem to be degrading our life-support systems, it might be good to know if we even have the mental capacity to solve the problems we've created.

Here is a partial bibliography of books that I have found particularly interesting in this quest.

And here is a bibliography of books on global issues that support the suggestion that we humans are not very wise as a species.

Computational Projects

My interest in the above research area, especially in light of the various global issues that face humanity, has motivated me to apply my computer science background to projects that just might contribute to the analysis and solving of some of these problems.

I have started two projects that are directed at producing tools for analyzing complex systems problems, both technical and social. One project involves the development of a new systems dynamics modeling language to aid researchers in studying highly complex, hierarchically structured dynamic systems. The other project involves developing a global-scale, structured, e-discourse platform that will allow participants from all over the globe to help in the top-down analysis of, and bottom-up design of solutions to, complex social problems (so-called "wicked" problems).

The Computing & Software Systems program at UWT offers a Master of Science degree. I invite anyone who wants to further their education in computing and has a bent for helping to save the world(!) to contact me regarding these two projects.

Dynamic Systems Modeling Language - DynSysMod

DynSysMod is a new modeling language for expressing discrete-time models of a wide variety of concrete systems. If you are familiar with languages such as DYNAMO or STELLA, you may appreciate the benefits that DynSysMod should provide. For starters, we have explicitly identified three different types of flows: matter, energy, and messages. This makes it easier to create models of systems that have large inputs of free energy to drive production processes. After several years of trying to delineate these separate flows in STELLA, and ending up with incredibly complex models that were difficult to interpret, it became apparent that real systems would be better represented if the flows were segregated.
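To illustrate the idea of segregated flow types (this is not actual DynSysMod syntax; the node names and rates here are hypothetical), a model's flows might be represented internally like this:

```python
from dataclasses import dataclass
from enum import Enum

class FlowType(Enum):
    MATTER = "matter"
    ENERGY = "energy"
    MESSAGE = "message"

@dataclass
class Flow:
    kind: FlowType
    source: str
    sink: str
    rate: float  # units per time step

# A production process driven by a large free-energy input:
flows = [
    Flow(FlowType.MATTER, "ore_stock", "smelter", 10.0),
    Flow(FlowType.ENERGY, "fuel_reserve", "smelter", 50.0),
    Flow(FlowType.MESSAGE, "controller", "smelter", 1.0),
]

# Segregating the types makes per-type accounting trivial:
energy_in = sum(f.rate for f in flows if f.kind is FlowType.ENERGY)
print(energy_in)  # 50.0
```

With the types tagged explicitly, an energy balance can be computed over any subsystem without untangling it from the material and message flows.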

Another innovation planned for DynSysMod is a method for representing models in hierarchical decomposition or upward composition. In other words, one can start defining a model from a very high-level macro-view and then decompose the system into its component subsystems. Alternatively, one can define a system that can later be combined with another system operating in the same time domain. In part this will be achieved by allowing different levels in the hierarchy to operate with different time-step increments. For example, a shop floor might be updated on an hourly basis while inventories are updated on a daily basis. We are currently exploring the computational efficiencies that might be gained by this approach, as opposed to updating an entire model at the smallest time increment of any of its components.
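The shop-floor example can be sketched as a minimal multi-rate loop. This is my own illustration of the idea, assuming a hypothetical shop producing 5 units per hour and an inventory tallied once per day:

```python
def simulate(hours, rate_per_hour=5.0):
    """Update the shop floor every hour; roll its output into
    inventory once every 24 hours (a coarser time step)."""
    shop_output = 0.0
    inventory = 0.0
    for hour in range(1, hours + 1):
        shop_output += rate_per_hour      # fine-grained hourly update
        if hour % 24 == 0:                # coarse-grained daily update
            inventory += shop_output
            shop_output = 0.0
    return inventory

print(simulate(48))  # two simulated days -> 240.0
```

The efficiency gain comes from touching the coarse level once per 24 fine steps rather than recomputing every level at the smallest increment.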

DynSysMod is being developed primarily to allow those interested in energy systems to test various aspects of, say, an alternative energy system such as solar photovoltaics or wind. I began this project in response to the difficulties I encountered in modeling what I call the Energy Systems Sustainability Criterion. This criterion is relatively simple to understand but fiendishly difficult to pin down in practice. It says that any energy conversion capital equipment (such as solar panels or wind turbines) must, in the long run, produce enough excess energy, above that consumed in the economy, to maintain and replace itself when its useful life is over. At first glance this might seem like a simple thing to verify, but it has second- and higher-order aspects: for example, there must be enough energy to account for the fraction of energy used to maintain and build the manufacturing plant where the capital equipment is produced (as well as to cover other work-costs). A modeling language that breaks out the explicit flows and reservoirs of energy, and that captures the first and second laws of thermodynamics, should be very helpful in determining how energy flows through such a complex macro-system.
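As a first-order sketch of the criterion (the function and all the numbers are hypothetical illustrations, not output of my model), a sustainability check might look like:

```python
def is_sustainable(lifetime_output, operating_cost, replacement_cost,
                   plant_fraction):
    """First-order check of the sustainability criterion: over its life,
    the equipment must deliver enough excess energy (beyond what the
    economy consumes) to replace itself, including the share of energy
    embodied in the plant that manufactures it (a second-order cost).
    All quantities are in the same energy units."""
    excess = lifetime_output - operating_cost
    total_replacement = replacement_cost * (1.0 + plant_fraction)
    return excess >= total_replacement

# A panel yielding 100 units over its life, with 60 consumed in the
# economy, 30 to rebuild it, and 20% extra for the factory's share:
print(is_sustainable(100.0, 60.0, 30.0, 0.2))  # True (40 >= 36)
```

The higher-order terms (the plant that builds the plant, and so on) are exactly what make the criterion hard to pin down without a modeling language that tracks the energy flows explicitly.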

See MS graduate Jennifer Leaf's final project report covering the first phase of this project.

Electronic Discourse Systems: The ConsensUs Project

Electronic discourse systems have been developing since the early days of e-mail lists and bulletin boards. The Internet and various technologies enable large-scale e-discourse systems such as Usenet and, more recently, Wiki collaboration.

I have recently embarked on developing a new e-discourse system code-named "ConsensUs", which extends concepts of structured discourse for problem exploration and solving. This system will employ latent semantic analysis (think of Google's "Similar pages" feature) to aggregate and consolidate large volumes of commentary as a means of assisting moderators and managing information overload.
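Full latent semantic analysis uses a singular value decomposition of the term-document matrix; as a much cruder stand-in for the aggregation idea, here is a sketch that greedily groups comments by cosine similarity of word counts (the threshold and sample comments are arbitrary, and this is not the ConsensUs code):

```python
from collections import Counter
from math import sqrt

def vectorize(text):
    """Bag-of-words vector: word -> count."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def group_similar(comments, threshold=0.5):
    """Greedy grouping: each comment joins the first group whose
    representative (first member) it resembles, else starts a group."""
    groups = []
    for c in comments:
        v = vectorize(c)
        for g in groups:
            if cosine(v, g["vec"]) >= threshold:
                g["members"].append(c)
                break
        else:
            groups.append({"vec": v, "members": [c]})
    return [g["members"] for g in groups]

for group in group_similar(["carbon tax now",
                            "carbon tax policy now",
                            "build more transit"]):
    print(group)
```

A real deployment would replace the raw word counts with a reduced semantic space so that near-synonymous comments cluster together, but the aggregation step, collapsing many comments into a few representative threads for moderators, is the same.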

The purpose of ConsensUs is to support global-scale discourse for global-scale problems like global warming/climate change. Current e-discourse systems do not seem to scale well. We hypothesize that the form of structure used, along with semantic analysis and other techniques, will allow ConsensUs to scale well. Additionally, while the alpha development system is being built on Web technologies, we plan to port the system to a peer-to-peer (P2P) platform such as Sun Microsystems' JXTA project.

The ConsensUs Web site gives an (as yet incomplete) description of the project. This project is being developed by my students under the direction of Don McLane and myself. Both undergraduate and graduate students are participating in developing the initial code base. We plan to open this project up for open-source development once that base is in place. Stay tuned for further developments!

Adaptive, Autonomous, Artificial Agents/Architecture

It is now widely accepted that in order for artificial agents (robots and softbots) to achieve the level of autonomy and robustness that is desired for a number of important problem domains, these agents will need to be able to adapt to dynamic, uncertain environments. They will need to learn from experience and be able to predict future states of their environments based on a (probably) imperfect, acquired model of their world. The capacity to predict allows agents not merely to react to contingencies but to anticipate and act in advance of mission-impacting events. In this way preemptive agents can avoid dangerous situations or seek advantageous ones. To do this they need to exploit causal relations [412k including graphics] between environmental factors. An Adaptrode-based learning architecture provides a means for agents to learn and use causal relations to become anticipatory and preemptive. The Adaptrode is strongly inspired by biological learning. The model was derived from neurobiological and behavioral considerations (see Adaptrode Learning [114k including graphics] for the derivation and Adaptrode Learning in Artificial Agents [97k including graphics] for how it is employed).

See the Adaptive Agents Lab Web Site

Foraging search, inspired by how animals find food, relies on a causal-relation heuristic. It assumes that, associated with each sought resource object, there is a group of objects that have a causal relationship with it. Such objects are called "cues". In the absence of a cue (labeling a path out of a given node), a stochastic component selects a path randomly on each iteration of a depth-first search. If, however, a cue is detected, the agent switches to a directed search (best-first search). The trick is in how the agent learns what constitutes a valid cue. I have been investigating the learning competence of the Adaptrode mechanism along with the architecture necessary to learn and use causal relations for guiding search.
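The search strategy just described can be sketched as follows. This is my own illustrative rendering, not the Adaptrode implementation; the graph, cue strengths, and function names are hypothetical:

```python
import random

def foraging_search(graph, cues, start, goal, max_steps=1000, seed=0):
    """Search `graph` (node -> list of neighbors) for `goal`.
    `cues` maps a node to the learned strength of its causal
    association with the goal. With no cued neighbor, wander
    stochastically (depth-first-like); otherwise move to the most
    strongly cued neighbor (directed, best-first-like)."""
    rng = random.Random(seed)
    node, path = start, [start]
    for _ in range(max_steps):
        if node == goal:
            return path
        neighbors = graph[node]
        cued = [n for n in neighbors if cues.get(n, 0.0) > 0.0]
        if cued:
            node = max(cued, key=lambda n: cues[n])  # directed search
        else:
            node = rng.choice(neighbors)             # stochastic wander
        path.append(node)
    return None  # gave up

# With cues present, the agent follows the cue gradient to the resource:
graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": ["B"]}
cues = {"B": 0.8, "D": 0.9}
print(foraging_search(graph, cues, "A", "D"))  # ['A', 'B', 'D']
```

Learning, in the Adaptrode scheme, would adjust the cue strengths from experience; here they are simply given, which is exactly the part the learning architecture has to supply.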

I have extended the concept of foraging search into the realm of prototypical intelligence. That is, the brain mechanisms needed to conduct foraging search behavior can be used to explain the mental processes that we call thinking. What interests me is the progression from reactive systems, through adaptive and anticipative systems, to intelligence. Is it possible that anticipative behavior is the basic, possibly the only, capability necessary for the evolution of intelligence in animals?

Last update: 08/15/2014

Return to Home Page