Part One: Form and Function
Systems theory, as it is now called, has its roots in two ideas developed in tandem by thinkers half a world apart. In the 1940s, Ludwig von Bertalanffy – an Austrian biologist then working in Germany – proposed what he later called general systems theory. He “emphasized that real systems are open to, and interact with, their environments, and that they acquire qualitatively new properties through emergence, resulting in continual evolution” (Heylighen & Joslyn, 1992). This idea stood in opposition to the simplifying assumptions and divisive strategies intrinsic to reductionist scientific philosophy. Nature, according to von Bertalanffy, is a system – a word that implies relationship – and one cannot understand a system based on the properties of its components. Rather, the interaction of the components amongst themselves and with their environments both defines the system and directs its development. Hence, the separation of scientific inquiry into fields – such as physics, chemistry, biology, psychology, etc. – is both arbitrary and misleading, as each field is the study of a system that either is the environment of, or interacts with, all others. General systems theory, then, applies to all systems in general, and it was on this principle that von Bertalanffy wished to reunify the scientific community.
At the same time, the American mathematician Norbert Wiener became interested in the role of feedback in the behavior of antiaircraft guns (de Rosnay, 2000). The traditional information model had been, and remains:
Input → Process → Output
However, Wiener determined that purposeful behavior in machines stems from their ability to recursively adapt future behavior based on assessments of past performance. In other words, while the machine processes some stimulus (input) to produce some behavior (output), it must recognize that each output behavior constitutes a component of subsequent input stimuli. This discovery, which he called a negative feedback loop, spawned a new field of inquiry that investigated living things from the perspective of engineering, and machines from the perspective of physiology. A series of meetings, spearheaded by neurophysiologists Arturo Rosenblueth and Warren McCulloch (de Rosnay, 2000), furthered this research, especially as it applied to “communication and control between animals and machines” (Heylighen, Joslyn and Turchin, 1999). Thus was cybernetics born: a word coined by Wiener (1945) from the Greek kubernetes, meaning steersman, and signifying “the art of managing and directing highly complex systems” (de Rosnay, 2000).
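The loop Wiener described can be sketched in a few lines. This is only an illustrative toy, not a model from any of the sources cited here: the scenario (steering a value toward a goal), the function name, and the gain are all invented for the example. What it shows is the essential mechanism: each output is folded back into the next input, and the error between goal and output drives the correction.

```python
def feedback_loop(goal, state, gain=0.5, steps=20):
    """Negative feedback: repeatedly assess past performance (the error)
    and adapt future behavior (the correction) until the gap closes."""
    history = [state]
    for _ in range(steps):
        error = goal - state          # assess: how far off was the last output?
        state = state + gain * error  # adapt: fold the output back into the input
        history.append(state)
    return history

trace = feedback_loop(goal=20.0, state=5.0)
# Each step halves the remaining error, so the state converges on the goal.
```

Because the correction opposes the error, the loop is *negative* feedback; a positive loop, which amplifies the error instead, would drive the state away from the goal.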
Though similar, cybernetics and general systems theory are not synonymous philosophies. “[General] systems theory has focused more on the structure of systems and their models, whereas cybernetics has focused more on how systems function” (emphasis in original; Heylighen et al., 1999). Von Bertalanffy himself drew the distinction that, while his general systems theory deals with systems in any context – that is, in general – cybernetics “is a theory of control systems based on communication (transfer of information) between systems and environment and within the system, and control (feedback) of the system’s function in regard to environment… Cybernetic systems are a special case, however important, of systems showing self-regulation” (von Bertalanffy, 1968 in Scrivener, 2002).
Self-regulation, or self-organization, is the concept that links the function of a system to its structure. W. Ross Ashby (1956) – who studied with von Bertalanffy – pioneered this principle, which states that within any open system, there exist regions wherein the system’s internal organization increases automatically as a consequence of the laws (e.g. physical, social, etc.) acting on that system. Applied to general systems theory and cybernetics, self-organization becomes the goal of a system interacting with its environment, and feedback becomes the mechanism by which the system achieves that goal. Since one cannot understand structure and function “in separation, it is clear that cybernetics and [general] systems theory” are “two facets of a single approach” (Heylighen et al., 1999).	That single approach is here called systems theory.
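Ashby's principle can be made concrete with a toy of my own construction (the ring of cells and the majority rule below are illustrative stand-ins, not drawn from Ashby's text): each cell on a ring obeys one fixed local law – adopt the majority value among itself and its two neighbors – and the ring's internal organization increases automatically, with no central controller directing it.

```python
def step(cells):
    """Synchronously apply one fixed local law to a ring of 0/1 cells:
    each cell adopts the majority value of itself and its two neighbors."""
    n = len(cells)
    return [1 if cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n] >= 2 else 0
            for i in range(n)]

def disorder(cells):
    """Count adjacent disagreements on the ring; fewer disagreements
    means greater internal organization."""
    n = len(cells)
    return sum(cells[i] != cells[(i + 1) % n] for i in range(n))

start = [1, 1, 0, 1, 1, 1, 0, 0, 0, 0]
cells = start
for _ in range(5):
    cells = step(cells)
# disorder(start) is 4; disorder(cells) is 2 – order increased purely
# as a consequence of the law acting on the system.
```

No cell "knows" the global pattern; organization emerges from local interactions alone, which is the point of the principle.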
Since its conception, systems theory – or systems principle, as some prefer (cf. Mandel, 1996; 1997) – has evolved into several related disciplines that consider natural phenomena from a perspective of connectedness rather than one of linear causality. One of these disciplines – second-order cybernetics – developed in the 1970s in response to conceptual shifts in the cybernetic field driven by post-war technological advancement. The traditional cybernetic philosophy in the 1960s – then already predisposed to focus on computers and the purposes for which they were constructed – began to languish under the practical, reductionist influence of engineers and computer scientists. This concentration on purpose de-emphasized, even nullified, the role of self-organization in a system, so that the system became a “passive, objectively given ‘thing’, that can be freely observed, manipulated, and taken apart” (Heylighen and Joslyn, 2001). To return to and preserve the principles of “autonomy, self-organization and the subjectivity of modeling” required recognition that a “system [is] an agent in its own right, interacting with another agent, the observer,” and that any “observations [of that system] will depend on their interaction” (Heylighen and Joslyn, 2001). This broadened emphasis on the importance of the system/observer relationship remains the hallmark of second-order cybernetics.
-G
References:
Ashby, W. R. 1956. An Introduction to Cybernetics. New York: Methuen.
De Rosnay, J. 2000. “History of Cybernetics and Systems Science”, in: F. Heylighen, C. Joslyn and V. Turchin (editors): Principia Cybernetica Web (Principia Cybernetica, Brussels). Retrieved December 10, 2004, from http://pespmc1.vub.ac.be/CYBSHIST.html
Heylighen, F. and Joslyn, C. 1992. “What is Systems Theory?” in: F. Heylighen, C. Joslyn and V. Turchin (editors): Principia Cybernetica Web (Principia Cybernetica, Brussels). Retrieved December 10, 2004, from http://pespmc1.vub.ac.be/SYSTHEOR.html
Heylighen, F., Joslyn, C. and Turchin, V. 1999. “What are Cybernetics and Systems Science?” in: F. Heylighen, C. Joslyn and V. Turchin (editors): Principia Cybernetica Web (Principia Cybernetica, Brussels). Retrieved December 10, 2004, from http://pespmc1.vub.ac.be/CYBSWHAT.html
Heylighen, F. and Joslyn, C. 2001. “Second-Order Cybernetics”, in: F. Heylighen, C. Joslyn and V. Turchin (editors): Principia Cybernetica Web (Principia Cybernetica, Brussels). Retrieved December 10, 2004, from http://pespmc1.vub.ac.be/SECORCYB.html
Mandel, T. 1996; 1997. Is There a General System? The First International Electronic Seminar on Wholeness. Retrieved January 25, 2005, from http://www.newciv.org/ISSS_Primer/asem10tm.html
Scrivener, A. B. 2002. Introduction. A Curriculum for Cybernetics and Systems Theory. Retrieved December 10, 2004, from http://www.well.com/user/abs/curriculum.html#What