Niflheim Media Photography

Las Vegas Photography

702-803-4263

professional photojournalist

niflheim.thompson@gmail.com

———


Jason Thompson received his Bachelor of Arts degree in Law & Society from Winona State University and the International University of Ulaanbaatar, where he studied U.S.-Mongolian foreign relations, 1860-1920. He also attended diesel and hybrid technology programs at Hennepin Technical College in Minnesota and foreign ambassadorship courses at Soonchunhyang University in the Republic of Korea and in the Democratic People’s Republic of Korea, and attained a positive leadership certificate at the University for Peace in Costa Rica. He has a Master of Arts degree from the University of Nevada, Las Vegas, where he focused on how climate control was visually framed in the media (using content analysis), on enhanced weathering techniques that generate power and control atmospheric carbon dioxide concentrations using olivine powder, and on sonic and high-energy x-ray applications. Jason has written for Diesel Power and The Costa Rica News. Niflheim Media Injection Services orchestrated this article:

https://www.forbes.com/sites/jamesconca/2015/09/14/amazons-jeff-bezos-parachutes-into-north-carolina-to-save-wind-power/#1258ebe81d73

Contact us:


Niflheim.thompson@gmail.com

1(702)803-4263

Support Niflheim Media:

https://niflheimmedia.wordpress.com/2016/07/28/support-niflheim-media/

About Niflheim Media:

https://niflheimmedia.wordpress.com/2017/04/27/how-news-makes-money/

Niflheim Media Places Your Idea, Product or Policy in the News:

https://niflheimmedia.wordpress.com/2016/03/18/niflheim-media-services/

Climate control history and eugenics:

https://niflheimmedia.wordpress.com/2016/04/01/climate-change-science/

 


Ways to Warm Our Planet

We will be looking for ways to warm the planet soon.

https://www.nature.com/articles/s41467-018-03800-0

Steering comets to change obliquity might be the most feasible.

Changing the albedo of ice sheets with black carbon might help a little…

Re-directing ocean currents might work a little…

Changing the atmosphere doesn’t do much; there isn’t enough potential energy…

Decreasing clouds might help via restricting air travel…
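The albedo item in the list above can be put on a rough footing with a zero-dimensional energy-balance model. This is a textbook back-of-envelope sketch, not anything from the post: the solar constant and present planetary albedo are standard values, and the darkened-ice albedo of 0.28 is purely an illustrative assumption.

```python
# Toy zero-dimensional energy-balance model: a rough sketch of why
# changing albedo changes planetary temperature. Standard textbook
# values; the 0.28 "darkened" albedo is an illustrative assumption.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2

def equilibrium_temp(albedo):
    """Effective temperature where absorbed solar flux balances
    emitted thermal radiation: S0 * (1 - a) / 4 = sigma * T^4."""
    return (S0 * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

t_now = equilibrium_temp(0.30)    # Earth's present planetary albedo ~0.30
t_dark = equilibrium_temp(0.28)   # slightly darker surface (e.g. soot on ice)
print(f"albedo 0.30 -> {t_now:.1f} K")
print(f"albedo 0.28 -> {t_dark:.1f} K (+{t_dark - t_now:.1f} K)")
```

Even a two-percentage-point drop in albedo shifts the effective temperature by a couple of kelvin, which is why the black-carbon idea is at least physically plausible, if only "a little."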



Small is Beautiful

https://en.wikipedia.org/wiki/Small_Is_Beautiful


The book says:

Socialists should insist on using the nationalised industries not simply to out-capitalise the capitalists — an attempt in which they may or may not succeed — but to evolve a more democratic and dignified system of industrial administration, a more humane employment of machinery, and a more intelligent utilization of the fruits of human ingenuity and effort. If they can do this, they have the future in their hands. If they cannot, they have nothing to offer that is worthy of the sweat of free-born men.

Part I summarizes the economic world of the early 1970s from Schumacher’s perspective. In the first chapter, “The Problem of Production”, Schumacher argues that the modern economy is unsustainable. Natural resources (like fossil fuels) are treated as expendable income, when in fact they should be treated as capital, since they are not renewable and thus subject to eventual depletion. He further argues that nature’s resistance to pollution is limited as well. He concludes that government effort must be concentrated on sustainable development, because relatively minor improvements, for example technology transfer to Third World countries, will not solve the underlying problem of an unsustainable economy. Schumacher’s philosophy is one of “enoughness”, appreciating both human needs and limitations and the appropriate use of technology. It grew out of his study of village-based economics, which he later termed Buddhist economics, the subject of the book’s fourth chapter.

When Social Science, Climate Science & Science Science Converge: Limits to Growth… We need to add physics, love, urban farming and urban sawyering to the equation, less NSSM 200 and NSSM 201.

https://reason.com/archives/2012/04/18/the-limits-to-growth-40-year-update

https://www.amazon.com/Models-Doom-Critique-Limits-Growth/dp/0876639058

Models of Doom: A Critique of The Limits to Growth



 

2052: A Global Forecast for the Next Forty Years, written by Jørgen Randers

http://www.2052.info/


Jørgen Randers (born 1945) is professor of climate strategy at the Norwegian Business School, where he works on climate and energy issues, scenario analysis and system dynamics. He lectures widely at home and abroad on sustainable development issues – particularly on the future and climate change – for all types of corporate and non-corporate audiences.

Jørgen Randers has spent one third of his life in academia, one third in business and one third in the NGO world. He is a non-executive member of several corporate boards in Norway, including the state-owned postal service. He also sits on the sustainability councils of The Dow Chemical Company in the US and AstraZeneca in the UK.

He was President of the Norwegian Business School BI from 1981 to 1989, and Deputy Director General of WWF International (World Wide Fund for Nature) in Switzerland from 1994 to 1999. He chaired the Commission on Low Greenhouse Gas Emissions, which reported in 2006 to the Norwegian cabinet on how Norway can cut its greenhouse gas emissions by two thirds by 2050.

He has written a number of books and scientific papers. He co-authored The Limits to Growth in 1972 and its sequels in 1992 and 2004. In 2012 he published 2052: A Global Forecast for the Next Forty Years, which is now available in eight languages, with more than 100,000 copies in print.

He has received many prizes and awards, including an honorary doctorate from Anglia Ruskin University in Cambridge, UK. He is a full member of the Club of Rome.



The Limits to Growth

The Limits to Growth, first edition cover.
Language: English
Published: 1972
Publisher: Potomac Associates – Universe Books
Pages: 205
ISBN: 0-87663-165-0
OCLC: 307838

The Limits to Growth (LTG) is a 1972 report[1] on the computer simulation of exponential economic and population growth with a finite supply of resources.[2] It was funded by the Volkswagen Foundation.

Google Resurrects Embrace, Extend, Extinguish

https://www.technewsworld.com/story/83972.html

This is interesting:

This almost sounds like a plot for a novel. Microsoft creates a successful strategy called “Embrace, Extend, Extinguish” and then promptly forgets it, resulting in a string of failures.

Google, which up to now seemed happy to repeat Microsoft’s mistakes, accidentally picks up a successful Microsoft practice and uses it against Apple — likely taking out a number of its Android partners in the process.

Kings of Norway


https://en.wikipedia.org/wiki/Monarchy_of_Norway 

https://en.wikipedia.org/wiki/Harald_Hardrada

Wikipedia said

Harald Sigurdsson (Old Norse: Haraldr Sigurðarson; c. 1015 – 25 September 1066), given the epithet Hardrada (Old Norse: harðráði, modern Norwegian: Hardråde, roughly translated as “stern counsel” or “hard ruler”) in the sagas,[2] was King of Norway (as Harald III) from 1046 to 1066. In addition, he unsuccessfully claimed the Danish throne until 1064 and the English throne in 1066. Before becoming king, Harald had spent around fifteen years in exile as a mercenary and military commander in Kievan Rus’ and of the Varangian Guard in the Byzantine Empire.


 

Harald Hardrada window in Kirkwall Cathedral. Colin Smith / CC BY-SA 2.0

What is a climate model and what is a global model? Is there a difference? Both were created on the first computers: Jay Wright Forrester (July 14, 1918 – November 16, 2016), Whirlwind or Wizard of Oz….


 

Climate models are made up of ones and zeroes. Models have boundary conditions, or rules, set up in the programs. Then the programs are run, or simulated… The programs are made of statistical calculations. Chaos theory.
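The chaos-theory point can be shown with the smallest possible "model." The logistic map below is a standard toy example, not a climate model: it is just a rule iterated forward from a boundary condition, and two runs that start almost identically end up wildly different.

```python
# Minimal illustration of sensitivity to initial conditions: the
# logistic map x -> r*x*(1-x) is a one-line deterministic "model"
# whose runs diverge from nearly identical starting values. This is
# one reason large simulation models are run as ensembles.

def logistic_run(x0, r=3.9, steps=50):
    """Iterate the classic chaotic logistic map and return the path."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_run(0.200000)
b = logistic_run(0.200001)   # boundary condition shifted by one millionth
print(f"step 10: {a[10]:.4f} vs {b[10]:.4f}")
print(f"step 50: {a[50]:.4f} vs {b[50]:.4f}")
```

The early steps agree to many decimal places; by the later steps the two trajectories bear no resemblance to each other, even though the rule is completely deterministic.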

 


Jay Wright Forrester (July 14, 1918 – November 16, 2016) was a pioneering American computer engineer and systems scientist. He was a professor at the MIT Sloan School of Management. Forrester is known as the founder of system dynamics, which deals with the simulation of interactions between objects in dynamic systems.

Industrial Dynamics was the first book Forrester wrote using system dynamics to analyze industrial business cycles. Several years later, interactions with former Boston Mayor John F. Collins led Forrester to write Urban Dynamics, which sparked an ongoing debate on the feasibility of modeling broader social problems.

The urban dynamics model attracted the attention of urban planners around the world, eventually leading Forrester to meet a founder of the Club of Rome. He later met with the Club of Rome to discuss issues surrounding global sustainability; the book World Dynamics followed. World Dynamics took on modeling the complex interactions of the world economy, population and ecology, which was controversial (see also Donella Meadows and Limits to Growth). It was the start of the field of global modeling.[3] Forrester continued working in applications of system dynamics and promoting its use in education.

https://en.wikipedia.org/wiki/Jay_Wright_Forrester

Whirlwind I was a Cold War-era vacuum tube computer developed by the MIT Servomechanisms Laboratory for the U.S. Navy. It was among the first digital electronic computers that operated in real-time for output, and the first that was not simply an electronic replacement of older mechanical systems.

It was one of the first computers to calculate in parallel (rather than serial), and was the first to use magnetic core memory.

https://en.wikipedia.org/wiki/Whirlwind_I

https://en.wikipedia.org/wiki/Magnetic-core_memory

The following is a brief quote describing the birth of the control paradigm: climate control, birth control, quality control, human resources, and climate models, as well as global models and the college system.

https://concordlibrary.org/special-collections/oral-history/Forrester

Dr. Jay Forrester
29 Kings Lane

Age 76

Interviewed December 6, 1994

Concord Oral History Program
Renee Garrelick, Interviewer.

Dr. Jay Forrester was a graduate electrical engineering student in 1944 at MIT when he was asked to design a flight training project for the Navy, which became Project Whirlwind. The work on aircraft stability and control analysis for the project converged in time with the development of digital computer technology and became the key to a national air defense system that laid the groundwork for the birth of the minicomputer industry. Project Whirlwind and Dr. Forrester’s contribution resulted in the first digital computer that could be put to such practical uses as controlling manufacturing processes and directing airplane traffic.

Dr. Jay Forrester: I joined Gordon Brown along with two other people, and the four of us started the Servomechanisms Laboratory at MIT. Gordon Brown was the director, and the work was devoted to developing remote control servomechanisms for Army gun mounts and for Navy radar units. These control systems would take a very weak signal from an analog computer or director and then would position either the gun mount or radar set to a corresponding position, and so these were devices used to position heavy military equipment based on small controlling signals. That work continued through World War II. Most of the work that I did was with hydraulic oil-driven equipment, rather than electrical, because at that time the Army in particular was very suspicious of electronic equipment and did not trust it and would not use it except in radios where it was required. They much preferred that their control systems be made with mechanical hydraulic equipment.

At one stage a control unit that I had designed had found its way to the Pacific. In fact, the history of the occasion came about because we had built an experimental control unit to go on an experimental radar that the Radiation Laboratory had built to demonstrate the possibility of directing fighter planes to incoming enemy aircraft that were approaching the fleet. The experimental control system had been put on the experimental radar just to demonstrate the principles that were involved when the captain of the U.S.S. Lexington was brought for a tour of the Radiation Laboratory, shown what they were working on and told in six or nine months there would be production units of that sort available. According to the story, he said “That’s too long, I want that one right there put on my ship now.” Nobody had expected that these experimental units were in fact going to be used immediately. He prevailed and got the equipment on his ship, and it operated in the Pacific theater for six months or more. At which time some difficulties arose, and I volunteered to go out and see what the trouble was and try to remedy it. Not knowing what the real problem was, I packed up everything that I thought I would need, all the spare parts and the tools in a rather large foot locker that was almost solidly packed with metal and weighed about 230 lbs. I took this as baggage on a DC-3 on a 24-hour flight between Boston and San Francisco. One of the very amusing sidelights was the baggage people looked at this, and it was printed on it 230 lbs., and they laughed and considered it a great joke, and grabbed a hold of it, and nothing at all happened because it really was 230 lbs.

Anyway, having worked on the unit in Pearl Harbor and not being entirely finished with what I was doing, they invited me to go along for what turned out to be the invasion of Tarawa and a turn down through two chains of the Marshall Islands that were still in the control of the Japanese. The objective was to bomb the Japanese air fields and the objective of the Japanese was to resist these, and so there was an all-day air battle. About 11:00 that night as the task force was leaving the area, the Japanese succeeded in hitting the Lexington with one torpedo, which cut off one of its four propellers and made a hole in the side making it less than fully maneuverable. It eventually did go back to Pearl Harbor for minor repairs and then for full repairs.

In 1944 I was considering leaving MIT and perhaps going into some industrial company in feedback control systems when Gordon Brown showed me a list of a number of projects that were taking shape or available that one might work on. Out of this list I picked the one related to the development of a stability and control analyzer for aircraft that Admiral Louis deFlorez had asked MIT to consider. This had originally been directed to the aeronautics department and they had come to Gordon Brown in the Servomechanisms Laboratory, and through that connection I began to work on an analog computer to solve the equations of the motion of an airplane. Before this, there had been aircraft flight trainers. Link Company and Western Electric and others had made cockpits that would behave like known aircraft. The objective in this project was to take wind tunnel data from a model of an airplane and create a realistic aircraft feel so that pilots could judge the aircraft before it was actually built. We worked on this for one year and decided that the analog computer idea was not going to be satisfactory. Through discussions with Perry Crawford, who worked at the Special Devices Center of the Navy and who were sponsors for this program, we decided to shift over to the newly emerging field of digital computers. The advantage of the digital computer was that it would get away from the most serious disadvantage of the analog machines. The analog machines were themselves imperfect mechanical and electrical devices, and if you put a large number of them together, you weren’t sure whether they would be solving the problem you gave to them or simply solving the consequences of their own idiosyncrasies as they interacted with one another.

A clock would not be a very good point of departure as a comparison because it counts the swing of the pendulum and therefore it is somewhere in between the analog and digital equipment. The digital equipment was based on computing with numbers and the analog equipment based on computing with the positions with mechanical shafts or the voltages of electronic circuits, and any of these electromechanical devices have a rather limited range of sensitivity in their own internal noise, and uncertainties are apt to be large compared to some of the computations you want to make so they are difficult to use in large numbers. But the analog computers had preceded digital computers by a good many years. They had been the first computing devices to be used.

Robert Everett started with me around 1941, and worked with me in the Servomechanisms Laboratory when we were working on remote control devices for gun mounts and radar sets. He then continued with me as we started the project for the aircraft stability analyzer, which then became the Digital Computer Laboratory at MIT where I was the director and he was associate director. It was in the Digital Computer Laboratory along with discussions and inspiration from Perry Crawford in the Special Devices Center that we began to see the opportunity to use digital computers as combat information centers to handle the complicated flow of information in a military situation. This then led us into high speed computers that had to be very reliable and the Whirlwind Project that took shape was characterized by its good engineering and its search for highly reliable circuits. A lot of the other computers at that time, and there were several other digital computer projects going on at that time, were devoted to scientific computation where if the machine stopped working, you could begin over or do the job tomorrow. In the kind of applications that we were anticipating, the computer was part of a real time ongoing situation and would have to be highly reliable.

It was an unusual group of people that worked on the project. Most of them had come, as Robert Everett and I had, as graduate students into the electrical engineering department. I think the esprit de corps came from the atmosphere of the laboratory, the extent to which high responsibilities or major responsibilities were given to very young people who managed to rise to those challenges, and where they could immediately see themselves making a reputation and progress. They would work on a circuit in the laboratory and then go directly to the big International Institute of Radio Engineers convention in New York and present a paper on what they had been doing before a national and international audience, so they saw themselves as very important leaders in a new pioneering field.

The name Whirlwind came from the Special Devices Center and possibly from Perry Crawford whom I have mentioned. They had several computer projects of various sorts. Some of them were analog and some of them were digital, and they decided to name them after various atmospheric disturbances, so there was the cyclone project and the hurricane project and some others and the Whirlwind Project. It was one of several Navy computer projects.

It was started by the Special Devices Center which was located on Long Island and was devoted to pioneering unusual pace-setting military equipment. That’s where the original work was sponsored. All the digital computer work and the aircraft analyzer work was done after World War II. The Special Devices Center was merged into the Office of Naval Research in Washington in 1946. Perry Crawford stayed in the picture, and he did a great deal of the fundraising and promoting of the idea of digital computers in the Navy. He worked at every level from the Chief of Naval Operations up and down the line. He was quite uninhibited and moved around to keep people focused on the possibilities of digital computers.

Officially, the work began in late 1944 and the analog computer part of it ran through 1945, and the shift over to serial digital computers was roughly 1946. We decided that serial digital computers, which were very popular at that time because they were relatively simple, would not be fast enough. So by about 1947 we had shifted over to the parallel digital computer idea as the only way to get the speed that would be necessary for the problems that we were interested in.

In 1948 we had written two memoranda about the possibility of the digital computer handling the combat information flow in a naval task force, taking information from aircraft, taking information from surface ships, information from submarines and putting it together so that the whole picture could be seen. That led to a project with the Air Force on air traffic control. We had a small project for a period of a couple of years, the idea of applying digital computers for civilian air traffic control.

While that was going on, the Soviet Union fired its first atom bomb, and there began to be concern about the air defense system, the possibility of Soviet attacks over the North Pole with aircraft. That became the driving force for looking for an improved air defense system because it was becoming rapidly evident that the manual handling of information would not be satisfactory in terms of high speed aircraft and modern weapons. That concern then was behind a project that was headed up by George Valley, an associate professor of physics at MIT. He began to look into alternative methods, and he was directed to come and see me and together we discussed the situation. I proposed to him what we had already proposed to the Navy that digital computers could provide the information analysis and consolidation that would be necessary in such an application. This was very radical, very daring and something that the Air Force would not dare to simply endorse and proceed.

So the upshot result was the creation of what was called Project Charles, which ran for the better part of a year, to look at the shortcomings of the existing air defense system and search for alternatives. They could quickly see that the existing systems were not adequate and about the only proposal for what to do about it came from George Valley, Bob Everett and me, and by that time we were able to actually demonstrate using Whirlwind computer. The computer would bring in radar data about a bomber and automatically compute the instructions to intercept and send those instructions by radio to the autopilot of the fighter plane. That had all been worked out between several laboratories so during the existence of Project Charles we demonstrated the nature of the proposed system. This was about the only proposal they had to go on, and so they ended up endorsing the idea of an air defense system with digital computers at the various centers for handling radar information and issuing defense orders.

The work up through Project Charles was done at the Digital Computer Laboratory where Whirlwind was located at 211 Massachusetts Avenue, which was the building opposite the Necco factory and the building where presently the Graphics Department of MIT is located. That whole building was devoted to Whirlwind at the time, and Whirlwind was running there and the demonstrations were run from that location. The result of the Project Charles study led to the creation of the Lincoln Laboratory. Buildings were built in Lexington, and in due course the Digital Computer Laboratory on the MIT campus moved and became Division 6 of the Lincoln Laboratory. I was Division 6 Head and Robert Everett was my Associate Head, and I believe it was the largest division of the Lincoln Laboratory. There were six divisions. We were the ones in charge of designing what became known as the SAGE defense system (Semi-Automatic Ground Environment) with 30 some control centers in North America bringing data and analyzing and computing defense instructions.

Ken Olsen came in as a research assistant while he was working toward his masters degree and was a very top-notch engineer with considerable ability in the area of reliable equipment and a great deal of initiative in being able to get work done. Whirlwind had been a big project that we worked on for several years. We had also, to facilitate the research, developed a line of what we called “test equipment.” These were digital building blocks, each a different kind of digital circuit, so called flip flop that would remember amplifiers. The various building blocks of a digital system had been built as separate units or separate panels so that one could plug them together in any kind of configuration to test the things that went into Whirlwind. We then came to the point where the random-access-coincident-current memory that I had invented as a memory system was ready to be given a full scale test. We wanted to make it a very realistic test. Norman Taylor who was chief engineer and Kenneth Olsen who worked for him suggested that they would make a full-scale computer with about the capacity of Whirlwind out of this “test equipment” and use it to test the magnetic core memory. I must say I doubted that they could do it and certainly not in the nine months they said it would take, but they came very close to in fact having the so called memory test computer built in about nine months, and it was used to try out the random-access magnetic core memory which turned out to be very successful. Within a couple of months after that, we moved it into the Whirlwind computer to replace the electrostatic storage tubes that we had been using for memory but which were expensive and short-lived and not very reliable.

The ideas that became the magnetic core memory evolved over a period of two or three years. At the time, people were desperate for memory for computers. All kinds of things were being tried. As an interesting sidelight, we seriously considered renting a television link from Boston to Buffalo and back so that we could store binary digits in the transit time that it would take to make the round-trip on the television channel. This is just an illustration of how desperate people were to explore every possibility in electronic memory. If one looked at the existing memory systems that were being undertaken, there were what I would call the linear ones, the single line memories of the mercury delay line. The mercury delay line, being the one that was actually used and developed and was reliable, would take about a meter long tube of mercury. You would have a piece of electric crystal at one end where you would put shock waves into it, and another crystal at the other end where you would pick these up, and the transit time in the tube was about a millisecond, a thousandth of a second. You could put maybe something like a thousand of these shocks that were either present or absent in the tube traveling down the tube and then picked up at the far end and then retimed, reshaped and put in at the beginning end. You could keep this whole chain of a thousand binary digits circulating in the delay line and that worked. But it was slow in the sense that you could not get at something stored in the delay line until it came out the far end. A millisecond represented the access time, which is very slow as computer circuits go.
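The mercury delay line Forrester describes behaves like a fixed-length circulating buffer: a bit is only readable when it emerges at the far end, after which it is reshaped and re-injected at the start. Here is a minimal sketch of that behavior; the class name and the tiny 8-bit line are illustrative, not historical.

```python
# Sketch of a delay-line memory as a circulating ring of bits. A real
# mercury line held ~1000 pulses with a ~1 ms transit time, so average
# access time was on the order of half a millisecond.
from collections import deque

class DelayLineMemory:
    def __init__(self, n_bits=1000):
        self.line = deque([0] * n_bits)   # pulses currently in the tube

    def tick(self):
        """One pulse time: a bit emerges at the read end, is retimed
        and reshaped, and is re-injected at the input end."""
        bit = self.line.popleft()
        self.line.append(bit)
        return bit

    def write_head(self, bit):
        """Overwrite the bit that just entered the line."""
        self.line[-1] = bit

mem = DelayLineMemory(8)        # toy 8-bit line for demonstration
mem.write_head(1)
# The written bit is only readable after it travels the whole loop:
print([mem.tick() for _ in range(8)])
```

The point of the sketch is the access-time problem: the 1 written at the input end only shows up after every other bit ahead of it has cycled past the read end.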

Then there were the two-dimensional storage units, which were mostly cathode-ray tubes of varying designs where you would store on the inside of a cathode-ray tube dots that could be charged plus or minus and scanned and picked up again. In general, those were rather unreliable and had a short life, but they were used. I began to think that if we had one-dimensional storage and two-dimensional storage, what was the possibility of a three-dimensional storage? In thinking about that in about 1947, I arrived at a logical structure that satisfied the idea, but it was built around devices that were themselves not really practical. The devices were essentially little neon tubes, glow tubes, which have the high degree of nonlinearity that you must have for a memory system: you would have to put on a rather large voltage to make them glow, but they will continue to glow as you reduce the voltage down to a relatively low voltage. So you have a system where you can cause it to fire, and then you can reduce the voltage and it will continue to fire. This lent itself to a coincident-current kind of arrangement where you could activate a wire, say, in the “x” axis and another one crossing in the “y” axis, and only the glow tube at the intersection would have enough voltage on it to break down and begin to discharge. Logically, this was the basic idea. Practically, it again would be slow, but more importantly the characteristics of glow discharges change with temperature and age, and we never really tried to build a full system. We did tests on individual units, but on the whole it didn’t seem to be going anywhere and we didn’t really pursue it. It was then about two years later, in 1949, in looking at advertisements for magnetic materials that had been used by the Germans for magnetic amplifiers in their army tank turrets, magnetic material having what is called a rectangular hysteresis loop, which is highly nonlinear, that I began to ponder whether or not that could be incorporated into the logical structure that I had worked with before. Over a period of two or three months we developed how that could be done, and then over the next three years or so we in fact brought it to the point where it was a working, permanently reliable system. It was used from the mid-’50s to the mid-’70s, about 20 years, in essentially all digital computers, until it was replaced by the integrated circuits that are used today.
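The coincident-current idea described above can be sketched in a few lines: every element on an energized x or y line sees a half-select drive, but only the element at the intersection sees the sum, which is what a sharply nonlinear device (a glow tube, or later a square-loop magnetic core) responds to. A toy model, with the current values purely illustrative:

```python
# Hypothetical drive values: two half-select currents sum to the
# switching threshold; one alone does not disturb an element.
HALF, THRESHOLD = 1, 2

class CoreMemoryPlane:
    """Toy model of coincident-current selection: only the element at
    the intersection of the energized x and y lines receives full
    current and switches; half-selected elements along either line
    stay put, thanks to the rectangular (square-loop) characteristic."""

    def __init__(self, size=4):
        self.cores = [[0] * size for _ in range(size)]

    def write(self, x, y, bit):
        for i in range(len(self.cores)):
            for j in range(len(self.cores)):
                current = (HALF if i == x else 0) + (HALF if j == y else 0)
                if current >= THRESHOLD:      # only the (x, y) element switches
                    self.cores[i][j] = bit

plane = CoreMemoryPlane()
plane.write(1, 2, 1)
# Only element (1, 2) flipped; half-selected elements on row 1 and
# column 2 saw only HALF, below the switching threshold.
```

This is exactly why the rectangular hysteresis loop mattered: a material that switches gradually would be disturbed by repeated half-select currents, while a square-loop material ignores them.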

In Whirlwind there were more things, more ideas that have continued to the present time, than in any other computer at that time. It had the magnetic core memory, which dominated the field for perhaps two decades. It was a high-speed parallel machine, which is true of today’s computers; a lot of machines at that time were serial. It used cathode-ray tubes driven by the computer, and ways of interacting between the person and what you saw on the computer. We didn’t use the mouse that people use today; we used a light gun that you would hold over the face of the tube where you wanted something to happen, and it would pick up the light from the tube. It served the purpose that you now get with a mouse interacting with the computer. And there were a number of other things that I would say made it a pioneer for a lot of what is going on now. It was of very high reliability; the emphasis throughout had been on reliability.

The SAGE air defense centers had maybe between 60,000 and 80,000 vacuum tubes in each one, in a building four stories high and maybe 160 feet square. Those systems operated from the mid-to-late 1950s up until the early 1980s. The historical data on the performance of those centers show that a center was operational about 99.8% of the time. That’s better than today’s computers will generally do. Partly that was because there were two computers in each center that you could trade off between, but it was largely due to two things. One, we had raised the life of the vacuum tube in one design step from 500 hours to about 500,000 hours, something like a thousandfold increase in the life of a vacuum tube, by finding out why the tubes were failing and taking away the cause. Then second, we had added another factor of ten or more in reliability by a marginal checking system that would allow you to find any electronic component that was drifting toward the point of causing an error before it did cause an error. So one could always be monitoring the machine, the entire system, for anything that might be drifting or changing its characteristics.
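The marginal checking idea can be illustrated with a short sketch: deliberately shift an operating margin (say, a supply or bias voltage) while exercising the circuits, and a component that still works at the nominal level but fails at the reduced margin is flagged as drifting and replaced before it ever causes an error in normal operation. The function, tube names and numbers below are hypothetical, purely to show the logic:

```python
def marginal_check(components, nominal=1.0, margin=0.8):
    """Toy model of marginal checking.  Each component is represented by
    the lowest excitation level at which it still operates correctly.
    A healthy component works well below nominal; a drifting one still
    passes at nominal but fails once the margin is reduced, so it can
    be replaced before it causes an error in normal operation."""
    drifting = []
    for name, min_working_level in components.items():
        works_at_nominal = min_working_level <= nominal
        works_at_margin = min_working_level <= margin
        if works_at_nominal and not works_at_margin:
            drifting.append(name)   # flag for preventive replacement
    return drifting

# Hypothetical tube data: V102 is aging but has not yet failed.
tubes = {"V101": 0.5, "V102": 0.95, "V103": 0.6}
suspects = marginal_check(tubes)   # V102 passes nominal but fails the margin
```

The design point is preventive: the margin sweep converts a latent, intermittent failure into a repeatable test result that maintenance can act on.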

There were two major Air Force projects at one stage in the early 1950s. One, which was presumed to be a quick fix, was to depend upon existing, largely analog computers at the Willow Run Laboratory of the University of Michigan. Those people were proposing that, for a relatively small sum, $300,000,000, they could both design and install a much improved air defense system. We at the Lincoln Laboratory believed that they were grossly underestimating both the cost and how long it would take them to do it, but we took the position that if people of good reputation think that it can be done, the Air Force should continue to support it, but they should also support the work at the Lincoln Laboratory, which was on a more solid basis and would take longer but be better. In a matter of two or three years, it was evident to anyone walking through the two places that the long-term project was ahead of the short-term project at Michigan, and eventually the Air Force canceled the Michigan project and diverted what would have been production money for Michigan to finish the research and development at Lincoln Laboratory. Of course, the air defense system cost a great deal more than had been originally estimated.

As we developed a computer design for the air defense system, one would, of course, need a manufacturer to actually produce the equipment. As that time drew near, we sent out requests to a substantial number of companies, probably 15 or so, seeking any expression of interest in being considered. Perhaps five companies responded that they did want to be considered. A team consisting of me and several of my associates visited these companies in some depth, a very penetrating visit, to see who looked the most able to carry off what was an entirely new kind of project. There was no precedent; there wasn’t anybody building digital computers of this kind at that time. As a result of that survey, it was clear that IBM was far ahead of any of the others as a possible company to do the work, so we recommended them to the Air Force. We subcontracted with them to work with us on design and on building the first prototype. The Air Force then contracted with them for the building of the computers to go into the air defense system.

By 1956, the SAGE air defense system was essentially cast in its direction. The first of 30-some computer centers was nearing completion in New Jersey. The designs had been frozen. The organization to carry out the installations had been set. The computer programming for the air defense system, which was started at the Lincoln Laboratory, was turned over to the Rand Corporation, which created a new corporation, the System Development Corporation, to do the computer programming for the system. There were now many organizations and large numbers of people working on it. There was an opportunity for things to begin to change for me. I felt at that time that the pioneering days of computers were over. A lot of people today might find that surprising. In fact, I think that in the decade from 1946 to 1956 computers advanced more than in any decade since, although every decade has had tremendous advances.

I decided that I would do something different. Out of a happenstance discussion with James Killian, who was then President of MIT, came the suggestion that I consider the new management school that MIT was starting. It seemed appropriate to do something else. Robert Everett then became Head of Division 6 of Lincoln Laboratory. In time Division 6 was separated from the Lincoln Laboratory and became the MITRE Corporation, and after one or two changes Everett became President of the MITRE Corporation and stayed there until his retirement.

I went to the MIT management school partly to try to fulfill the vision of Alfred Sloan. Sloan had given ten million dollars to MIT to start a management school. Sloan had a feeling that a management school in a technical environment would develop differently from one in a liberal arts environment like Harvard, Chicago or Columbia, maybe better, but in any case different, and that it would be worth ten million dollars to run the experiment and see what would happen. The school had been officially started in 1952; it had existed in name for four years before I joined it. There hadn’t been anything done in terms of what it meant in the MIT setting. It was getting organized to teach rather typical management subjects. I think others believed, and perhaps I did too, that I would either work on the question of how business should use the newly emerging digital computers or on the field of operations research, which had already been defined; it existed then pretty much as it does now. It would be one or the other of these.

I had my first year at the management school with nothing to do except try to decide why I was there, and during that time both of those prior expectations fell away. As far as computers in business use, the field already seemed to have a lot of momentum: the manufacturers were very much in the business, and banks and insurance companies were using computers. It didn’t seem like a few of us would have an impact on the way that field was going. And the field of operations research was interesting, useful, probably worthwhile, but clearly was not dealing with the big issues of what it is that makes the difference between corporate success and corporate failure. Out of that, and out of discussions with various people in industry, I think that what happened was that my background in servomechanisms, feedback systems, computers and computer simulation came together to lead into what is now known as the field of system dynamics.

System dynamics deals with how the policies and structural relationships of a social, socioeconomic or sociotechnical system produce the behavior of that system. The field started as one devoted to corporate policy and how policy produces corporate growth and corporate stability. In 1969 it began to shift to larger social systems when John Collins, former mayor of Boston, and I worked together to apply the field of system dynamics to the growth and stagnation of cities. That work led in two directions. One was the use of system dynamics for studying the behavior of economic systems, which I’m still involved in. The other direction led to the work we did with the Club of Rome, which led to my World Dynamics book and the Limits to Growth book that dealt with how the growth of population, industrialization and pollution was coming up against the carrying capacity of the world and leading all of civilization into serious pressures and consequences. Now, more recently, I have been much involved in system dynamics becoming a foundation for kindergarten-through-12th-grade education. Not as a subject in its own right, but as a basis for every subject. So it is being used in mathematics, in physics, biology, environmental issues, economics and even in literature. I see that whole area of a public coming to understand our social and economic systems as really the frontier for the next 50 years. It is the new frontier, just as science and technology have been, you might say, the frontier for the last 150 years. The frontier for the next 50 to 100 years will be the understanding of our social, economic, population and political systems.

For a number of years the Whirlwind computer operated in the Barta Building at MIT. Then, after commercial computers became available and were fairly widely used and there were a number of them at MIT, it became really too expensive and unnecessary to maintain Whirlwind, so it was put in storage. Then, much to my surprise, Bill Wolf, who had a company here in West Concord and who always liked the machine, decided he would put it back into operation. I would not have expected that one could successfully reassemble it, but he did. He built a building here in Concord, got permission from the Navy to take the machine out of storage, put it back together and did use it for a period of time. He had a research company and he sold time on the computer. Then it got to the point where again he couldn’t keep it up, and it was on the road to being junked. But then Kenneth Olsen discovered that it was about to be junked, sent trucks to rescue it, and took it back to Digital, where I think parts of it were on display for a period of time. What is now the Computer Museum in Boston started at the Digital Equipment Corporation. There are parts of Whirlwind there at the museum. Then parts of it went to the Smithsonian Institution in Washington, where the last time I knew about it there was a display on the first floor of the Smithsonian.

It was interesting and rewarding to be on the edge of technology in that post-World War II period. I think we were aware of the ground we were breaking at that time. In 1948 we were asked by Karl Compton, President of MIT, who was also head of the Research and Development Board for the military establishment, to give him an estimate of what the role of computers would be in the military. This was before any reliable, high-speed, general-purpose computer had yet functioned. There were unreliable computers, and ones that weren’t general purpose, and ones that were slow, but there was nothing comparable to what Whirlwind was to become in all of those dimensions. Whirlwind had not yet run successfully. So we wrote a report for him on what we foresaw as the future of computers in the military for the next 15 years, which was from 1948 to 1963. That report, which is still available, ended with a large page, maybe two feet by three, with the 15 years across the top, a dozen military applications down the side, and at every intersection we filled in our estimate of what would be the status of the field, what would be spent in that year for development and what would be spent in that year for production. At that time, as I say, there were only a few experimental efforts in the field. Well, that culminated down in the right-hand corner with a total of something over a billion dollars for, I think, just the research and development part of it. We went into a meeting with the Office of Naval Research where they thought the agenda was whether we would be permitted to have another hundred thousand dollars, and we suggested they would be spending two billion. So, shall we say, there was a communication gap in that meeting. However, I think that forecast was closer to being right, in percentage terms, than what most corporations forecast today for how long it will take them to design their next computer.