What Would Nash, Shannon, Turing, Wiener and von Neumann Think?

An image of the folks mentioned above via the GAN du jour

First, as usual, i trust everyone is safe. Second, I’ve been “thoughting” a good deal about how the world is being eaten by software and, recently, machine learning. i personally have a tough time using the words artificial intelligence.

What Would Nash, Shannon, Turing, Wiener, and von Neumann Think of Today’s World?

The modern world is a product of the mathematical and scientific brilliance of a handful of intellectual pioneers, whom i call the Horsemen of The Digital Future. i consider these humans my heroes and people i aspire to emulate, whereas most of us have not accomplished one-quarter of the work product these humans have created for humanity. Among these giants are Dr. John Nash, Dr. Claude Shannon, Dr. Alan Turing, Dr. Norbert Wiener, and Dr. John von Neumann. Each of them, in their own way, laid the groundwork for concepts that now define our digital and technological age: game theory, information theory, artificial intelligence, cybernetics, and computing. But what would they think if they could see how their ideas, theories, and creations have shaped the 21st century?

A little context.

John Nash: The Game Theorist

John Nash revolutionized economics, mathematics, and strategic decision-making through his groundbreaking work in game theory. His Nash Equilibrium describes how parties, whether they be countries, companies, or individuals, can find optimal strategies in competitive situations. Today, his work influences fields as diverse as economics, politics, and evolutionary biology. NOTE: Computational Consensus Not So Hard; Carbon (Human) Consensus Nigh Impossible.

The Nash equilibrium is the set of strategies

    \[(E_i^*,E_j^*)\]

 

such that, if both players adopt it, neither player can achieve a higher payoff by changing strategies. Therefore, two rational agents should be expected to pick the Nash equilibrium as their strategy.
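
As a toy illustration of the definition above (my own minimal sketch, not anything from Nash’s papers), the following Python snippet enumerates the pure-strategy Nash equilibria of a 2x2 game; the payoff matrices are illustrative values forming a standard Prisoner’s Dilemma:

import numpy as np

# Payoff matrices for a 2x2 game: rows are player 1's strategies, columns
# are player 2's. Entry [i, j] is the respective player's payoff.
A = np.array([[3, 0],
              [5, 1]])  # player 1 (illustrative Prisoner's Dilemma values)
B = np.array([[3, 5],
              [0, 1]])  # player 2

# A profile (i, j) is a pure-strategy Nash equilibrium when neither player
# can improve their own payoff by unilaterally deviating.
equilibria = [(i, j)
              for i in range(2)
              for j in range(2)
              if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max()]

print(equilibria)  # [(1, 1)] -- mutual defection, the classic result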

If Nash were alive today, he would be amazed at how game theory has permeated decision-making in technology, particularly in algorithms used for machine learning, cryptocurrency trading, and even optimizing social networks. His equilibrium models are at the heart of competitive strategies used by businesses and governments alike. With the rise of AI systems, Nash might ponder the implications of intelligent agents learning to “outplay” human actors and question what ethical boundaries should be set when AI is used in geopolitical or financial arenas.

Claude Shannon: The Father of Information Theory

Claude Shannon’s work on information theory is perhaps the most essential building block of the digital age. His concept of representing and transmitting data efficiently set the stage for everything from telecommunications to the Internet as we know it. Shannon predicted the rise of digital communication and laid the foundations for the compression and encryption algorithms protecting our data. He is also the father of my favorite equation, mapping the original entropy equation from thermodynamics to channel capacity:

    \[H=-\sum_{i=1}^{N} P_i \log_2 P_i\]

The sheer elegance and magnitude are unprecedented. If he were here, Shannon would witness the explosion of data, in quantities and at speeds far beyond what was conceivable in his era. The Internet of Things (IoT), big data analytics, 5G/6G networks, and quantum computing are evolutions directly related to his early ideas. He might also be interested in cybersecurity challenges, where information theory is critical in protecting global communications. Shannon would likely marvel at the sheer volume of information we produce yet be cautious of the potential misuse and the ethical quandaries regarding privacy, surveillance, and data ownership.
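
As a quick illustration of the equation above (a toy sketch, nothing more), here is the entropy in bits of a few discrete distributions:

import numpy as np

def shannon_entropy(p):
    # Entropy in bits of a discrete distribution p; 0 * log2(0) is taken as 0.
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

print(shannon_entropy([0.5, 0.5]))   # 1.0 bit: a fair coin
print(shannon_entropy([0.9, 0.1]))   # ~0.47 bits: a biased coin is less surprising
print(shannon_entropy([0.25] * 4))   # 2.0 bits: uniform over four symbols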

Alan Turing: The Architect of Artificial Intelligence

Alan Turing’s vision of machines capable of performing any conceivable task laid the foundation for modern computing and artificial intelligence. His Turing Machine is still a core concept in the theory of computation, and his famous Turing Test continues to be a benchmark in determining machine intelligence.

In today’s world, Turing would see his dream of intelligent machines realized—and then some. From self-driving cars to voice assistants like Siri and Alexa, AI systems are increasingly mimicking human capabilities in specific tasks like data analysis, pattern recognition, and simple problem-solving. While Turing would likely be excited by this progress, he might also wrestle with the ethical dilemmas arising from AI, such as autonomy, job displacement, and the dangers of creating highly autonomous AI systems, as well as calling bluff on claims that LLM systems reason in the same manner as human cognition, given that they base their results on probabilistic convex optimizations. His work on breaking the Enigma code might inspire him to delve into modern cryptography and cybersecurity challenges as well. His reaction-diffusion model, the Turing morphogenesis equations, is foundational in explaining biological systems:

Turing’s reaction-diffusion system is typically written as a system of partial differential equations (PDEs):

    \[\frac{\partial u}{\partial t} = D_u \nabla^2 u + f(u, v),\]

    \[\frac{\partial v}{\partial t} = D_v \nabla^2 v + g(u, v),\]

where:

  • $u$ and $v$ are concentrations of two chemical substances (morphogens),
  • $D_u$ and $D_v$ are diffusion coefficients for $u$ and $v$,
  • $\nabla^2$ is the Laplacian operator, representing spatial diffusion,
  • $f(u, v)$ and $g(u, v)$ are reaction terms representing the interaction between $u$ and $v$.
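
For the numerically inclined, here is a minimal 1-D finite-difference sketch of such a system. The reaction terms follow the Gray-Scott model, one common concrete choice of f and g; the feed rate F, kill rate k, and diffusion coefficients are illustrative values i picked, not anything from Turing’s paper:

import numpy as np

# Minimal 1-D Gray-Scott reaction-diffusion integration (illustrative values).
n, steps, dt, dx = 200, 5000, 1.0, 1.0
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.065

u = np.ones(n)
v = np.zeros(n)
v[n // 2 - 5 : n // 2 + 5] = 0.5  # perturb the uniform state to seed a pattern

def laplacian(a):
    # Discrete Laplacian with periodic boundary conditions.
    return (np.roll(a, 1) - 2 * a + np.roll(a, -1)) / dx**2

for _ in range(steps):
    uvv = u * v * v
    u += dt * (Du * laplacian(u) - uvv + F * (1 - u))  # du/dt = Du lap(u) + f(u,v)
    v += dt * (Dv * laplacian(v) + uvv - (F + k) * v)  # dv/dt = Dv lap(v) + g(u,v)

print(np.round(v, 2))  # spatial structure emerges from the near-uniform start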

In addition to this, his contributions to cryptography and game theory alone are unfathomable.
In his famous paper, “Computing Machinery and Intelligence,” Turing posed the question, “Can machines think?” He proposed the Turing Test as a way to assess whether a machine can exhibit intelligent behavior indistinguishable from a human. This test has been a benchmark in AI for evaluating a machine’s ability to imitate human intelligence.

Given the recent advances made with large language models, i believe he would find them amusing, but he would not grant that they think or reason.

Norbert Wiener: The Father of Cybernetics

Norbert Wiener’s theory of cybernetics explored the interplay between humans, machines, and systems, particularly how systems could regulate themselves through feedback loops. His ideas greatly influenced robotics, automation, and artificial intelligence. He wrote the books “Cybernetics” and “The Human Use of Human Beings”. During World War II, his work on the automatic aiming and firing of anti-aircraft guns caused Wiener to investigate information theory independently of Claude Shannon and to invent the Wiener filter. (The now-standard practice of modeling an information source as a random process—in other words, as a variety of noise—is due to Wiener.) Initially, his anti-aircraft work led him to write, with Arturo Rosenblueth and Julian Bigelow, the 1943 article “Behavior, Purpose and Teleology.” He was also a complete pacifist. What was said about those who can hold two opposing views?

If Wiener were alive today, he would be fascinated by the rise of autonomous systems, from drones to self-regulated automated software, and the increasing role of cybernetic organisms (cyborgs) through advancements in bioengineering and robotic prosthetics. He, I would think, would also be amazed that we could do real-time frequency domain filtering based on his theories. However, Wiener’s warnings about unchecked automation and the need for human control over machines would likely be louder today. He might be deeply concerned about the potential for AI-driven systems to exacerbate inequalities or even spiral out of control without sufficient ethical oversight. The interaction between humans and machines in fields like healthcare, where cybernetics merges with biotechnology, would also be a keen point of interest for him.
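
As a small nod to that point, here is a toy denoising sketch using SciPy’s adaptive Wiener filter; the signal, noise level, and window size are all made up for illustration:

import numpy as np
from scipy.signal import wiener

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 5 * t)                   # the signal we wish we had
noisy = clean + rng.normal(scale=0.4, size=t.size)  # what we actually measure

denoised = wiener(noisy, mysize=15)  # adaptive local Wiener filtering

# The mean-squared error should drop noticeably after filtering.
print(np.mean((noisy - clean) ** 2), np.mean((denoised - clean) ** 2))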

John von Neumann: The Architect of Modern Computing

John von Neumann’s contributions span so many disciplines that it’s difficult to pinpoint just one. He’s perhaps most famous for his von Neumann architecture, the foundation of most modern computer systems, and his contributions to quantum mechanics and game theory. His visionary thinking on self-replicating machines even predated discussions of nanotechnology.

Von Neumann would likely be astounded by the ubiquity and power of modern computers. His architectural design is the backbone of nearly every device we use today, from smartphones to supercomputers. He would also find the developments in quantum computing significant, aligning with his work in quantum mechanics. As someone who worked on the Manhattan Project (alongside Oppenheimer), von Neumann might also reflect on the dual-use nature of technology—the incredible potential of AI, nuclear power, and autonomous weapons to both benefit and harm humanity. His early concerns about the potential for mutual destruction could be echoed in today’s discussions on AI governance and existential risks.

What Would They Think Overall?

Together, these visionaries would undoubtedly marvel at how their individual contributions have woven into the very fabric of today’s society. The rapid advancements in AI, data transmission, computing power, and autonomous systems would be thrilling, but they might also feel a collective sense of responsibility to ask:

Where do we go from here?

Once again, Oh Dear Reader, you pre-empt me….

A colleague sent me this paper, which was the impetus for this blog:

My synopsis of said paper:


“The Tensor as an Informational Resource” discusses the mathematical and computational importance of tensors as resources, particularly in quantum mechanics, AI, and computational complexity. The authors propose new preorders for comparing tensors and explore the notion of tensor rank and transformations, which generalize key problems in these fields. This paper is vital for understanding how the foundational work of Nash, Shannon, Turing, Wiener, and von Neumann has evolved into modern AI and quantum computing. Tensors offer a new frontier in scientific discovery, building on their theories and pushing the boundaries of computational efficiency, information processing, and artificial intelligence. It’s an extension of their legacy, providing a mathematical framework that could revolutionize our interaction with quantum information and complex systems. It is fundamental to systems that appear to learn, where information-theoretic transforms are the very Rosetta Stone of how we perceive the world through our perceptual filters of reality.

This shows the continuing relevance of ALL their ideas in today’s rapidly advancing AI and fluid computing technological landscape.

They might question whether today’s technology has outpaced ethical considerations and whether the systems they helped build are being used for the betterment of all humanity. Surveillance, privacy, inequality, and autonomous warfare would likely weigh heavily on their minds. Yet, their boundless curiosity and intellectual rigor would inspire them to continue pushing the boundaries of what’s possible, always seeking new answers to the timeless question of how to create the future we want and live better, more enlightened lives through science and technology.

Their legacy lives on, but so does their challenge to us: to use the tools they gave us wisely for the greater good of all.

Or would they be dismayed that we use all of this technology to make a PowerPoint to save time so we can watch TikTok all day?

Until Then,

#iwishyouwater <- click and see folks who got the memo

𝕋𝕖𝕕 ℂ. 𝕋𝕒𝕟𝕟𝕖𝕣 𝕁𝕣. (@tctjr) / X

Music To Blog By: Bach: Mass in B Minor, BWV 232. By far my favorite composer. The John Eliot Gardiner and Monteverdi Choir version circa 1985 is astounding.

Snake_Byte:[14] Coding In Philosophical Frameworks

DALL-E Generated Philosopher

Your vision will only become clear when you can look into your heart. Who looks outside, dreams; who looks inside, awakes. Knowing your own darkness is the best method for dealing with the darknesses of other people. We cannot change anything until we accept it.

~ C. Jung

(Caveat Emptor: This blog is rather long in the snake’s tooth and actually more like a CHOMP instead of a BYTE. tl;dr)

First, Oh Dear Reader, i trust everyone is safe. Second, it sure feels like we are living in an age of Deus Ex Machina, doesn’t it? Third, with this in mind i wanted to write a Snake_Byte that i have been “thoughting” about for quite some time but never really knew how to approach, if truth be told. i can’t take full credit for this ideation nor do i actually want to claim any ideation. Jay Sales and i were talking for a long time after, i believe, i gave a presentation on creating Belief Systems using BeliefNetworks or some such nonsense.

The net of the discussion was we both believed that in the future we will code in philosophical frameworks.

Maybe we are here?

So how would one go about coding an agent-based distributed system that allowed one to create an agent or a piece of evolutionary code to exhibit said behaviors of a philosophical framework?

Well we must first attempt to define a philosophy and ensconce it into a quantized explanation.

Stoicism seemed to me, at least, the best first mover here, as it appeared to be the tersest by definition.

So first, for those not familiar with said philosophy: Marcus Aurelius was probably the most famous practitioner of Stoicism. i have put some references that i have read at the end of this blog [1].

Stoicism is a philosophical school that emphasizes rationality, self-control, and inner peace in the face of adversity. In thinking about this, i figured that to build an agent-based software system that embodies Stoicism, we would need to consider several key aspects of this philosophy.

  • Stoics believe in living in accordance with nature and the natural order of things. This could be represented in an agent-based system through a set of rules or constraints that guide the behavior of the agents, encouraging them to act in a way that is in harmony with their environment and circumstances.
  • Stoics believe in the importance of self-control and emotional regulation. This could be represented in an agent-based system through the use of decision-making algorithms that take into account the agent’s emotional state and prioritize rational, level-headed responses to stimuli.
  • Stoics believe in the concept of the “inner citadel,” or the idea that the mind is the only thing we truly have control over. This could be represented in an agent-based system through a focus on internal states and self-reflection, encouraging agents to take responsibility for their own thoughts and feelings and strive to cultivate a sense of inner calm and balance.
  • Stoics believe in the importance of living a virtuous life and acting with moral purpose. This could be represented in an agent-based system through the use of reward structures and incentives that encourage agents to act in accordance with Stoic values such as courage, wisdom, and justice.

So given a definition of Stoicism, we then need to create a quantized or discrete model of those behaviors that encompass a “Stoic Individual”. i figured we could use the evolutionary library called DEAP (Distributed Evolutionary Algorithms in Python). DEAP contains utilities for both genetic algorithms and genetic programming, as well as evolution strategy methods, for this type of programming.

Genetic algorithms and genetic programming are both techniques used in artificial intelligence and optimization, but they have some key differences.

This is important as people confuse the two.

Genetic algorithms are a type of optimization algorithm that use principles of natural selection to find the best solution to a problem. In a genetic algorithm, a population of potential solutions is generated and then evaluated based on their fitness. The fittest solutions are then selected for reproduction, and their genetic information is combined to create new offspring solutions. This process of selection and reproduction continues until a satisfactory solution is found.

On the other hand, genetic programming is a form of machine learning that involves the use of genetic algorithms to automatically create computer programs. Instead of searching for a single solution to a problem, genetic programming evolves a population of computer programs, which are represented as strings of code. The programs are evaluated based on their ability to solve a specific task, and the most successful programs are selected for reproduction, combining their genetic material to create new programs. This process continues until a program is evolved that solves the problem to a satisfactory level.

So the key difference between genetic algorithms and genetic programming is that genetic algorithms search for a solution to a specific problem, while genetic programming searches for a computer program that can solve the problem. Genetic programming is therefore a more general approach, as it can be used to solve a wide range of problems, but it can also be more computationally intensive due to the complexity of evolving computer programs [2].
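
To make the distinction concrete before we return to the genetic algorithm, here is a minimal DEAP genetic programming sketch, closely following the library’s documented symbolic regression example: it evolves programs (expression trees) rather than fixed-length attribute vectors. The target function x**2 + x is my own illustrative choice; run it in a fresh session, since it registers its own creator classes:

import operator
from deap import algorithms, base, creator, gp, tools

# Programs are trees over these primitives, with one input variable x.
pset = gp.PrimitiveSet("MAIN", 1)
pset.addPrimitive(operator.add, 2)
pset.addPrimitive(operator.sub, 2)
pset.addPrimitive(operator.mul, 2)
pset.renameArguments(ARG0="x")

creator.create("FitnessMin", base.Fitness, weights=(-1.0,))
creator.create("Tree", gp.PrimitiveTree, fitness=creator.FitnessMin)

toolbox = base.Toolbox()
toolbox.register("expr", gp.genHalfAndHalf, pset=pset, min_=1, max_=3)
toolbox.register("individual", tools.initIterate, creator.Tree, toolbox.expr)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
toolbox.register("compile", gp.compile, pset=pset)

def eval_tree(individual):
    # Fitness is the squared error against the target x**2 + x on sample points.
    func = toolbox.compile(expr=individual)
    return sum((func(x) - (x * x + x)) ** 2 for x in range(-10, 10)),

toolbox.register("evaluate", eval_tree)
toolbox.register("select", tools.selTournament, tournsize=3)
toolbox.register("mate", gp.cxOnePoint)
toolbox.register("expr_mut", gp.genFull, min_=0, max_=2)
toolbox.register("mutate", gp.mutUniform, expr=toolbox.expr_mut, pset=pset)

pop = toolbox.population(n=100)
algorithms.eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.1, ngen=20, verbose=False)
print(tools.selBest(pop, k=1)[0])  # typically add(mul(x, x), x) or an equivalent tree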

So returning back to the main() function as it were, we need to create a genetic program that models Stoic behavior using the DEAP library.

First, we need to define the problem and the relevant fitness function. This is where the quantized part comes into play. Since Stoic behavior involves a combination of rationality, self-control, and moral purpose, we could define a fitness function that measures an individual’s ability to balance these traits and act in accordance with Stoic values.

So let’s get to the code.

To create a genetic program that models Stoic behavior using the DEAP library in a Jupyter Notebook, we first need to install the DEAP library. We can do this by running the following command in a code cell:

pip install deap

Next, we can import the necessary modules and functions:

import random
import operator
import numpy as np
from deap import algorithms, base, creator, tools

We can then define the problem and the relevant fitness function. Since Stoic behavior involves a combination of rationality, self-control, and moral purpose, we could define a fitness function that measures an individual’s ability to balance these traits and act in accordance with Stoic values.

Here’s an example of how we might define a “fitness function” for this problem:

# Define the fitness function. NOTE: i am open to other ways of defining this and other models;
# the definition of what constitutes a behavior needs to be quantized or discretized, and
# trying to do that yields a lossy function most times. It's also self-referential.

def fitness_function(individual):
    # Calculate the fitness based on how closely the individual's behavior matches stoic principles
    fitness = 0
    # Add points for self-control, rationality, focus, resilience, and adaptability can haz Stoic?
    fitness += individual[0]  # self-control
    fitness += individual[1]  # rationality
    fitness += individual[2]  # focus
    fitness += individual[3]  # resilience
    fitness += individual[4]  # adaptability
    return fitness,

# Define the genetic programming problem
creator.create("FitnessMax", base.Fitness, weights=(1.0,))
creator.create("Individual", list, fitness=creator.FitnessMax)

# Initialize the genetic algorithm toolbox
toolbox = base.Toolbox()

# Define the genetic operators
toolbox.register("attribute", random.uniform, 0, 1)
toolbox.register("individual", tools.initRepeat, creator.Individual, toolbox.attribute, n=5)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
toolbox.register("evaluate", fitness_function)
toolbox.register("mate", tools.cxTwoPoint)
toolbox.register("mutate", tools.mutGaussian, mu=0, sigma=0.1, indpb=0.1)
toolbox.register("select", tools.selTournament, tournsize=3)

# Run the genetic algorithm
population = toolbox.population(n=10)
for generation in range(20):
    offspring = algorithms.varAnd(population, toolbox, cxpb=0.5, mutpb=0.1)
    fits = toolbox.map(toolbox.evaluate, offspring)
    for fit, ind in zip(fits, offspring):
        ind.fitness.values = fit
    population = toolbox.select(offspring, k=len(population))
    
# Print the best individual found
best_individual = tools.selBest(population, k=1)[0]

print("Best Individual:", best_individual)
 

Here, we define the genetic programming parameters (i.e., the traits that we’re optimizing for) using the toolbox.register function. We also define the evaluation function (fitness_function), genetic operators (mate and mutate), and selection operator (select) using DEAP’s built-in functions.

We then define the fitness function that the genetic algorithm will optimize. This function takes an “individual” (represented as a list of five attributes) as input, and calculates the fitness based on how closely the individual’s behavior matches stoic principles.

We then define the genetic programming problem via the quantized attributes, and initialize the genetic algorithm toolbox with the necessary genetic operators.

Finally, we run the genetic algorithm for 20 generations and print the best individual found. The selBest function is used to select the top-fitness individual, or a “behavior” if you will, for that generation based on the iterations or epochs. This individual represents an agent that mimics the philosophy of Stoicism in software, with behavior that is self-controlled, rational, focused, resilient, and adaptable.

Best Individual: [0.8150247518866958, 0.9678037028949047, 0.8844195735244268, 0.3970642186025506, 1.2091810770505023]

This denotes the best individual with those best-balanced attributes, or in this case the Most Stoic.

As i noted, this is a first attempt at this problem; i think there is a better way, with a full GP solution as well as a tunable fitness function. In a larger distributed system you would then use this agent as a framework amongst other agents you would define.

i at least got this out of my head.

until then,

#iwishyouwater <- Alexey Molchanov and Dan Bilzerian at Deep Dive Dubai

Muzak To Blog By: Phil Lynott, “The Philip Lynott Album”. If you don’t know who this is, there is a statue of him in Ireland that i walked a long way with my co-founder, Lisa Maki, a long time ago to pay homage to the great Irish singer of the amazing band Thin Lizzy. Alas, they took Phil to be cleaned that day. At least we got to walk and talk, and i’ll never forget that day. This is one of his solo efforts, and i believe he is one of the best artists of all time. The first track is deeply emotional.

References:

[1] A list of books on Stoicism -> click HERE.

[2] Genetic Programming (On the Programming of Computers by Means of Natural Selection), by Professor John R. Koza. There are multiple volumes, i think four, and i have all of them, but this is a great place to start, along with the DEAP documentation. Even just optimizing transcendental functions, what GP comes out with using arithmetic is mind-blowing.

Computing The Human Condition – Project Noumena (Part 2)

In the evolution of a society, continued investment in complexity as a problem-solving strategy yields a declining marginal return.

Joseph A. Tainter

Someone asked me if from now on my blog will only be about Project_Noumena – on the contrary.

I will be interspersing subject matter within Parts 1 to (N) of Project_Noumena. To be transparent, at this juncture i am not sure where it will end or if there is even a logical MVP 1.0. As with open-source systems and frameworks, technically one never achieves V1.0 as the systems evolve. i tend to believe this will be the case with Project Noumena. i recently provided a book review on CatB and have a blog on Recurrent Neural Networks with respect to Multiple Time Scale Prediction in the works, so stuff is proceeding.

To that end, i would love comments and suggestions as to anything you would like my opinion on or for me to write about in the comments section.  Also feel free to call me out on typos or anything else you see in error.

Further within Project Noumena there are snippets that could be shorter blogs as well.  Look at Project Noumena as a fractal-based system.

Now on to the matter at hand.

In the previous blog, Computing The Human_Condition – Project Noumena (Part 1), i discussed the initial overview of the model from the book World Dynamics. i will take a part of that model, which is what i call the main Human_Do_Loop(), and the main attributes of the model: Birth and Death of Humans. One must ask: if we didn’t have humans, would we have to be concerned with such matters as societal collapse? i don’t believe animals are concerned with such existential crises, so my answer is a resounding – NO. We will be discussing such existential issues in this blog, although i will address them in depth in future writings.

Over the years i have been asking myself: is this a biological model by definition? Meaning, do we have cellular components involved only? Is this biological modeling at its very essence? If we took the cell-based organisms out of the equation, what would we still have as far as models on Earth?

While i told myself i wouldn’t get too existential here, and i do want to focus on the models and then codebases, i continually check the initial conditions of these systems, as for most systems they dictate the response for the rest of the future operations of said systems. Thus for biological systems, are there physical parameters that govern the initial exponential growth rate? Can we model coarse-grained behavior with power laws and logistic curves? Is Bayesian reasoning biologically plausible at a behavioral level or at a neuronal level? Given that, what are the atomic units that govern these models?

These are just a sampling of initial condition questions i ask myself as i evolve through this process. 

So with that long-winded introduction, and i trust i didn’t lose you, Oh Dear Reader, let’s hop into some specifics.

Birth and Death Rates

The picture from the book depicts basic birth and death loops in the population sector. In the case of these loops, they are generating positive feedback, which causes growth. Thus an increase in population P causes an increase in birth rate BR. This, in turn, causes population P to further increase. The positive feedback loop, if left to its own devices, would create an exponentially growing situation. As i said in the first blog and will continue to say, we seem to have treated exponential growth as a net positive over the years in the technology industry. In the case of basic population dynamics with no constraints, exponential growth is not a net positive outcome.
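
A toy sketch of that positive loop in Python (the rates are illustrative stand-ins, not Forrester’s calibrated coefficients): with nothing to check it, the level compounds exponentially:

# Unconstrained birth/death feedback: dP/dt = (BR - DR) * P.
P = 3.6e9                # world population, circa 1970
br, dr = 0.03, 0.01      # illustrative per-year birth and death rates

for year in range(1970, 2071):
    if (year - 1970) % 25 == 0:
        print(year, f"{P:.3e}")
    P += br * P - dr * P  # the positive loop: more population, more births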

Once again, why start with simple models? The human mind is phenomenal at perceiving pressures, fears, greed, homeostasis, and other human aspects and characteristics, and at attempting a structure that gives, say, the best fit to a situation and categorizing these as attributes thereof. However, the human mind is rather poor at predicting dynamical systems behaviors, which is where the models come into play, especially with social interactions and what i am attempting to define from a self-organizing theory standpoint.

The next sets of loops with the most pronounced behavior are a Pollution loop and a Crowding loop. As pollution POL increases, one hopes that, up to a point, nature absorbs and fixes the pollution; otherwise it is a completely positive feedback loop, and this, in turn, creates over-pollution, the effects of which we are already seeing around the world. One can then couple this with the amount of crowding humans can tolerate.

Population, Birth Rate, Pollution

We see this behavior in urban sprawl areas when we have extreme heat or extreme cold, or let’s say extreme pandemics. If the population rises, the crowding ratio increases, the birth-rate multiplier declines, and birth rates fall. The increasing death rate and the falling birth rate are powerful system-dynamics stabilizers, coupled with pollution. This in turn obviously has an effect on food supplies. One can easily deduce that these seemingly simple coefficients within the relative feedback loops create oscillations, exponential growth, or exponential decay. The systems, while they seem large and rather stable, are very sensitive to slight variations. If you are familiar with NetLogo, it is a great agent-based modeling language. I picked a simple pollution model wherein we can select the number of people, the birth rate, and the tree-planting rate.

population dynamics with pollution

As you can see, without delving into the specifics, after 77 years it doesn’t look too promising. i’ll either be using Python or NetLogo or a combination of both to extend these models as we add other references.

Ok enough for now.

Until Then,

#iwishyouwater

@tctjr

Book Review: The Cathedral and The Bazaar (Musings On Linux and Open Source By An Accidental Revolutionary)

“Joy, humor, and playfulness are indeed assets;” 

~ Eric S. Raymond

As of late, i’ve been asked by an extremely divergent set of individuals: what does “Open Source Software” mean?

That is a good question. While i understand the words, and words do have meanings, i am not sure it’s the words that matter here. Many people who ask me that question hear “open source” and hear or think “free,” which is not the case.

Also, if you have been on LinkedIn at all, you will see #Linux, #LinuxFoundation and #OpenSource tagged constantly in your feeds.

Which brings me to the current blog and book review.

(CatB), as it is affectionately known in the industry, started out as and still is a manifesto, accessible via the world wide web. It was originally published in 1997 on the world wide wait and then in print form circa 1999. Then in 2001 came a revised edition with a foreword by Bob Young, the founding chairman and CEO of Red Hat.

Being that i prefer to use plain ole’ books, we are reviewing the physical revised and extended paperback edition in this blog, circa 2001. Of note for the picture: it has some wear and tear.

To start off, as you will see from the cover, there is a quote by Guy Kawasaki, Apple’s first Evangelist:

“The most important book about technology today, with implications that go far beyond programming.”

This is completely true.  In the same train of thought, it even goes into the aspects of propriety and courtesy within conflict environments and how such environments are of a “merit not inherit” world, and how to properly respond when you are in vehement disagreement.  

To relate it to the book review: What is a cathedral development versus a bazaar environment?

The cathedral is a tip of the fedora, if you will, to the authoritarian view of the world, where everything is very structured and there are only a few at most who will approve moving the codebase forward.

The bazaar refers to the many: the many coding and contributing in a swarm-like fashion.

In this book, closed source is described as a cathedral development model and open source as a bazaar development model. A cathedral is vertically and centrally controlled and planned. Process and governance rule the project – not coding.  The cathedral is homeostatic. If you build or rebuild Basilica Sancti Petri within Roma you will not be picking it up by flatbed truck and moving it to Firenze.

The foreword in the 2001 edition is written by Bob Young, co-founder and original CEO of Red Hat. He writes:

“There have always been two things that would be required if open-source software was to materially change the world; one was for open-source software to become widely used and the other was the benefits this software development model supplied to its users had to be communicated and understood.”

Users here are an interesting target.  Users could be developers and they could be end-users of warez.  Nevertheless, i believe both conditions have been met accordingly.  

i co-founded a machine learning and NLP services company in 2007, wherein i had the epiphany after my “second” read of CatB that the future is in fact open source. i put second in quotes as the first time i read it, back in 1998, it wasn’t really an in-depth read, nor had i fully internalized it, while i was working at Apple in the CPU software department on OS9/OSX and at the same time knowing full well that OSX was based on the Mach kernel. The Mach kernel is often mentioned as one of the earliest examples of a microkernel. However, not all versions of Mach are microkernels. Mach’s derivatives are the basis of the operating system kernel in GNU Hurd and of Apple’s XNU kernel used in macOS, iOS, iPadOS, tvOS, and watchOS.

That being said, after years of working mainly with closed source systems, in 2007 i re-read CatB. i literally had a deep epiphany that the future of all development would be open source distributed machine learning – everywhere.

Then i read it recently – deeply – a third time.  This time nearly every line in the book resonates.

The third time with almost anything seems to be the charm. This third time through, i realized not only is this a treatise for the open-source movement, it is a call to arms, if you will, for the entire developer community to behave appropriately, with propriety and courtesy, in a highly matrixed collaborative environment known as the bazaar.

The most obvious question is:  Why should you care?  i’m glad you asked.

The reason you care is that you are part of the information economy. The top market cap companies are all information-theoretic, developer-first companies. This means that these companies build things so others can build things. Software is truly eating the world. Think in terms of the recent pandemic. Work (code) is being created at an amazing rate due to the fact that the information work economy is distributed and essentially schedule-free. She who has distributed wins, and she who can code anytime wins. This also means that you are interested in building world-class software, and the building of this software is now a decentralized, peer-reviewed, transparent process.

The book is organized around Raymond’s various essays. It is important to note that just as software is an evolutionary process by definition, so are the essays in this book. They can also be found online. The original collection of essays dates back to 1992 on the internet: “A Brief History Of Hackerdom.”

The book is not a “how-to” cookbook but rather what i call a “why to” map of the terrain.  While you can learn how to hack and code i believe it must be in your psyche.  The book also uses the term “hacker” in a positive sense to mean one who creates software versus one who cracks software or steals information.

While the history and the methodology are amazing to me, the cogent commentary on the reasoning behind why hackers go into open source shows motivations that vary as widely as ice cream flavors.

Raymond goes into the theory of incentives with respect to the instinctive wiring of human beings.

“The verdict of history seems to be free-market capitalism is the globally optimal way to cooperate for economic efficiency; perhaps in a similar way to cooperate for generating (and checking!) high-quality creative work.”

He categorizes command hierarchy, exchange economy, and gift culture to address these incentives.  

Command hierarchy:

Goods are allocated in a scarce economy model by one central authority.

Exchange Economy:

The allocation of scarce goods is accomplished in a decentralized manner allowing scale through trade and voluntary cooperation.

Gift Culture:

This is very different than the other two methods or cultures.  Abundance makes command and control relationships difficult to sustain.  In gift cultures, social status is determined not by what you control but by what you give away.

It is clear that if we define the open source hackerdom, it would be a gift culture. (It is beyond the current scope of this blog, but it would be interesting to do a neuroscience project on the analysis of open source versus closed source hackers’ brain chemistry as they work throughout the day.)

Given these categories, the essays then go on to define the written and many times unwritten rules (read: secrets) that operate within the open-source world via a reputation game. If you are getting the idea it is tribal, you are correct. Interestingly enough, the open source world in many cases has very divergent views on all prickly things within the human condition, such as religion and politics, but one thing is a constant – ship high-quality code.

Without a doubt, the most glaring cogent commentary comes in a paragraph from the essay “The Magic Cauldron,” entitled “Open Source And Strategic Business Risk.”

“Ultimately the reasons open source seems destined to become a widespread practice have more to do with customer demand and market pressures than with supply-efficiencies for vendors.”

And further:

“Put yourself for the moment in the position of a CTO at a Fortune 500 corporation contemplating a build or upgrade of your firm’s IT infrastructure.  Perhaps you need to choose a network operating system to be deployed enterprise-wide; perhaps your concerns involve 24/7 web service and e-commerce, perhaps your business depends on being able to field high-volume, high-reliability transaction databases.  Suppose you go the conventional closed-source route.  If you do, then you put your firm at the mercy of a supplier monopoly – because by definition there is only one place you can go to for support, bug fixes, and enhancements.  If the supplier doesn’t perform, you will have no effective recourse because you are effectively locked by your initial investment.”

FURTHER:

“The truth is this: when your key business processes are executed by opaque blocks of bits that you can’t even see inside (let alone modify) you have lost control of your business.”

“Contrast this with the open-source choice.  If you go this route, you have the source code, and no one can take that away from you. Instead of a supplier monopoly with a choke-hold on your business, you now have multiple service companies bidding for your business – and you not only get to play them against each other, but you also have the option of building your own captive support organization if that looks less expensive than contracting out.  The market works for you.”

“The logic is compelling; depending on closed-source code is an unacceptable strategic risk. So much so that I believe it will not be very long until closed-source single-vendor acquisitions, when there is an open source alternative available, will be viewed as a fiduciary irresponsibility, and rightly grounds for a shareholder lawsuit.”

THIS WAS WRITTEN IN 1997. LOOK AROUND THE WORLD WIDE WAIT NOW… WHAT DO YOU SEE?  

Open Source – full stop.

i will add that there was no technical explanation here, only business incentive and responsibility to the company you are building, rebuilding, or scaling. Further, this allows true software malleability and reach, which is the very reason for software.

i will also go out on a limb here and say that if you are a software corporation, one that creates software, you can play the monopoly and open-source models against each other within your corporation. Agility and speed to ship code are the only things that matter these days. Where is your GitHub? Or why is this not shipping TODAY?

This brings me to yet another amazingly prescient prediction in the book: Raymond says that applications are ultimately where we will land for monetary scale. Well yes, there is an app for that….

While i have never met Eric S. Raymond, he is a legend in the field. We have much to thank him for in the areas of software. If you have not read CatB and you work in the information sector, do yourself a favor: buy it today.

As a matter of fact here is the link: The Cathedral & the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary

Muzak To Blog To:  “Morning Phase” by Beck 

Resources:

http://www.opensource.org

https://www.apache.org/foundation/

Computing The Human Condition – Project Noumena (Part 1)

“I am putting myself to the fullest possible use, which is all I think any conscious entity can ever hope to do.” ~ HAL 9000

“If you want to make the world a better place take a look at yourself and then make a change.” ~ MJ.

First and foremost with this blog i trust everyone is safe.  The world is in an interesting place, space, and time both physically and dare i say collectively – mentally.

A Laundry List

Introduction

This past week we celebrated Earth Day. i believe i heard it was the 50th year of Earth Day. While i applaud the efforts and longevity, rather than a single day we should have Earth Day every day. Further, just “thoughting” about or tweeting about Earth Day – while it may wake up the posterior lobe of your pituitary gland and secrete some oxytocin, creating the warm fuzzies for you – really doesn’t create an action furthering Earth Day. (Much like typing /giphy YAY! in Slack.)

As such, i decided to embark on a multipart blog about something i have been “thinking” about, which i call an Ecological Computing System. Then the more i thought about it, why stop at ecology? We are able to model and connect essentially anything; we now have models for the brain that, while coarse-grained, can account for gross behaviors; we have tons of data on buying habits and advertisements; and everything is highly mobile and distributed. Machine learning, which can optimize, classify, and predict with extremely high dimensionality, is no longer an academic exercise.

Thus, i suppose, taking it one step further from ecology, what would differentiate it from other efforts is that <IT> would actually attempt to provide a compute framework that would compute The Human Condition. I am going to call this effort Project Noumena. Kant, the eminent thinker of 18th-century Germany, defined Noumena as a thing as it is in itself, as distinct from a thing as it is knowable by the senses through phenomenal attributes, and proposed that experience was a product of the mind.

My impetus for this is manifold:

  • i love the air, water, trees, and animals,
  • i am an active water person,
  • i want my children’s children’s children to know the wonder of staring at the azure skies, azure oceans and purple mountains,
  • Maybe technology will assist us in saving us from The Human Condition.

Timing

i have waited probably 15+ years to write about this ideation of such a system, mainly because the technological considerations were nowhere near where they needed to be and, to be extremely transparent, no one seemed to really think it was an issue until recently. The pandemic seems to have been a global wakeup call that, in fact, Humanity is fragile. There are shortages of resources in the most advanced societies. Further, there is the recent awareness that pollution levels appear (as reported) to be subsiding as a function of the reduction in humans’ daily involvement within the environment. To that point, over the past two years there appears to be an uptick of awareness in how plastics are destroying our oceans. This has a coupling effect: with the pandemic and other environmental concerns, there could potentially be a food shortage due to these highly nonlinear effects. This uptick in awareness has mainly been due to the usage of mobile computing and social media, which in and of themselves probably couldn’t have existed without plastics and massive natural resource consumption. So i trust the irony is not lost there.

From a technical perspective, open source and open source systems have become the way that software is developed. For those that have not read The Cathedral and The Bazaar and In The Beginning Was The Command Line, i urge you to do so; they will change your perspective.

We are no longer hampered by the concept of scale in computing. We can now create a system that behaves at scale with only a few human resources. You can do a lot with few humans now, which has been the promise of computing.

Distributed computing methods are now coming to fruition. We no longer think in terms of a monolithic operating system or in-place machine learning. Edge computing and fiber networks are accelerating this at an astonishing rate. Transactions now dictate trust. While we will revisit this during the design chapters of the blog, I’ll go out on a limb here and say these three features are cogent to distributed system processing (and possibly the future of computing at scale):

  • Incentive models
  • Consensus models
  • Protocol models

We will definitely be going into the deeper psychological, mathematical, and technical aspects of these items.

Some additional points of interest, and on timing. Microsoft recently released press about a Planetary Computer and announced the position of Chief Ecology Officer. While i do not consider Project Noumena to be of the same system type, there could be similarities on the ecological aspects which, just like in open source, create a more resilient base to work from.

The top market cap companies are all information theoretic-based corporations.  Humans that know the science, technology, mathematics and liberal arts are key to their success.  All of these companies are woven and interwoven into the very fabric of our physical and psychological lives.

Thus it is with the confluence of these items i believe the time is now to embark on this design journey.  We must address the Environment, Societal factors and the model of governance.

A mentor once told me one time in a land far away: “Timing is everything as long as you can execute.”  Ergo Timing and Execution Is Everything.

Goals

It is my goal to create a design and, hopefully, an implementation that utilizes computational means to truly assist in building models and sampling the world, where we can adhere to goals in making small but meaningful changes within what i am calling the 3R’s: recycle, redact, reuse. Further, i hope that with the proper incentive models in place, ones that are dynamic, it has a net positive feedback effect. Just as in complexity theory, a small change – a butterfly’s wings – can create hurricanes – in this case a positive effect.

Here is my overall plan. i’m not big on process or Gantt charts. I’ll be putting all of this in a README.md as well. I may ensconce the feature sets etc. into a Trello board or some other tracking mechanism to keep me focused – feel free to make recommendations in the comments section:

Action Items:

  • Create Comparative Models
  • Create Coarse-Grained Attributes
  • Identify underlying technical attributes
  • Attempt to coalesce into an architecture
  • Start writing code for the above.

Preamble

Humanity has come to expect growth as a material extension of human behavior. We equate growth with progress. In fact, we use the term exponential growth as if it were indefinitely positive. In most cases, for a fixed time interval, this means a doubling of the relevant system variable or variables. We speak of growth as a function of gross national production. In most cases, exponential growth is treacherous where there are no known or perceived limits. It appears that humanity has only recently become aware that we do not have infinite resources. Psychologically there is a clash between the exponential growth and the psychological or physical limit. The only significance is the relevant (usually local) limit. How does it affect me, us, and them? This can be seen throughout most game theory practices – dominant choice. The pattern of growth is not the surprise; the collision of the awareness of the limit with the ever-increasing growth function is the surprise.
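
A small numeric illustration of that collision (parameters are mine and purely illustrative): exponential and logistic growth track each other early on, and then the logistic curve hits its limit:

import numpy as np

r, K, x0 = 0.05, 100.0, 1.0  # growth rate, carrying capacity, initial value
t = np.arange(0, 301, 50)

exponential = x0 * np.exp(r * t)
logistic = K / (1 + ((K - x0) / x0) * np.exp(-r * t))

# Early on the two are nearly identical; the logistic curve then collides
# with its limit and flattens, which is the "surprise" described above.
for ti, e, l in zip(t, exponential, logistic):
    print(int(ti), round(e, 1), round(l, 1))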

One must stop and ask: 

Q: Are progress (and capacity) and the ever-increasing function a positive, and how do they relate to the 2nd law of thermodynamics, aka entropy? Must it always expand?

We are starting to see that our world can exert dormant forces that within our life can greatly affect our well-being. When we approach the actual or perceived limit, the forces, which are usually negative, begin to gain strength.

So given these aspects of why, i’ll turn now to start the discussion. If we do not understand history, we cannot predict the future by inventing it or, in most cases, re-inventing it as it were.

I want to start off the history by referencing several books that i have been reading and re-reading on the subjects of modeling the world, complexity, and models for collapse throughout this multipart blog. We will be addressing issues concerning complex dynamics as manifested with respect to attributes, model types, economics, equality, and mental concerns.

These core references are located at the end of the blog under references.  They are all hot-linked.  Please go scroll and check them out.  i’ll still be here.  i’ll wait.

Checked them out?  i know a long list. 

As you can see the core is rather extensive due to the nature of the subject matter.  The top three books are the main ones that have been the prime movers and guides of my thinking.  These three books i will refer to as The Core Trilogy:

World Dynamics

The Collapse of Complex Societies 

Six Sources of Collapse 

 As i mentioned i have been deeply thinking about all aspects of this system for quite some time. I will be mentioning several other texts and references along the continuum of creation of this design.

We will start by referencing the first book: World Dynamics by J.W. Forrester. World Dynamics came out of several meetings of the Club of Rome, a 75-person invite-only club founded by an executive of Fiat. The club set forth the following attributes for a dynamic model that would attempt to predict the future of the world:

  • Population Growth
  • Capital Investment
  • Geographical Space
  • Natural Resources
  • Pollution
  • Food Production

The output of this design was codified in a computer program called World3. It has been running since the 1970s, in what was then termed by many a golden age of society. All of these variables have been growing at an exponential rate. Here we see the model with the various attributes in action. There have been several criticisms of the models, and also analyses, which i will go into in further blogs. However, in some cases, the variants have been eerily accurate. The following plot is an output of the World3 model:

2060 does not look good

Issues Raised By World3 and World Dynamics

The issues raised by World3 and within the book World Dynamics are the following:

  • There is a strong undercurrent that technology might not be the savior of humankind
  • Industrialism (including medicine and public health) may be a more disturbing force than the population.  
  • We may face extreme psychological stress and pressures from a four-pronged dilemma via suppression of the modern industrial world.
  • We may be living in a “golden age” despite a widely acknowledged feeling of malaise.  
  • Exhortations and programs directed at population control may be self-defeating. Population control, if it works, would yield excesses thereby allowing further procreation.
  • Pollution and population seem to oscillate, whereas a high standard of living increases the production of food and material goods, which outrun the population. As agriculture hits a space limit and natural resources reach a pollution limit, the quality of life falls, equalizing the population.
  • There may be no realistic hope of underdeveloped countries reaching the same standard and quality of life as developed countries.  However, with the decline in developed countries, the underdeveloped countries may be equalized by that decline.
  • A society with a high level of industrialization may be unsustainable.  
  • From a long-term view, 100 years hence, it may be unwise for underdeveloped countries to seek the same levels of industrialization. The present underdeveloped nations may be in better conditions for surviving the forthcoming pressures. These underdeveloped countries would suffer far less in a world collapse.

Fuzzy Human – Fuzzy Model

The human mind is amazing at identifying structures of complex situations. However, our experiences train us poorly for estimating the dynamic consequences of said complexities.  Our mind is also not very accurate at estimating ad hoc parts of the complexities and the variational outcomes.  

One of the problems with models is, well, it is just a model. The subject-observer reference could shift, and the context shifts thereof. This dynamic aspect needs to be built into the models.

Also, while we would like to think that our mental model is accurate, it is really quite fuzzy and even irrational in most cases. Attempting to generalize everything into a singular model parameter is exceedingly difficult, and it is very difficult to transfer one industry’s model onto another.

In general, parameterization of most of these systems is based on some perceptual model we have rationally or irrationally invented.

When these models were created there was the consideration of modeling social mechanics of good-evil, greed – altruism, fears, goals, habits, prejudice, homeostasis, and other so-called human characteristics.  We are now at a level of science where we can actually model the synaptic impulse and other aspects that come with these perceptions and emotions.

There is a common cross-cutting construct in most complex models within this text, mainly concerned with the concept of feedback and how the non-linear relationships of these modeled systems feed back into one another. System-wide thinking permeates the text itself. On a related note, from the 1940s Dr. Norbert Wiener and others such as Claude Shannon worked on ballistic tracking systems and coupled feedback, in both a cybernetic and an information-theoretic fashion; Wiener regarded the concept of feedback as one of the most fundamental operations in information theory. This led to the extremely famous Wiener estimation filters. Also, side note: Dr. Wiener was a self-styled pacifist, proving you can hold two very opposing views in the same instance whilst being successful at executing both ideals.

Given that basic function of feedback, let’s look at the principal structures. Essentially the model states there will be levels and rates. Rates are flows that cause levels to change. Levels accumulate the net flow, through either addition or subtraction to that level. The various system levels can in aggregate describe the system state at any given time (t). Levels exist in all subsystems of existence. These subsystems, as you will see, include but are not limited to financial, psychological, biological, and economic. The reason i say not limited to is that i also believe there are some yet-to-be-identified subsystems at the quantum level. The differential, or rate of flow, is controlled by one or more systems. All systems that have some spatio-temporal manifestation can be represented by using the two variables, levels and rates. Thus, with respect to the spatial or temporal variables, we can have a dynamic model.

The below picture is the model that grew out of interest from the initial meetings of the Club of Rome. The inaugural meeting, which was the impetus for the model, was held in Bern, Switzerland on June 29, 1970. Each of the levels represents a variable in the previously mentioned major structures. System levels appear as right triangles. Each level is increased or decreased by the respective flow. As previously mentioned on feedback, any closed path through the diagram is a feedback loop. Some of the closed loops, given certain information-theoretic attributes, will be positive feedback loops that generate growth, and others that seek equilibrium will be negative feedback loops. If you notice something about the diagram, it essentially is a birth and death loop: the population loop, if you will. For the benefit of modeling, there are really only two major variables that affect the population: Birth Rate (BR) and Death Rate (DR). They represent the total aggregate rate at which the population is being increased or decreased. The system has coefficients that can initialize them to normal rates. For example, in 1970 BRN is taken as 0.0885 (88.5 per thousand), which is then multiplied by population to determine BR. DRN by the same measure is the outflow or reduction; in 1970 it was 9.5%, or 0.095. The difference is the net, and these are called the normal rates. The normal rates correspond to a physically normal world, when there are normal levels of food, material standard of living, crowding, and pollution. The influencers are then multipliers that increase or decrease the normal rates.

Feedback and isomorphisms abound


As a caveat, there have been some detractors of this model. To be sure, it is very coarse-grained; however, while i haven’t seen the latest runs or outputs, it is my understanding, as i said, that the current outputs are close. The criticisms come in the shape of “Well, it’s just modeling everything as y = x e^{rt}.” I will be using this concept and map, if you will, as the basis for Noumena. The concepts and values, as i evolve the system, will vary greatly from the World3 model, but i believe starting with a minimum viable product is essential here; as i said, humans are not very good at predicting all of the various outcomes in high-dimensional space. We can assess situations very quickly, but probable outcomes not so much. Next up we will be delving into the loops deeper and getting loopier.
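
Before getting loopier, here is a compact sketch of the level-and-rate structure described above. The coefficients and the crowding multiplier are illustrative stand-ins of my own (not the book’s calibrated BRN/DRN and multiplier tables); the point is the structure: the level P integrates the net of the two flows, and the multiplier closes a negative feedback loop:

# Level-and-rate structure: P is the level; BR and DR are the flows, each a
# normal rate times a multiplier. All numbers here are illustrative.
P = 3.6e9
brn, drn = 0.03, 0.01  # stand-in normal rates

def crowding_multiplier(p):
    # Toy negative feedback: births are suppressed as crowding increases.
    return 1.0 / (1.0 + p / 2e10)

for year in range(1970, 2171):
    if (year - 1970) % 50 == 0:
        print(year, f"{P:.3e}")
    BR = brn * P * crowding_multiplier(P)  # inflow
    DR = drn * P                           # outflow
    P += BR - DR                           # the level accumulates the net flow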

So this is the first draft, if you will, as everything nowadays can be considered an evolutionary draft.

Then again, isn’t all of this really just The_Infinite_Human_Do_Loop?

until then,

#iwishyouwater

tctjr

References:

(Note: They are all hotlinked)

World Dynamics

The Collapse of Complex Societies 

Six Sources of Collapse 

Beyond The Limits 

The Limits To Growth 

Thinking In Systems – Donella Meadows

Designing Distributed Systems – Brendan Burns

Introduction to Distributed Algorithms 

A Pragmatic Introduction to Secure Multi-Party Computation 

Reliable Secure Distributed Programming 

Distributed Algorithms 

Dynamic General Equilibrium Modeling 

Advanced Information Systems Engineering 

Introduction to Dynamic Systems Modeling 

Nonlinear Dynamics and Chaos 

Technological Revolutions and Financial Capital 

Marginalism and Discontinuity 

How Nature Works 

Complexity and The Economy 

Complexity a Guided Tour

Future Shock 

Agent_Zero 

Nudge Theory In Action

The Structure of Scientific Revolutions

Agent-Based Modelling In Economics

Cybernetics

Human Use Of Human Beings

The Technological Society

The Origins Of Order

The Lorax

Blog Muzak: Brian and Roger Eno: Mixing Colours