Rolling Ubuntu On An Old Macintosh Laptop

“What we’re doing here will send a giant ripple through the universe.”

Steve Jobs

I have an old mac laptop that was not doing anyone much good sitting around the house. i had formatted the rig, and due to it only being an early Intel Core i7 mac you could only roll up to the Lion OS. Also, i wanted a “pure” Linux rig and do not like other form factors (although i do dig the System76 rigs).

So i got to thinking: why don’t i roll Ubuntu on it and let one cat turn into another cat? See what i did there? Put a little shine on Ye Ole Rig? Here Kitty Kitty!

Anyways, here are the steps that i found to be the most painless.

Caveat Emptor: these steps completely wipe the drive, and Linux runs natively, wiping out any and all OSes. You WILL lose your OS X Recovery Partition, so returning to OS X or macOS will be a more long-winded process, but there are instructions on how to cope with this: How to restore a Mac without a recovery partition. You are going All-In!

On that note, i also don’t recommend trying to “dual-boot” OS X and Linux, because they use different filesystems and it will be a pain. Anyways, this is about bringing new life to an old rig; if you have a new rig with Big Sur, roll VirtualBox and run whatever Linux distro you desire.

What you need:

  • A Macintosh computer, which is the point of the whole exercise. i do recommend having NO EXTERNAL DRIVES connected, as you will see below.
  • A USB stick with at least 8 gigs of storage. This too will be formatted and all data lost.
  • Download your Linux distribution to the Mac. We recommend Ubuntu 16.04.4 LTS if this is your first Linux install. Save the file to your ~/Downloads folder. (A quick integrity check for the download is sketched just after this list.)
  • Download and install an app called Etcher from Etcher.io. This will be used to copy the Linux install .iso file to your USB drive.
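
Before flashing anything, it is worth checking that the .iso is not corrupt. Here is a minimal Python sketch; the filename and checksum below are placeholders (assumptions, not real values), so substitute the image you actually downloaded and the SHA-256 published on Ubuntu’s release page:

```python
import hashlib

# Placeholders: substitute your actual ISO name and the SHA-256 value
# published for it on Ubuntu's release page.
ISO_PATH = "ubuntu-16.04.4-desktop-amd64.iso"
EXPECTED_SHA256 = "paste-the-published-checksum-here"

sha256 = hashlib.sha256()
with open(ISO_PATH, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        sha256.update(chunk)

digest = sha256.hexdigest()
print(digest)
print("OK" if digest == EXPECTED_SHA256 else "MISMATCH - redownload the ISO")
```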

Steps to Linux Freedom:

  • Insert your USB Thumb Drive. A reminder that the USB Flash drive will be erased during this installation process. Make sure you’ve got nothing you want on it.

  • Open Etcher and click “Select Image”. Choose the image you downloaded above (e.g. ubuntu-16.04.4-desktop-amd64.iso). NOTE: i had some problems with the latest 20.x release and wireless, so i rolled back to 16.0x just to get it running.
  • Click “Change” under Select Drive. 

  • Pick the drive that matches your USB Thumb Drive in size. It should be /dev/disk1 if you only have a single hard drive in your Mac, or /dev/disk2, /dev/disk3 and so on if you have more drives attached. If in doubt, run diskutil list in Terminal and confirm the device node and size before flashing. NOTE: Do not pick /dev/disk0. That’s your hard drive! Pick /dev/disk0 and you’ll wipe your macOS hard drive. HEED THY WARNING YOU HAVE BEEN WARNED! This is why i said it’s easier if you have no external media.

  • Click “Flash!” and wait for the .iso to be copied to the USB Flash Drive. Go browse your favorite socnet, as this will take some time, or hop on your favorite learning network and catch up on those certificates/badges.

  • Once it is finished, remove the USB Flash Drive from your Mac. This is imperative.
  • Now SHUT DOWN the mac, then plug the flashed USB drive back into the mac.

  • Power up and hold the OPTION key while you boot.
  • Choose the EFI Boot option from the startup screen and press Return.
  • IMMEDIATELY press the “e” key.  i found you need to do this quickly otherwise the rig tries to boot.

  • Pressing the “e” key will enter you into “edit mode”. You will see a black and white screen with options to Try Ubuntu and Install Ubuntu. Don’t choose either yet; with an entry highlighted, press “e” to edit the boot entry.
  • This step is critical, and the font may be really small, so take your time. Edit the line that begins with linux and place the word “nomodeset” after “quiet splash”. The whole line should read: “linux /casper/vmlinuz.efi file=/cdrom/preseed/ubuntu.seed boot=casper quiet splash nomodeset --”

  • Now press F10 on the mac.
  • Now it’s getting cool! Your mac boots Ubuntu into trial mode!

(Note: at this point also go browse your favorite socnet as this will take some time or hop on your favorite learning network and catch up on those certificates/badges.)

  • Double-click the icon marked “Install Ubuntu”. (get ready! Here Kitty Kitty!)
  • Select your language of choice.
  • Select the “Install this third-party software” option and click Continue. Once again, important.
  • Select “Erase disk and install Ubuntu” and click Continue.
  • You will be prompted for geographic area and keyboard layout.
  • You will be prompted to enter the name and password you want to use (make it count!).
  • Click “Continue” and Linux will begin installing!
  • When the installation has finished, you can log in using the name and password you chose during installation!
  • At this point you are ready to go! i recommend registering for an Ubuntu “Live Update” account once it prompts you.
  • One side note: on the 20.x update there was an issue with the Broadcom wireless adapter crashing, which then reboots you without wireless. i am currently working through that and will get back to you on the fix!

Executing the command less /proc/cpuinfo will detail the individual cores. It looks like, as i said, the Intel Core i7 series!
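
If you want a summary rather than scrolling through less, here is a minimal Python sketch that parses /proc/cpuinfo (Linux-only, so run it on the new Ubuntu install, not on macOS):

```python
# Summarize /proc/cpuinfo: CPU model string(s) and logical core count.
models = set()
logical_cores = 0
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("processor"):
            logical_cores += 1  # one "processor : N" stanza per logical core
        elif line.startswith("model name"):
            models.add(line.split(":", 1)[1].strip())

print("model(s):", ", ".join(sorted(models)))
print("logical cores:", logical_cores)
```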

Happy Penguin and Kitten Time!  Now you can customize your rig!

Screen shot of keybase running on my ubuntu mac rig!

And that is a wrap! As a matter of fact i ate my own dog food and wrote this blog on the “new” rig!

Until Then,

#iwishyouwater

@tctjr

Muzak to blog by: Dreamways Of The Mystic by Bobby BeauSoleil

Vasper 21 Minute Workout (Review)

The last three or four reps is what makes the muscle grow. This area of pain divides the champion from someone else who is not a champion.

Arnold Schwarzenegger

Due to my hobbies and extracurricular activities i sometimes get in situations that are disadvantageous to my physical well-being. Let’s just say i have had my share of sprains, broken bones, and metal parts in my body, which, in addition to being a fan of Dr. Norbert Wiener’s work, is part of why i believe in Cybernetics. That said, over the past number of years i have had recurring chronic pain in my neck, back, shoulder, and hip due to what i would term having too much fun, whether it be martial arts, lifting weights, surfing, snowboarding, etc. Through Lisa Maki (click her name for her story on her trials of pain) i was introduced to Marc Dubick, MD, who is a Pain Medicine Specialist in Charleston, SC, and has over 46 years of experience in the medical field. He graduated from the University of Kentucky medical school in 1975. He also happens to be a really amazing human being. Here is a picture of Dr. Marc Dubick and Your Author:

Me and The Doc

I started, just as Lisa Maki did, with injections of recombinant human growth hormone and testosterone into the painful and dysfunctional areas of my body that would otherwise have had to be operated on or replaced. While these injections are extremely painful, over the years they help and heal the affected areas. i prefer the short-term pain over the complexities of invasive surgery.

Dr. Dubick has since retired and handed over the reins to a great doctor (and human), Dr. Todd Joye, who is a partner at interveneMD. He is continuing the rGH therapy, and it is proving just as effective as expected. This brings us to the current reason behind this blog: he has employed a rehabilitation and human-performance machine called Vasper.

Brochure of Vasper

As it turns out, professional sports teams and the military have started utilizing this machine with astounding results. To understand the physical benefits Vasper provides, the company conducted research backed by supporting literature, including a safety study verifying that Vasper is safe and easy to use for most people. Dr. Joye told me that when he took delivery he had been working with the creator of Vasper, and Vasper users talk about increases in energy and strength. In a small study, they observed significant increases in testosterone as a result of Vasper use. Though there may be other factors at play, it is likely this testosterone boost explains the improvements in performance after Vasper use, which were reported in a different study. Combined with its low physical and physiological impact, the anabolic hormone increase with Vasper use is an unbeatable combination for anyone who wants to increase their physical performance. It is well known that testosterone is a key hormone involved in regulating muscle growth, bone density, fat metabolism, and mood in both men and women. The Vasper folks explored this hormone with five professional baseball players across 8 Vasper sessions over 2 weeks. Their data showed an 80% increase in free testosterone levels, with an average increase of 132% across all participants.

Purported levels of free testosterone increase:

Given how much i pursue all things in human performance for mental and physical edges, i was skeptical. However, i am open to trying (almost) anything once if it shows the benefit of a performance edge. Thus i went for my first 21-minute session.

Your author doing his best duck face:

In preparation, you get suited up in cuffs for your thighs, biceps, and neck. You train barefooted, as the pedals are also supercooled. The thing that attracted me to the Vasper workout was that it is low impact. The principles behind Vasper cover three areas: compression, cooling, and interval training.

Compression and cooling create the effect of high-intensity (anaerobic) exercise without the major time soak or muscle damage. Compression also allows lactic acid to accumulate in your muscles, which drives signals to your brain requesting higher amounts of human growth hormone and testosterone to accelerate repair and recovery. The cooling also increases oxygen to the muscles.

As for interval training, it is obvious to anyone who has trained that, as far as i am concerned, it is the way to go: a high caloric burn in a shorter amount of time, with higher amounts of lactic acid buildup, which creates a feedback effect.

The system can be customized for limb reach and throw, and it tracks numerous analytics such as pulse oxygen, pulse rate, wattage, etc.

So what happened?

The staff at interveneMD set the system to slightly higher than intermediate. Well, dear reader, it blew my mind in the first and second sessions. In 21 minutes it felt like i had been deadlifting substantial weight with sprints in between sets for at least an hour, which i have done many times before, with an extremely sore body afterward. The workout was very intense and exhilarating.

To my amazement, i had zero pain and, in fact, greatly reduced pain, probably due to the endorphins released as well as the HGH and testosterone.

My favorite part is after the interval workout you lie on a super cooling mat for 6 minutes. Nighty night bunny rabbit!

For anyone who is into human performance or needs rehabilitation of any kind, i highly recommend finding a facility that has one of these for use. As a note, health insurance does pay some portion!

In full transparency, i have no affiliation with interveneMD or the makers of Vasper. This blog was written in order to amplify others, and because i was totally amazed after so many years of searching for novel ways to work out.

Once again, here are the links:

interveneMD

Vasper

Here is a great reference to Dr. Marc Dubick’s paper on rGH:

“Use of localized human growth hormone and testosterone injections in addition to manual therapy and exercise for lower back pain: a case series with 12-month follow-up.”

Hope everyone is safe!

Until then,

#iwishyouwater

tctjr

Muzak To Blog To: Johnny Smith – Kaleidoscope

FLAW: Not Thinking Big Enough or What Is Success?

I am an old man and have known a great many troubles, but most of them never happened.

Samuel Clemens

This morning whilst trying to motivate myself at 5AM EST to work out and lift weights i had a thought:

We almost never think big enough in our endeavors, and when we think we are thinking big enough, we hear the word “No” in some form or fashion.

After finally willing myself to work out, I walked into my living room, where i have some music stuff, and saw this poster, one of my most prized possessions. It was given to me when i was leaving Apple.

It was one of only three made during the famous “Here’s To The Crazy Ones” Campaign from Apple:

Here’s To The Crazy Ones Video

You might know the narrator of the video. He was a college dropout and was fired from the company he founded. Many miss him, as do I. Why is this important?

Let us whittle this back some more. i was thinking: when we picture ourselves doing something, or in the process of doing something, do we stop short of our truest desires? Better yet, if we have a passion, why don’t we go after it with a full heart? Or why, while we are executing on said passion, do we stop short?

Maybe it starts young. Let’s look at something that seems very innocuous at first: the simple word NO.

It hath been said words have meanings so let’s search – shall we?

Taken from Online Etymology Dictionary | Origin, history and meaning of English word

NO (adv)

“not in any degree, not at all,” Middle English, from Old English na, from ne “not, no” + a “ever.” The first element is from Proto-Germanic *ne (source also of Old Norse, Old Frisian, Old High German ne, Gothic ni “not”), from PIE root *ne- “not.” Second element is from Proto-Germanic *aiwi-, extended form of PIE root *aiw- “vital force, life, long life, eternity.” Ultimately identical to nay, and the differences of use are accidental.”

Years ago a UCLA survey reported that the average one-year-old child hears the word No! more than 400 times a day! You might think this is an exaggeration; however, when we say No! we usually say, “No, no, no!”. That’s three times in three seconds! If that child (or adult) is particularly active, then i could see this being a valid statistic. By the way, for any parents out there, don’t feel bad; we have all done it.

(SIDE NOTE: i do realize there are lies, damn lies and statistics – yet i digress).

What do you do when you are constantly being told what not to do? Or being told NO!  You can’t do that! We then “grow up”.

The passion and wonder of childhood fade. Yet it doesn’t have to, does it?

Now more than ever there are ways to monetize that passion (unless you’re independently wealthy, in which case you need not worry at all about such things).

One of my interview questions:

What is your true passion?

i truly want to know. What do YOU WANT? Whatever it is, or whatever the person says, i usually tell them to go do IT instead of what they think they should be doing. Caveat Emptor: there are always consequences.

I’ve heard all kinds of answers to this question: grow mushrooms, paint, be a comedian, join the Peace Corps, build the next (insert Apple, Microsoft, Google, etc. herewith), fireman, and yes, even a porn star.

Now why am i referencing the word ‘NO’ with respect to not thinking grandiosely, audaciously, stupendously enough?

Because we are told “you can’t” by those who do not understand YOUR passion, or those who cannot do what you do; to be even more succinct, they are probably scared at some level.

Now, did i ever say it was easy? No. In fact, when things appear to be completely dire straits (not to be confused with the band), it is usually the most opportune situation. Storms never last, they say, and you will most certainly feel at times like you are in some form of a storm. “Ordo Ab Chao,” as the old saying goes.

You will encounter criticism, countless setbacks, and in some cases ostracism, depending on how big your passions and executions are in certain areas. Do not let these deter you. The loudest negative voice you will have to deal with is the small voice inside your head at night: Nighty Night, but ya can’t do that….

Also, to that point, remember your passion is just a thought until you execute on it. Ideas are cheap. Everyone has ideas every day that they never act on, because of homeostasis or because they create a reason not to act upon their passion.

One of my all-time favorite guitarists is Steve Vai. He was 17 years old when he played with Frank Zappa.

In this video he talks about what it takes to be successful.

The only thing that is holding you back is the way YOU are thinking.  Again – What is it you truly want?

Whatever it is imagine yourself being there.

All of a sudden the reasons you can’t do it flood in and the word NO is echoed.

In the worst of times, go to the larger, big, audacious, outrageous picture of you executing your passion.

Hold it and make it precious.

Then move Ever Forward toward the next step closer to that vision, as you become a NO Collector from all of the naysayers who say it cannot be done. For every NO you collect, you are one step closer to your success.

Muzak To Blog To: Rome “Flight In Formation”.

Until then,

Be safe.

#iwishyouwater

@tctjr

Computing The Human Condition – Project Noumena (Part 2)

In the evolution of a society, continued investment in complexity as a problem-solving strategy yields a declining marginal return.

Joseph A. Tainter

Someone asked me if, from now on, my blog will only be about Project_Noumena. On the contrary.

I will be interspersing other subject matter within Parts 1 to (N) of Project_Noumena. To be transparent, at this juncture i am not sure where it will end or if there is even a logical MVP 1.0. As with open-source systems and frameworks, technically one never achieves V1.0, as the systems evolve; i tend to believe this will be the case with Project Noumena. i recently provided a book review on CatB and have a blog on Recurrent Neural Networks with respect to Multiple Time Scale Prediction in the works, so stuff is proceeding.

To that end, i would love comments and suggestions in the comments section as to anything you would like my opinion on or for me to write about. Also feel free to call me out on typos or anything else you see in error.

Further, within Project Noumena there are snippets that could become shorter blogs as well. Look at Project Noumena as a fractal-based system.

Now on to the matter at hand.

In the previous blog, Computing The Human_Condition – Project Noumena (Part 1), i discussed the initial overview of the model from the book World Dynamics. i will take a part of that model, what i call the main Human_Do_Loop(), and the main attributes of the model: the birth and death of humans. One must ask: if we didn’t have humans, would we have to be concerned with such matters as societal collapse? i don’t believe animals are concerned with such existential crises, so my answer is a resounding NO. We will be discussing such existential issues in this blog, although i will also address such items in future writings.

Over the years i have been asking myself: is this a biological model by definition? Meaning, do we have only cellular components involved? Is this biological modeling at its very essence? If we took the cell-based organisms out of the equation, what would we still have as far as models on Earth?

While i told myself i wouldn’t get too existential here, and i do want to focus on the models and then the codebases, i continually check the initial conditions of these systems, as for most systems they dictate the response for the rest of the future operations of said systems. Thus, for biological systems, are there physical parameters that govern the initial exponential growth rate? Can we model coarse-grained behavior with power laws and logistic curves? Is Bayesian reasoning biologically plausible at a behavioral level or at a neuronal level? Given that, what are the atomic units that govern these models?

These are just a sampling of the initial-condition questions i ask myself as i evolve through this process.

So with that long-winded introduction, and i trust i didn’t lose you, oh reader, let’s hop into some specifics.

Birth and Death Rates

The picture from the book depicts the basic birth and death loops in the population sector. These loops generate positive feedback, which causes growth: an increase in population P causes an increase in birth rate BR, and this, in turn, causes population P to further increase. The positive feedback loop would, if left to its own devices, create an exponentially growing situation. As i said in the first blog and will continue to say, we seem to have started treating exponential growth as a net positive over the years in the technology industry. In the case of basic population dynamics with no constraints, exponential growth is not a net positive outcome. (A toy version of this loop in code follows below.)
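
To make the loop concrete, here is a toy sketch; the birth and death fractions are invented for illustration and are not the coefficients from World Dynamics:

```python
# Toy sketch of the positive birth/death feedback loop described above.
# The fractions are invented for illustration, not taken from World Dynamics.
P = 1_000.0        # level: initial population
b, d = 0.04, 0.02  # assumed birth/death fractions per year

for year in range(51):
    if year % 10 == 0:
        print(f"year {year:2d}: population ~ {P:,.0f}")
    BR = b * P    # birth rate rises with population...
    DR = d * P    # ...and so does death rate
    P += BR - DR  # net 2% per year: unconstrained exponential growth
```

With nothing to damp it, the level P simply compounds; every other loop in the model exists to bend this curve.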

Once again, why start with simple models? The human mind is phenomenal at perceiving pressures, fears, greed, homeostasis, and other human aspects and characteristics, and at attempting to fit a structure to a given situation and categorize these as attributes thereof. However, the human mind is rather poor at predicting the behavior of dynamical systems, which is where the models come into play, especially with social interactions and what i am attempting to define from a self-organizing-theory standpoint.

The next loops with the most effect on behavior are a Pollution loop and a Crowding loop. As pollution POL increases, one can assume, up to a point, that nature absorbs and fixes the pollution; otherwise it is a completely positive feedback loop, and this, in turn, creates over-pollution, which we are already seeing the effects of around the world. One can then couple this with the amount of crowding humans can tolerate.

Population, Birth Rate, Pollution

We see this behavior in urban sprawl areas when we have extreme heat, extreme cold, or, let’s say, extreme pandemics. As the population rises, the crowding ratio increases, the birth rate multiplier declines, and birth rates fall. The increasing death rate and the declining birth rate are powerful system-dynamics stabilizers, coupled with pollution. This in turn obviously has an effect on food supplies. One can easily deduce that these seemingly simple coefficients within the relative feedback loops create oscillations, exponential growth, or exponential decay. The systems, while they seem large and rather stable, are very sensitive to slight variations. If you are not familiar with NetLogo, it is a great agent-based modeling language. I picked a simple pollution model where we can select the number of people, the birthrate, and the tree-planting rate.

population dynamics with pollution

As you can see, without delving into the specifics, after 77 years it doesn’t look too promising. i’ll either be using Python or NetLogo, or a combination of both, to extend these models as we add other references; a rough Python rendition of this kind of model is sketched below.
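
For readers without NetLogo handy, here is a rough Python rendition of the same idea. Every coefficient is invented for illustration (assumptions, not findings); the point is the loop structure: the population emits pollution, trees absorb some of it, and crowding plus pollution feed back to damp births and raise deaths.

```python
# Toy people/birthrate/tree-planting pollution model. All numbers invented.
P, POL = 1_000.0, 0.0
b, d = 0.04, 0.02    # normal birth/death fractions per year
emit = 0.002         # pollution emitted per person per year
absorb = 30.0        # fixed absorption per year (the "tree planting" knob)
capacity = 20_000.0  # crowding limit

for year in range(1, 78):  # 77 years, echoing the run shown above
    crowding = P / capacity
    birth_mult = max(0.0, 1.0 - crowding - POL / 1_000.0)  # crowding + pollution damp births
    death_mult = 1.0 + POL / 2_000.0                       # pollution raises deaths
    P += b * birth_mult * P - d * death_mult * P
    POL = max(0.0, POL + emit * P - absorb)

print(f"after {year} years: population ~ {P:,.0f}, pollution index ~ {POL:,.1f}")
```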

Ok enough for now.

Until Then,

#iwishyouwater

@tctjr

Book Review: The Cathedral and The Bazaar (Musings On Linux and Open Source By An Accidental Revolutionary)

“Joy, humor, and playfulness are indeed assets;” 

~ Eric S. Raymond

As of late, i’ve been asked by an extremely divergent set of individuals: what does “Open Source Software” mean?

That is a good question. While i understand the words, and words do have meanings, i am not sure it’s the words that matter here. Many people who ask me that question hear “open source” and hear or think “free”, which is not the case.

Also, if you have been on LinkedIn at all, you will see #Linux, #LinuxFoundation and #OpenSource tagged constantly in your feeds.

Which brings me to the current blog and book review.

CatB, as it is affectionately known in the industry, started out as, and still is, a manifesto, accessible via the world wide web. It was originally published in 1997 on the world wide wait and then in print form circa 1999. Then in 2001 came a revised edition with a foreword by Bob Young, the founding chairman and CEO of Red Hat.

Being that i prefer to use plain ole’ books, we are reviewing the physical revised and extended paperback edition in this blog, circa 2001. Of note for the picture: it has some wear and tear.

To start off, as you will see from the cover, there is a quote by Guy Kawasaki, Apple’s first Evangelist:

“The most important book about technology today, with implications that go far beyond programming.”

This is completely true. In the same train of thought, it even goes into the aspects of propriety and courtesy within conflict environments, how such environments are of a “merit not inherit” world, and how to properly respond when you are in vehement disagreement.

To relate it to the book review: What is a cathedral development versus a bazaar environment?

Cathedral is a tip of the fedora, if you will, to the authoritarian view of the world, where everything is very structured and there are at most a few who will approve moving the codebase forward.

Bazaar refers to the many: the many coding and contributing in a swarm-like fashion.

In this book, closed source is described as a cathedral development model and open source as a bazaar development model. A cathedral is vertically and centrally controlled and planned. Process and governance rule the project, not coding. The cathedral is homeostatic: if you build or rebuild Basilica Sancti Petri within Roma, you will not be picking it up by flatbed truck and moving it to Firenze.

The foreword in the 2001 edition is written by Bob Young, co-founder and original CEO of Red Hat. He writes:

“There have always been two things that would be required if open-source software was to materially change the world; one was for open-source software to become widely used and the other was the benefits this software development model supplied to its users had to be communicated and understood.”

Users here are an interesting target.  Users could be developers and they could be end-users of warez.  Nevertheless, i believe both conditions have been met accordingly.  

i co-founded a machine learning and NLP services company in 2007, wherein i had the epiphany, after my “second” read of CatB, that the future is in fact open source. i put second in quotes because the first time i read it, back in 1998, it wasn’t really an in-depth read, nor had i fully internalized it. At the time i was working at Apple in the CPU software department on OS9/OSX, knowing full well that OSX was based on the Mach kernel. The Mach kernel is often mentioned as one of the earliest examples of a microkernel. However, not all versions of Mach are microkernels. Mach’s derivatives are the basis of the operating system kernel in GNU Hurd and of Apple’s XNU kernel used in macOS, iOS, iPadOS, tvOS, and watchOS.

That being said, after years of working with mainly closed-source systems, in 2007 i re-read CatB. i literally had a deep epiphany that the future of all development would be open-source distributed machine learning, everywhere.

Then i read it recently – deeply – a third time.  This time nearly every line in the book resonates.

The third time with almost anything seems to be the charm. This third time through, i realized not only is this a treatise for the open-source movement, it is a call to arms, if you will, for the entire developer community to behave appropriately, with propriety and courtesy, in a highly matrixed collaborative environment known as the bazaar.

The most obvious question is:  Why should you care?  i’m glad you asked.

The reason you care is that you are part of the information economy. The top market-cap companies are all information-theoretic, developer-first companies. This means that these companies build things so others can build things. Software is truly eating the world. Think in terms of the recent pandemic: work (code) is being created at an amazing rate due to the fact that the information-work economy is distributed and essentially schedule-free. She who has distributed wins, and she who can code anytime wins. This also means that you are interested in building world-class software, and the building of this software is now a decentralized, peer-reviewed, transparent process.

The book is organized around Raymond’s various essays. It is important to note that just as software is an evolutionary process by definition, so are the essays in this book. They can also be found online. The original collection of essays dates back to 1992 on the internet: “A Brief History Of Hackerdom.”

The book is not a “how-to” cookbook but rather what i call a “why-to” map of the terrain. While you can learn how to hack and code, i believe it must be in your psyche. The book also uses the term “hacker” in the positive sense, to mean one who creates software, versus one who cracks software or steals information.

While the history and the methodology are amazing to me, the cogent commentary shows that the types of reasoning behind why hackers go into open source vary as widely as ice cream flavors.

Raymond goes into the theory of incentives with respect to the instinctive wiring of human beings.

“The verdict of history seems to be that free-market capitalism is the globally optimal way to cooperate for economic efficiency; perhaps, in a similar way, [the gift culture is the optimal way] to cooperate for generating (and checking!) high-quality creative work.”

He categorizes command hierarchy, exchange economy, and gift culture to address these incentives.  

Command hierarchy:

Goods are allocated in a scarce economy model by one central authority.

Exchange Economy:

The allocation of scarce goods is accomplished in a decentralized manner allowing scale through trade and voluntary cooperation.

Gift Culture:

This is very different from the other two methods or cultures. Abundance makes command-and-control relationships difficult to sustain. In gift cultures, social status is determined not by what you control but by what you give away.

It is clear that if we were to classify open-source hackerdom, it would be a gift culture. (It is beyond the current scope of this blog, but it would be interesting to do a neuroscience project analyzing the brain chemistry of open-source versus closed-source hackers as they work throughout the day.)

Given these categories, the essays then go on to define the written, and many times unwritten (read: secret), rules that operate within the open-source world via a reputation game. If you are getting the idea that it is tribal, you are correct. Interestingly enough, the open-source world has in many cases very divergent views on all the prickly things within the human condition, such as religion and politics, but one thing is a constant: ship high-quality code.

Without a doubt, the most glaring cogent commentary comes in a paragraph from the essay “The Magic Cauldron” entitled “Open Source And Strategic Business Risk”:

“Ultimately the reasons open source seems destined to become a widespread practice have more to do with customer demand and market pressures than with supply-efficiencies for vendors.”

And further:

“Put yourself for the moment in the position of a CTO at a Fortune 500 corporation contemplating a build or upgrade of your firm’s IT infrastructure.  Perhaps you need to choose a network operating system to be deployed enterprise-wide; perhaps your concerns involve 24/7 web service and e-commerce, perhaps your business depends on being able to field high-volume, high-reliability transaction databases.  Suppose you go the conventional closed-source route.  If you do, then you put your firm at the mercy of a supplier monopoly – because by definition there is only one place you can go to for support, bug fixes, and enhancements.  If the supplier doesn’t perform, you will have no effective recourse because you are effectively locked by your initial investment.”

FURTHER:

“The truth is this: when your key business processes are executed by opaque blocks of bits that you can’t even see inside (let alone modify), you have lost control of your business.”

“Contrast this with the open-source choice.  If you go this route, you have the source code, and no one can take that away from you. Instead of a supplier monopoly with a choke-hold on your business, you now have multiple service companies bidding for your business – and you not only get to play them against each other, but you also have the option of building your own captive support organization if that looks less expensive than contracting out.  The market works for you.”

“The logic is compelling; depending on closed-source code is an unacceptable strategic risk. So much so that I believe it will not be very long until closed-source single-vendor acquisitions, when there is an open-source alternative available, will be viewed as fiduciary irresponsibility, and rightly grounds for a shareholder lawsuit.”

THIS WAS WRITTEN IN 1997. LOOK AROUND THE WORLD WIDE WAIT NOW… WHAT DO YOU SEE?  

Open Source – full stop.

i will add that there was no technical explanation here, only business incentive and responsibility to the company you are building, rebuilding, or scaling. Further, this allows true software malleability and reach, which is the very reason for software.

i will also go out on a limb here and say that if you are a software corporation, one that creates software, you can play the monopoly and open-source models against each other within your corporation. Agility and speed to ship code are the only things that matter these days. Where is your GitHub? Or why is this not shipping TODAY?

This brings me to yet another amazingly prescient prediction in the book, where Raymond says that applications are ultimately where we will land for monetary scale. Well yes, there is an app for that….

While i have never met Eric S. Raymond, he is a legend in the field. We have much to thank him for in the area of software. If you have not read CatB and you work in the information sector, do yourself a favor: buy it today.

As a matter of fact here is the link: The Cathedral & the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary

Muzak To Blog To:  “Morning Phase” by Beck 

Resources:

http://www.opensource.org

https://www.apache.org/foundation/

Remembering 9.11

“We are dust in time and space.”

~ Of The Wand & Moon.

Today is a day that will live in infamy for many as a testament to what extremist beliefs can conjure into reality. To those who lost loved ones on 9.11.2001 – peace be with you. To those who survived 9.11.2001 – again peace be with you.

For me a different year, 9.11.2005, is etched in my mind. It is a strange phenomenon to me, as i spend much of my life always optimizing, or at least making myself believe that i am optimizing. While i did not lose a father or husband, i lost a dear friend and comrade: Steven Swenson.

Today, 9.11.2020, one of my true friends and comrades was returning from a freediving trip with his lovely wife, and i could hear the joy in both of their voices, performing an activity that my comrade Sven loved and died doing on this day in 2005: freediving. Living underwater on #OneBreath.

i realized through them on this day that the essence of it all came to fruition: The_Human_Do_Loop in action. And it is ok to truly feel, for without the depths we know no true joy.

to roma, leif, and gage – tell him i said hello – again.

i think he would have liked this song.

And I held the breath inside my lungs for days
And I saw myself as one of many waves
When I knew I’d become the ocean’s slave
I just stayed

And we carried far with all the waters past
Of the waves

I was not first I was not last
And if we saw a boat afloat we took the mast
So fast

There’s a part of it, that I’ll miss
At the heart of it, your cold kiss
From the start of it, I know this
Always apart of it

And before too long the waves grew out of hand
And they worked to keep the sea at their command
And the only thing they feared it seemed the sand
And dry land

There’s a part of it, that I’ll miss
At the heart of it, your cold kiss
From the start of it, I know this
Always apart of it

From the water there was born a bright blue roar
As it rolled and formed and calmed the ocean’s floor
And it finally rose and broke upon the shore
No more

There’s a part of it, that I’ll miss
At the heart of it, your cold kiss
From the start of it, I know this
Always apart of it (I know this)

There’s a part of it, that I’ll miss
At the heart of it, your cold kiss
From the start of it, I know this
Always apart of it
Always apart of it

i know at least one person who freedove today loves that song.

until then,

#iwishyouwater

tctjr.

Computing The Human Condition – Project Noumena (Part 1)

“I am putting myself to the fullest possible use, which is all I think any conscious entity can ever hope to do.” ~ HAL 9000

“If you want to make the world a better place take a look at yourself and then make a change.” ~ MJ.

First and foremost with this blog i trust everyone is safe.  The world is in an interesting place, space, and time both physically and dare i say collectively – mentally.

A Laundry List

Introduction

This past week we celebrated Earth Day. i believe i heard it was the 50th year of Earth Day. While I applaud the efforts and longevity, rather than a single day we should have Earth Day every day. Further, just “thoughting” about or tweeting about Earth Day, while it may wake up the posterior lobe of your pituitary gland and secrete some oxytocin, creating the warm fuzzies for you, really doesn’t create an action furthering Earth Day (much like typing /giphy YAY! in Slack).

As such, i decided to embark on a multipart blog about something i have been “thinking” about: what i call an Ecological Computing System. Then the more i thought about it, why stop at ecology? We are able to model and connect essentially anything; we now have models for the brain that, while coarse-grained, can account for gross behaviors; we have tons of data on buying habits and advertisements; and everything is highly mobile and distributed. Machine learning, which can optimize, classify, and predict with extremely high dimensionality, is no longer an academic exercise.

Thus, i suppose, taking it one step further from ecology, what would differentiate it from other efforts is that <IT> would actually attempt to provide a compute framework that would compute The Human Condition. I am going to call this effort Project Noumena. Kant, the eminent thinker of 18th-century Germany, defined a noumenon as a thing as it is in itself, as distinct from a thing as it is knowable by the senses through phenomenal attributes, and proposed that experience was a product of the mind.

My impetus for this is manifold:

  • i love the air, water, trees, and animals,
  • i am an active water person,
  • i want my children’s children’s children to know the wonder of staring at the azure skies, azure oceans and purple mountains,
  • Maybe technology will assist us in saving us from The Human Condition.

Timing

i have waited probably 15+ years to write about this ideation of such a system, mainly because the technological considerations were nowhere near where they needed to be, and, to be extremely transparent, no one seemed to really think it was an issue until recently. The pandemic seems to have been a global wakeup call that Humanity is, in fact, fragile. There are shortages of resources in the most advanced societies. Further, pollution levels appear (are reported) to be subsiding as a function of the reduction in humans’ daily involvement within the environment. To that point, over the past two years there appears to have been an uptake in awareness of how plastics are destroying our oceans. Coupled with the pandemic and other environmental concerns, there could potentially be a food shortage due to these highly nonlinear effects. This uptake in awareness has mainly been due to the usage of mobile computing and social media, which in and of themselves probably couldn’t have existed without plastics and massive natural resource consumption. So i trust the irony is not lost there.

From a technical perspective, open source and open-source systems have become the way that software is developed. For those that have not read The Cathedral and The Bazaar and In The Beginning Was The Command Line, i urge you to do so; it will change your perspective.

We are no longer hampered by the concept of scale in computing. We can create a system that behaves at scale with only a few human resources. You can do a lot with few humans now, which has been the promise of computing.

Distributed computing methods are now coming to fruition. We no longer think in terms of a monolithic operating system or in-place machine learning. Edge computing and fiber networks are accelerating this at an astonishing rate. Transactions now dictate trust. While we will revisit this during the design chapters of the blog, I’ll go out on a limb here and say these three features are cogent to distributed system processing (and possibly the future of computing at scale):

  • Incentive models
  • Consensus models
  • Protocol models

We will definitely be going into the deeper psychological, mathematical, and technical aspects of these items.

Some additional points of interest on timing: Microsoft recently released press about a Planetary Computer and announced the position of Chief Ecology Officer. While i do not consider Project Noumena to be of the same system type, there could be similarities on the ecological aspects, which, just as in open source, creates a more resilient base to work from.

The top market-cap companies are all information-theoretic corporations. Humans who know the sciences, technology, mathematics, and the liberal arts are key to their success. All of these companies are woven and interwoven into the very fabric of our physical and psychological lives.

Thus it is with the confluence of these items that i believe the time is now to embark on this design journey. We must address the environment, societal factors, and the model of governance.

A mentor once told me, in a land far away: “Timing is everything as long as you can execute.” Ergo, Timing and Execution Are Everything.

Goals

It is my goal to create a design and, hopefully, an implementation that utilizes computational means to truly assist in building models and sampling the world, where we can adhere to goals in making small but meaningful changes within what i am calling the 3R’s: recycle, redact, reuse. Further, i hope that with the proper dynamic incentive models in place, there is a positive feedback effect on mentality. Just as in complexity theory, a small change, a butterfly’s wings, can create hurricanes; in this case, a positive effect.

Here is my overall plan. i’m not big on process or Gantt charts. I’ll be putting all of this in a README.md as well. I may ensconce the feature sets etc. into a Trello board or some other tracking mechanism to keep me focused. Feel free to make recommendations in the comments section:

Action Items:

  • Create Comparative Models
  • Create Coarse-Grained Attributes
  • Identify underlying technical attributes
  • Attempt to coalesce into an architecture
  • Start writing code for the above.

Preamble

Humanity has come to expect growth as a material extension of human behavior. We equate growth with progress. In fact, we use the term exponential growth as if it is indefinitely positive. In most cases, for a fixed time interval, this means a doubling of the relevant system variable or variables. We speak of growth as a function of gross national product. In most cases, exponential growth is treacherous where there are no known or perceived limits. It appears that humanity has only recently become aware that we do not have infinite resources. Psychologically there is a clash between exponential growth and the psychological or physical limit. The only significant limit is the relevant (usually local) one: how does it affect me, us, and them? This can be seen throughout most game-theory practice: dominant choice. The pattern of growth is not the surprise; the surprise is the collision between the ever-increasing growth function and our awareness of the limit. (The fixed doubling interval falls straight out of the math, as shown below.)
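
For concreteness (this is standard math, not specific to any one model): a quantity growing as P(t) = P_0 e^{rt} doubles on a schedule set only by the rate r:

    \[ P(t + t_d) = 2P(t) \implies t_d = \frac{\ln 2}{r} \approx \frac{0.693}{r}, \qquad \text{e.g. } r = 0.07/\text{yr} \Rightarrow t_d \approx 10~\text{years} \]

This is why a constant-percentage growth assumption quietly commits you to a fixed doubling clock, no matter how large the system already is.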

One must stop and ask: 

Q: Are progress (and capacity) and the ever-increasing function a positive, and how do they relate to the 2nd law of thermodynamics, aka entropy? Must everything always expand?

We are starting to see that our world can exert dormant forces that can greatly affect our well-being within our lifetimes. When we approach the actual or perceived limit, these forces, which are usually negative, begin to gain strength.

So given these aspects of why, i’ll turn now to start the discussion. If we do not understand history, we cannot predict the future by inventing it, or in most cases re-inventing it, as it were.

I want to start off the history by referencing several books that i have been reading and re-reading on the subjects of modeling the world, complexity, and models for collapse; i will reference them throughout this multipart blog. We will be addressing issues concerning complex dynamics as they are manifested with respect to model types and attributes, economics, equality, and mental concerns.

These core references are located at the end of the blog under references.  They are all hot-linked.  Please go scroll and check them out.  i’ll still be here.  i’ll wait.

Checked them out? i know, a long list.

As you can see, the core is rather extensive due to the nature of the subject matter. The top three books are the main ones that have been the prime movers and guides of my thinking. These three books i will refer to as The Core Trilogy:

World Dynamics

The Collapse of Complex Societies 

Six Sources of Collapse 

As i mentioned, i have been deeply thinking about all aspects of this system for quite some time. I will be mentioning several other texts and references along the continuum of the creation of this design.

We will start by referencing the first book: World Dynamics by J.W. Forrester. World Dynamics came out of several meetings of the Club of Rome, a 75-person invite-only club founded by the President of Fiat. The club set forth the following attributes for a dynamic model that would attempt to predict the future of the world:

  • Population Growth
  • Capital Investment
  • Geographical Space
  • Natural Resources
  • Pollution
  • Food Production

The output of this design was codified in a computer program called World3. It has been running since the 1970s, what was then termed in many cases a golden age of society. All of these variables have been growing at an exponential rate. Here we see the model with the various attributes in action. There have been several criticisms of the models, and also analyses, which i will go into in further blogs. However, in some cases the variants have been eerily accurate. The following plot is an output of the World3 model:

2060 does not look good

Issues Raised By World3 and World Dynamics

The issues raised by World3 and within the book World Dynamics are the following:

  • There is a strong undercurrent that technology might not be the savior of humankind.
  • Industrialism (including medicine and public health) may be a more disturbing force than population.
  • We may face extreme psychological stress and pressures from a four-pronged dilemma via suppression of the modern industrial world.
  • We may be living in a “golden age” despite a widely acknowledged feeling of malaise.
  • Exhortations and programs directed at population control may be self-defeating. Population control, if it works, would yield excesses, thereby allowing further procreation.
  • Pollution and population seem to oscillate, whereas a high standard of living increases the production of food and material goods, which outrun the population. As agriculture hits a space limit and natural resources reach a pollution limit, the quality of life falls, equalizing the population.
  • There may be no realistic hope of underdeveloped countries reaching the same standard and quality of life as developed countries. However, with the decline in developed countries, the underdeveloped countries may be equalized by that decline.
  • A society with a high level of industrialization may be unsustainable.
  • From a long-term view, 100 years hence, it may be unwise for underdeveloped countries to seek the same levels of industrialization. The present underdeveloped nations may be in better condition for surviving the forthcoming pressures, and would suffer far less in a world collapse.

Fuzzy Human – Fuzzy Model

The human mind is amazing at identifying the structure of complex situations. However, our experiences train us poorly for estimating the dynamic consequences of said complexities. Our minds are also not very accurate at estimating the ad hoc parts of the complexities and the variational outcomes.

One of the problems with models is, well, that a model is just a model. The subject-observer reference could shift, and the context shifts with it. This dynamic aspect needs to be built into the models.

Also, while we would like to think that our mental model is accurate, it is really quite fuzzy and even irrational in most cases. Attempting to generalize everything into a singular model parameter is exceedingly difficult, and it is very difficult to transfer one industry’s model onto another.

In general parameterization of most of these systems is based on some perceptual model we have rationally or irrationally invented.  

When these models were created, there was consideration of modeling the social mechanics of good-evil, greed-altruism, fears, goals, habits, prejudice, homeostasis, and other so-called human characteristics. We are now at a level of science where we can actually model the synaptic impulses and other aspects that accompany these perceptions and emotions.

There is a common cross-cutting construct in most complex models within this text, mainly concerned with the concept of feedback and how the non-linear relationships of these modeled systems feed back into one another. System-wide thinking permeates the text itself. On a related note, starting in the 1940s Dr. Norbert Wiener and others such as Claude Shannon worked on ballistic tracking systems and coupled feedback, both in a cybernetic and an information-theoretic fashion, and Wiener regarded the concept of feedback as one of the most fundamental operations in information theory. This led to the extremely famous Wiener estimation filters. Also, side note: Dr. Wiener was a self-styled pacifist, proving you can hold two very opposing views in the same instance whilst being successful at executing both ideals.

Given that basic function of feedback, let’s look at the principal structures. Essentially the model states there will be levels and rates. Rates are flows that cause levels to change. Levels accumulate the net flow, through either additions to or subtractions from that level. The various system levels can, in aggregate, describe the system state at any given time (t). Levels exist in all subsystems of existence. These subsystems, as you will see, include but are not limited to the financial, psychological, biological, and economic. The reason i say “not limited to” is that i also believe there are some yet-to-be-identified subsystems at the quantum level. The differential, or rate of flow, is controlled by one or more systems. All systems that have some spatio-temporal manifestation can be represented using the two variables, levels and rates. Thus, with respect to the spatial or temporal variables, we can have a dynamic model.

The picture below is the model that grew out of interest from the initial meetings of the Club of Rome. The inaugural meeting, which was the impetus for the model, was held in Bern, Switzerland on June 29, 1970. Each of the levels represents a variable in the previously mentioned major structures. System levels appear as right triangles. Each level is increased or decreased by its respective flow. As previously mentioned regarding feedback, any closed path through the diagram is a feedback loop. Some of the closed loops, given certain information-theoretic attributes, will be positive feedback loops that generate growth, and others, which seek equilibrium, will be negative feedback loops. If you notice something about the diagram, it is essentially a birth and death loop: the population loop, if you will. For the benefit of modeling, there are really only two major variables that affect the population: Birth Rate (BR) and Death Rate (DR). They represent the total aggregate rates at which the population is being increased or decreased. The system has coefficients that initialize them to normal rates. For example, in 1970 BRN is taken as 0.0885 (88.5 per thousand), which is then multiplied by population to determine BR. DRN, by the same measure, is the outflow or reduction; in 1970 it was 9.5% or 0.095. The difference is the net, and these are called the normal rates. The normal rates correspond to a physically normal world, when there are normal levels of food, material standard of living, crowding, and pollution. The influencers are then multipliers that increase or decrease the normal rates.

Feedback and isomorphisms abound


As a caveat, there have been some detractors of this model. To be sure, it is very coarse-grained; however, while i haven’t seen the latest runs or outputs, it is my understanding, as i said, that the current outputs are close. The criticisms come in the shape of “Well, it’s just modeling everything as y = x e^{rt}.” I will be using this concept and map, if you will, as the basis for Noumena. The concepts and values, as i evolve the system, will vary greatly from the World3 model, but i believe starting with a minimum viable product is essential here; as i said, humans are not very good at predicting all of the various outcomes in high-dimensional space. We can assess situations very quickly, but probable outcomes not so much. (A minimal sketch of the level-and-rate structure in code follows.) Next up we will be delving deeper into the loops and getting loopier.
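
Here is a minimal sketch of that level-and-rate structure, using the BRN/DRN coefficients exactly as quoted above and stubbing the multiplier functions to 1.0 (the “normal world”). Note that, as quoted, DRN exceeds BRN, so this bare run decays; in the full model the food, crowding, and pollution multipliers are what tip the balance, which is exactly why the bare version reduces to the y = x e^{rt} criticism.

```python
# Minimal levels-and-rates sketch of the population sector described above.
# BRN/DRN are the normal-rate coefficients as quoted in the text; multipliers
# are stubbed to 1.0, so the run collapses to plain exponential behavior.
BRN = 0.0885  # normal birth rate (88.5 per thousand, per the text)
DRN = 0.095   # normal death rate (9.5%, per the text)

def birth_multiplier():
    return 1.0  # stub: food / crowding / pollution table functions go here

def death_multiplier():
    return 1.0  # stub

P = 3.6e9  # level: rough 1970 world population (an assumption)
for year in range(1970, 2021):
    if year % 10 == 0:
        print(f"{year}: {P / 1e9:.2f} billion")
    BR = BRN * birth_multiplier() * P  # rate: inflow to the level
    DR = DRN * death_multiplier() * P  # rate: outflow from the level
    P += BR - DR                       # the level accumulates the net flow
```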

So this is the first draft, if you will, as everything nowadays can be considered an evolutionary draft.

Then again, isn’t all of this really just The_Infinite_Human_Do_Loop?

until then,

#iwishyouwater

tctjr

References:

(Note: They are all hotlinked)

World Dynamics

The Collapse of Complex Societies 

Six Sources of Collapse 

Beyond The Limits 

The Limits To Growth 

Thinking In Systems by Donella Meadows

Designing Distributed Systems by Brendan Burns

Introduction to Distributed Algorithms 

A Pragmatic Introduction to Secure Multi-Party Computation 

Reliable Secure Distributed Programming 

Distributed Algorithms 

Dynamic General Equilibrium Modeling 

Advanced Information Systems Engineering 

Introduction to Dynamic Systems Modeling 

Nonlinear Dynamics and Chaos 

Technological Revolutions and Financial Capital 

Marginalism and Discontinuity 

How Nature Works 

Complexity and The Economy 

Complexity a Guided Tour

Future Shock 

Agent_Zero 

Nudge Theory In Action

The Structure of Scientific Revolutions

Agent-Based Modelling In Economics

Cybernetics

Human Use Of Human Beings

The Technological Society

The Origins Of Order

The Lorax

Blog Muzak: Brian and Roger Eno: Mixing Colours

Hello Multi-Worlds With IBM Q

“If you are not completely confused by quantum mechanics, you do not understand it.”
~ John Wheeler

A chandelier or computing device?

Introduction

i wanted to take advantage of the #socialdistancing to catch up on personal blog writing. One of the areas i have been meaning to start on is my sojourn into Quantum Computing, specifically with IBM’s Q framework, Qiskit (pronounced KIZ-KIT). Qiskit is an open-source quantum computing software development framework for leveraging today’s quantum processors in research, education, and business. Having read many of the latest texts (which i will add at the end of the blog) and implemented some initial Hello_World python scripts, i initially decided to put it away, due to the fact that it made Alice In Wonderland’s rabbit hole look tame. I did, however, go through some of the initial IBM learnings and received the following:

Quantum
I am Bonafide

So given that, i decided to fully re-engage and start the process. The first step, as with any language or framework, is to create the proverbial “Hello_World”. However, before we get into the code, let’s address what is in the Qiskit coding framework.

The following components are within the Qiskit framework: Terra, Aer, Aqua, and Ignis:

  • Terra: Within Terra is a set of tools for composing quantum programs at the level of circuits and pulses, optimizing them for the constraints of a particular physical quantum processor, and managing the batched execution of experiments on remote-access backends.
    • User Inputs (Circuits, and Schedules), Quantum Circuit, Pulse Schedule
    • Transpilers and optimization passes
    • Providers: Aer, IBM Quantum, and Third Party
    • Visualization and Quantum Information Tools (Histogram, State, Unitary, Entanglement)
  • Aer: Contains optimized C++ simulator backends for executing circuits compiled in Qiskit Terra, and tools for constructing highly configurable noise models for performing realistic noisy simulations of the errors that occur during execution on real devices.
    • Noise Simulation (QasmSimulator Only)
    • Backends ( QasmSimulator, StatevectorSimulator, UnitarySimulator)
    • Jobs and Results: Counts, Memory, Statevector, Unitary, Snapshots
  • Aqua: Libraries of cross-domain quantum algorithms upon which applications for near-term quantum computing can be built. Aqua is designed to be extensible and employs a pluggable framework where quantum algorithms can easily be added.
    • Qiskit Aqua Translators ( Chemistry, AI, Optimization, Finance )
    • Quantum Algorithms ( QPE, Grover, HHL, QSVM, VQE, QAOA, etc… )
    • Qiskit Terra ( Compile Circuits)
    • Providers: Aer, IBM Quantum, and Third Party
  • Ignis: A framework for understanding and mitigating noise in quantum circuits and systems. The experiments provided in Ignis are grouped into the topics of characterization, verification and mitigation.
    • Experiments: List of Quantum Circuits and Pulse Schedules
    • Qiskit Terra: Compile Circuits or Schedules
    • Providers: Qiskit Aer, IBM Quantum, Third Party
    • Fitters / Filters: Fit to a Model/Plot Results, Filter Noise

As one can see the components are cross-referenced across the entirety of the framework and provide the quantum developer a rich set of tools, algorithms, and methods for code creation.

Putting Your Toe In The First Quantum World

This section covers very basic quantum theory. There are several great textbooks on this subject and i will list some at the end of the blog with brief reviews. Suffice it to say you cannot be scared or shy away from “greek letters or strange symbols”; to fully appreciate what is happening you need “the maths”. That said, let us first define a qubit. Classical computers operate on ( 0 ) or ( 1 ): complete binary operations due to the nature of a diode or gate. Quantum computers operate on qubits, for quantum bits. These are represented by surrounding a name with ” | ” and ” > “. Thus a qubit “named” “1” can be written as |1\rangle. This notation is known as Dirac’s bra-ket notation. From a mathematical standpoint (and this is why the above uses the label “named”) a qubit is represented by a two-dimensional vector space over the complex numbers \mathbb{C}^2. This means that it takes two complex numbers to fully describe a qubit. Okay, so think about that… It takes two numbers to describe the state. Already strange, huh? The computational (or standard) basis corresponds to the two levels |0\rangle and |1\rangle, which correspond to the following vectors:

    \[|0\rangle = \begin{pmatrix}1\\0\end{pmatrix} \qquad |1\rangle = \begin{pmatrix}0\\1\end{pmatrix}\]

So remember that the state is described by two complex numbers. Well, the qubit does not always have to be in either |0\rangle or |1\rangle; it can be in an arbitrary quantum state, denoted |\psi\rangle, which can be any superposition |\psi\rangle = \alpha|0\rangle + \beta|1\rangle of the basis vectors. The superposition quantities \alpha and \beta are complex numbers; together they obey |\alpha|^2 + |\beta|^2 = 1. Interesting things happen when quantum systems are measured, or observed. Quantum measurement is described by the Born rule. In particular, if a qubit in some state |\psi\rangle is measured in the standard basis, the result 0 is obtained with probability |\alpha|^2, and the result 1 is obtained with the complementary probability |\beta|^2. Interestingly, a quantum measurement takes any superposition state of the qubit and projects it to either the state |0\rangle or the state |1\rangle, with a probability determined by the parameters of the superposition. Whew! What i found really cool was that all of the linear algebra is the same. Here is another really cool thing: to actually create the environment, the amazing scientists at the IBM Quantum Lab keep the temperature so cold (15 millikelvin in a dilution refrigerator) that there is no ambient noise or heat to excite the superconducting qubit. It is beyond the scope of this blog to explain why this is needed, but suffice it to say it involves making a superconductor, which is a material that conducts electricity without encountering any resistance, thus without losing any energy. Ok, let’s climb out of Alice’s rabbit hole and get to some practical code.
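As a concrete worked example (my own numbers, not from the IBM material), take the equal superposition with \alpha = \beta = 1/\sqrt{2}:

    \[|\psi\rangle = \frac{1}{\sqrt{2}}|0\rangle + \frac{1}{\sqrt{2}}|1\rangle, \qquad \Pr(0) = \left|\frac{1}{\sqrt{2}}\right|^2 = \frac{1}{2} = \Pr(1)\]

This is exactly the state the Hadamard gate produces in the Hello_World circuit below, and it is why the measured counts split roughly 50/50.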

Setting Up The Environment

So we are assuming the reader is familiar with setting up a python virtual environment and able to either pip install or utilize a package manager like anaconda for installing the respective libraries. The complete installation process can be found here: Installing Qiskit. For completeness, i will duplicate the cogent items in the following sections. i’ll also be posting a Jupyter Notebook to github.

The simplest way to use environments is by using the conda command, included with Anaconda. A Conda environment allows you to specify a specific version of Python and set of libraries. Open a terminal window in the directory where you want to work.

Create a minimal environment with only Python installed in it.

conda create -n name_of_my_env python=3
source activate name_of_my_env

Next, install the Qiskit package, which includes Terra, Aer, Ignis, and Aqua. (In this writeup i will only focus on the very basics; i will get to the others in later posts!)

pip install qiskit

NOTE: Starting with Qiskit 0.13.0 pip 19 or newer is needed to install qiskit-aer from precompiled binary on Linux. If you do not have pip 19 installed you can run pip install -U pip to upgrade it. Without pip 19 or newer this command will attempt to install qiskit-aer from sdist (source distribution) which will try to compile aer locally under the covers.

If the packages installed correctly, you can run conda list to see the active packages in your virtual environment.

There are some optional packages i suggest installing for really cool circuit visualizations that work in conjunction with matplotlib. You can install these optional dependencies with the following command:

pip install qiskit-terra[visualization]

To check if everything is running hop into the python prompt and type:

import qiskit

Getting an IBM Q account and API Key

Next, you will need to register for an IBM Q account. Click this link -> Register For IBM Q Account

Here is the link just in case:

https://quantum-computing.ibm.com/

IBM Q allows you to interface directly with IBM’s remote quantum hardware and quantum simulation devices. You can execute code locally on a quantum simulator; however, getting access to the hardware and understanding how noise affects the circuits and measurements is crucial in understanding quantum algorithm development. As with any remote system you need to lock it to an API Key. When you log in you will see the following:

Generate the API token and then click on Copy API Token to copy your API token and paste it into your Jupyter Notebook. I recommend using the JupyterLab Credential Store for these types of tokens and login credentials. We will come back to using the API Key so don’t misplace it!

So i am assuming you made it this far and have your venv activated and your Jupyter Lab / Notebook up and running.

Check your installation by performing the following; it should print out the latest version. Also run the following commands to store your API token locally for later use in a configuration file called qiskitrc. Replace MY_API_TOKEN with the API token value that you stored in your text editor or Jupyter Notebook. Note this method saves the credentials and token to disk; it is a matter of taste, and you can choose in-session usage as well. These are some standard imports.

%matplotlib inline
import numpy as np
import qiskit                 # needed so qiskit.__version__ resolves below
from qiskit import * 
from qiskit import IBMQ
from qiskit.tools.visualization import plot_histogram
qiskit.__version__
qiskit.__qiskit_version__

IBMQ.save_account('MY_API_TOKEN') # THIS IS YOUR API KEY FROM EARLIER!

[1]: 0.12.0

i appear to be up to date.

Next you want to make sure you are up to date on the latest versioning of the platform. Since November 2019 (and with version 0.4 of this qiskit-ibmq-provider package), the IBM Quantum Provider only supports the new IBM Quantum Experience, dropping support for the legacy Quantum Experience and Qconsole accounts. The new IBM Quantum Experience is also referred to as v2, whereas the legacy one and Qconsole as v1.

IBMQ.update_account()

Depending on your credentials you will either get a listing of updated credentials or a message that you are up to date.

IBM Q has various backends to run your code upon. The default is a full-fledged simulator that is invoked locally, which is very convenient. The next invocation method is via direct quantum computing hardware access. i must say it is astounding that one can access real quantum hardware via open-source tooling.

By default, all IBM Quantum Experience accounts have access to the same, open project (hub: ibm-q, group: open, project: main). For convenience, the IBMQ.load_account() and IBMQ.enable_account() methods will return a provider for that project. If you have access to other projects, you can use:

provider_2 = IBMQ.get_provider(hub='MY_HUB', group='MY_GROUP', project='MY_PROJECT')

i used the following to check out the available backends. Note: the name is just a name, not the location of the hardware:

IBMQ.load_account()   # load the account credentials saved to disk earlier
provider = IBMQ.get_provider(group='open')
provider.backends()
[10]: [<IBMQSimulator('ibmq_qasm_simulator') from IBMQ(hub='ibm-q', group='open', project='main')>,
 <IBMQBackend('ibmqx2') from IBMQ(hub='ibm-q', group='open', project='main')>,
 <IBMQBackend('ibmq_16_melbourne') from IBMQ(hub='ibm-q', group='open', project='main')>,
 <IBMQBackend('ibmq_vigo') from IBMQ(hub='ibm-q', group='open', project='main')>,
 <IBMQBackend('ibmq_ourense') from IBMQ(hub='ibm-q', group='open', project='main')>,
 <IBMQBackend('ibmq_london') from IBMQ(hub='ibm-q', group='open', project='main')>,
 <IBMQBackend('ibmq_burlington') from IBMQ(hub='ibm-q', group='open', project='main')>,
 <IBMQBackend('ibmq_essex') from IBMQ(hub='ibm-q', group='open', project='main')>,
 <IBMQBackend('ibmq_armonk') from IBMQ(hub='ibm-q', group='open', project='main')>]
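As a side note: if you would rather target real hardware than the simulator, a handy pattern is to let the provider pick the least busy device. A sketch, assuming the qiskit-ibmq-provider of this era (least_busy lives in qiskit.providers.ibmq; the 5-qubit filter is just my choice):

from qiskit.providers.ibmq import least_busy

# Keep only real (non-simulator) devices with at least 5 qubits
real_devices = provider.backends(
    filters=lambda b: b.configuration().n_qubits >= 5
                      and not b.configuration().simulator)
backend = least_busy(real_devices)
print(backend)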

Running Your First Circuits

There are several ways to run your first circuits. There is online access via in-place Jupyter Notebooks as well as a visual circuit designer called IBM Circuit Composer, which you can access via your IBM Q account. i will be describing steps using python code and direct Qiskit usage due to flexibility, transparency, and granularity over the environment. The following sets the backend to the 'ibmq_qasm_simulator':

my_provider = IBMQ.get_provider()
my_provider.backends()
backend = my_provider.get_backend('ibmq_qasm_simulator')   # assign it for later use

So, some terminology: registers are used to create circuits, and circuits act upon registers. Now let’s actually look at some code that generates registers as well as a quantum circuit:

Here is a script that starts off with an input of 2 quantum “0” bits and ends by outputting the classical equivalent bits:
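Below is a minimal sketch of that script. It is the canonical Qiskit Bell-state example, consistent with the gate-by-gate walkthrough after the output; the shots=1000 setting is my assumption based on the counts shown below.

from qiskit import QuantumCircuit, execute, Aer

# Local QASM simulator backend
simulator = Aer.get_backend('qasm_simulator')

# 2 qubits (each starting in |0>) and 2 classical bits
circuit = QuantumCircuit(2, 2)

# Hadamard gate on qubit 0: puts it into superposition
circuit.h(0)

# Controlled-NOT with control qubit 0 and target qubit 1: entangles the pair
circuit.cx(0, 1)

# Measure qubits 0 and 1 into classical bits 0 and 1
circuit.measure([0, 1], [0, 1])

# Execute 1000 shots and tally the outcomes
job = execute(circuit, simulator, shots=1000)
counts = job.result().get_counts(circuit)
print("Total count for 00 and 11 are:", counts)

# ASCII art of the circuit
print(circuit.draw())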

So if you run this you will get the output:

Total count for 00 and 11 are: {'00': 517, '11': 483}

Here is what is happening:

  • QuantumCircuit.h(0): A Hadamard gate H on qubit 0, which puts it into a superposition state.
  • QuantumCircuit.cx(0, 1): A controlled-NOT operation (CX) on control qubit 0 and target qubit 1, putting the qubits in an entangled state.
  • QuantumCircuit.measure([0,1], [0,1]): if you pass the entire quantum and classical registers to measure, the ith qubit’s measurement result will be stored in the ith classical bit.
Your First Quantum Circuit

So this is an ASCII printout. i was really impressed when i found out this tidbit. You can also pass in “mpl” for matplotlib or “latex” for full on latex beautification!

circuit.draw("mpl") and circuit.draw("latex")
Matplotlib representation of the circuit

NOTE: The latex and latex_source drawers need pylatexenc installed. Run "pip install pylatexenc" before using the latex or latex_source drawers. Professor Donald Knuth will be pleased.

#Plot a histogram
plot_histogram(counts)
Histogram showing the probability of results

The observed probabilities Pr(00) and Pr(11) are computed by taking the respective counts and dividing by the total number of shots.
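In code that is a one-liner; an illustrative snippet of my own, reusing the counts dictionary from the run above:

shots = sum(counts.values())
probabilities = {bits: n / shots for bits, n in counts.items()}
print(probabilities)  # e.g. {'00': 0.517, '11': 0.483}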

Next Steps

So this is just a small step into the world of quantum programming. Below i have included several resources for study. If you are interested in pursuing this area i do urge you to take your time. i hope this at least gives you a perspective and provides a vehicle for entry. i personally feel completely humbled every time i start to read or re-read something in this area. Quantum computing is going to change the way we view our world. i for one will be going deeper into this area as far as i am intellectually capable of taking the process.

NOTE: The title of this blog refers to the theory of Minkowski Multi-Worlds with a pun on Hello_World. The many-worlds interpretation implies that there is a very large, perhaps infinite, number of universes. It is one of many multiverse hypotheses in physics and philosophy. MWI views time as a many-branched tree, wherein every possible quantum outcome is realized.

Resources

IBM Q User Guides All of the official IBM Q User Guides – very comprehensive.

IBM Q Wikipedia – a good reader’s digest of the history of IBM Q

The IBM Quantum Experience – the entry and dashboard experience

IBM Q online book – an amazing interactive experience that covers everything from physics and linear algebra to code.

Mastering Quantum Computing with IBMQX – a great practical well-written book on how to get your hands coding on IBM Q

Dancing with Qubits – written by Dr. Bob Sutor of IBM, a wonderful text on the mathematics and processes of quantum computing

Practical Quantum for Developers – a multi-disciplinary book that covers all aspects of coding for quantum from python, apis, cryptography, and even game theory.

Quantum Computing – A Gentle Introduction – this book covers the fundamentals of quantum computing in a very pragmatic fashion and focuses on the mathematical aspects.

Quantum Algorithms via Linear Algebra – the title is the content. Ready, set, Linear Algebra – it’s the same stuff, only quantum!

Quantum Computing for Computer Scientists – very close to the Gentle Introduction text however it covers the theory in-depth and also goes over several different types of algorithms.

Minkowski Multi-Worlds – The many-worlds interpretation implies that there is a very large, perhaps infinite, number of universes. It is one of many multiverse hypotheses in physics and philosophy. MWI views time as a many-branched tree, wherein every possible quantum outcome is realized. This is intended to resolve some paradoxes of quantum theory, such as the EPR paradox and Schrödinger’s cat, since every possible outcome of a quantum event exists in its own universe. If you ask, i’ll say the cat is dead.

Until then,

#IWishYouWater

tctjr

COVID-19 Complexity Relationships

As most are probably aware, and i hope that you are at this point, COVID-19 appears to be a very serious worldwide concern. From a complexity-systems-relationship standpoint there are several interesting aspects here that for some might be self-evident and for others might not be so self-evident. First, let us start with some observations concerning health and wellness in general:

Your wellness and health are the most important aspect of your life:

  • by definition, it is distributed
  • by definition, it affects others – e.g. it’s networked
  • by definition, it involves proximity – human caring and empathy

Given that it is a networked system, it can have very non-linear behaviors. i was just having a discussion about an issue that could have a great effect (and affect) upon seemingly unrelated entities. Paper money is a fragile medium and can also carry chemicals and pathogens. Of interest:

The World Health Organization (WHO) has advised people to wash their hands and stop using cash if possible as the paper bills may help spread coronavirus.

here is the link:

https://www.ktvu.com/news/contaminated-cash-may-spread-coronavirus-world-health-organization-warns

The other happening is that large corporations are canceling travel and conferences.

This brings me to the non-linear relationships, which are two-fold for now, though there will be several others: (1) cryptocurrency usage will skyrocket; (2) “de-officing” will start a trend in remote telecommuting work, which will cause teleconferencing companies’ stock to increase.

Just some observations.

Until then,

Be safe and I wish You Water.

@tctjr

NeurIPS 2019

And they asked me how I did it, and I gave ’em the Scripture text,
“You keep your light so shining a little in front o’ the next!”
They copied all they could follow, but they couldn’t copy my mind,
And I left ’em sweating and stealing a year and a half behind.

~ “The Mary Gloster”, Rudyard Kipling, 1896

My Badge – I exist.

Well, your humble narrator finally made it to NeurIPS 2019. There were several starts and stops to my travel itinerary but I finally persevered!

Bienvenue – Vancouver, British Columbia

First and foremost, while the location (at least for me) required multiple hops, Vancouver, BC is a beautiful city. The Vancouver conference center is spacious and an exemplary venue. Also, for those that have the time, Whistler / Blackcomb is one of the best mountains in North America for snow sports this time of year. While I didn’t get to go, I am hopeful that I will win the registration lottery for 2020 and will plan accordingly.

Vancouver Conference Center – Oh Canada!

This year the conference was a veritable who’s who of information-theoretic companies. Most of the top market-cap companies are now information-theoretic technology companies and as such had representation at the conference. To wit, IBM Research AI was a diamond sponsor:

While it is nearly impossible to quantify the breadth and depth of the subject matter presented at the conference, I have attempted to classify some overall themes:

  • Agent-Based Modelling and Behaviors
  • Imitation, Meta, Transfer, Policy Learning and Behavioral Cloning
  • Morphological Systems based on Evolutionary Biology
  • Optimization methods for non-convex models
  • Hybrid Bayesian and MCMC methods
  • Ordinary Differential Equation (ODE) direct Modelling and Systems
  • Neuroscience models that couple computational agents and hypotheses of consciousness

Side Note: I think it is amazing that 10 years ago you could not say “I’m using a Neural Network for …” without being laughed out of the room. Now there is an entire set of tracks dedicated to said technology and algorithms.

The one major difference at this conference, compared to what I have read and heard (albeit second-hand, through reports or blogs), is the focus on “Where is your github?” and the question of how fast we can get to production. There was a very focused and volitional undertone to the questions.

One aspect that has not changed and appears to have been amplified is the recruiter/job marketplace and (ahem) situation at the conference. To say that it was transparent and out in the open would be an understatement.

New To NeurIPS:

For those that have never been to NeurIPS, I’ll provide some recommendations:

  • Download the conference app and fill out your profile
  • Plan your agenda
  • Get to the poster sessions – early
  • Network as much as possible
  • Wear comfortable shoes – it is in the same venue next year, lots of walking.
  • Attempt to get as close a hotel as possible due to P(Rain | Conference Timing) > 0.5

Trends and Categories:

Agent-Based Modelling and Behaviors

This area is finally coming to fruition in the production market at scale. We are seeing both ABB (agent-based behaviors, aka self-emergent / self-organizing behaviors) and ABM (agent-based modeling). There were many presentations on multi-agent behaviors in the context of both policy and environment responses using reinforcement learning and q-learning.

Imitation, Meta, Transfer, Policy Learning and Behavioral Cloning

I grouped all of these together; while technically they are different in application and scope, they can be and are mixed together in applied systems. For instance, in imitation learning (IL), instead of trying to learn from sparse rewards or manually specifying a reward function, an expert (typically a human) provides us with a set of demonstrations. The agent then tries to learn the optimal policy by following, i.e. imitating, the expert’s decisions. Historically this was called Expert Systems Engineering. Note the policy learning implicit in this area as well. Furthermore, behavioral cloning is a method by which human subcognitive skills can be captured and reproduced in a computer program: as the human subject performs the skill, his or her actions are recorded along with the situation that gave rise to the action. So as one can see, all of these areas are closely related to a so-called expert reference. Algorithms of consensus among multi-agents will play a crucial role here. A toy sketch of behavioral cloning follows below.
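To make the imitation idea concrete, here is that sketch. Everything in it is illustrative and of my own invention (the 4-dimensional states, the threshold “expert”, and the MLPClassifier choice); it is not from any NeurIPS paper:

import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical expert demonstrations: (state, action) pairs recorded
# while the "expert" performs the skill
rng = np.random.default_rng(0)
expert_states = rng.random((1000, 4))                      # toy 4-d states
expert_actions = (expert_states[:, 0] > 0.5).astype(int)   # toy expert rule

# Behavioral cloning is plain supervised learning on the expert's decisions
policy = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500,
                       random_state=0)
policy.fit(expert_states, expert_actions)

# At run time the cloned policy maps a new situation to an action
new_state = rng.random((1, 4))
print(policy.predict(new_state))

Note the failure mode the conference talks kept circling: the clone only sees states the expert visited, so small errors compound once it drifts off the expert’s distribution.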

Morphological Systems based on Evolutionary Biology

Morphology is a branch of biology dealing with the study of the form and structure of organisms and their specific structural features. Turing wrote a paper on morphogenesis and S. Kauffman wrote “The Origins of Order: Self-Organization and Selection in Evolution”, just to name a few. We are headed into areas where physics, chemistry, and biology are being brought into play with computing, once again at scale. This multi-modality computing will also benefit from access to the developments in accessible quantum computing.

Optimization methods for non-convex models

Gradient descent in all of its flavors has been our friend for decades. Are the local minima our friend or foe? The algorithms are now starting to ask, “Where am I?”

Hybrid Bayesian and MCMC methods

In 2007 I founded a machine learning and NLP as a service company called “BeliefNetworks”. This self-referencing name should illustrate where I stand on inference methods. Due to access to cycles and throughput, we are finally starting to see these methods integrated system-wide.

Ordinary Differential Equation (ODE) direct Modelling and Systems

Having worked for years in the areas of numerical optimization, this is another area that is near and dear. I saw several papers mapping ODEs to geometric representations. Analog computing could very well be in our return to the future. Navier-Stokes equation, anyone? I see the industry moving into flow models that truly model the foundational Cauchy momentum equations, depending on the application area. We are going to see both software and hardware development in this area.

Neuroscience models that couple computational agents and hypotheses of consciousness

Given all of the above, computer scientists are pulling in physicists, biologists, chemists and, finally, neuroscientists. Possibly the “C” word is no longer anathema? I promise I will not insert a terminator picture here. However, given the developments in cognition and understanding quantum biology, we are now starting to be able to model, at least initially, what we “think” we are thinking about in some cases. Yoshua Bengio gave a great talk on volitional, causal and “conscious” tasks easily accomplished by humans. We also see this in the developments in the area of spiking algorithms.

Papers, Posters, Demos – Oh My!

As part of this blog, I wanted to review a couple of my favorite presentations, posters, and papers. While this is neither a ranked list nor a chronological review, it is a list of papers that resonated with me for various reasons. While I will be listing papers, I will also be posting pictures of poster papers and some meetups that I attended.

Blind Super-Resolution Kernel Estimation using an Internal-GAN

This paper was interesting to me on several fronts. The basic premise for super-resolution kernels is thus:

    \[I_{LR} = (I_{HR} \ast k_s)\downarrow_s\]

The paper introduced “KernelGAN” – an image-specific internal-GAN which estimates the SR kernel that best preserves the distribution of patches across scales of the LR image. This is what I would consider significant progress over previous methods: it estimates an image-specific SR kernel based on the LR image alone, which allows a one-shot mode of training based on the LR image. Network training is done during test time; there is no actual inference step since the training implicitly contains the resulting SR kernel. They give results in the paper as well as performance metrics based on the NTIRE 2018 dataset, although given that this is a first application of a deep linear network, I would imagine this doesn’t really do it justice. Very impressive, and I can see several applications of this method and algorithm.

Project website: http://www.wisdom.weizmann.ac.il/~vision/kernelgan

q-means: A Quantum Algorithm for Unsupervised Machine Learning

The cogent aspect of this paper was the efficiency of storing the vectors. First, classical data expressed in the form of N-dimensional complex vectors can be mapped onto quantum states over \log_2 N qubits when the data is stored in a quantum random access memory (qRAM). Specifically, the distance estimation becomes very efficient when one has quantum access to the vectors and the centroids via qRAM. The optimization yields a k-means running time of

    \[T = O(\log d)\]

Further, the paper showed that you can also query the norm of the vectors within the state preparation.

Making AI Forget You: Data Deletion in Machine Learning

One of the issues with GDPR legislation and the right to be forgotten comes up when you must re-train on the entire data set. This paper addresses methodologies that enable partial re-training. The paper goes over past methods of cryptography and differential privacy, which do not delete data but attempt to make data private or non-identifiable. From the paper: “Algorithms that support efficient deletion do not have to be private, and algorithms that are private do not have to support efficient deletion. To see the difference between privacy and data deletion, note that every learning algorithm supports the naive data deletion operation of retraining from scratch. The algorithm is not required to satisfy any privacy guarantees. Even an operation that outputs the entire dataset in the clear could support data deletion, whereas such an operation is certainly not private.” The paper goes on to define four areas of metric performance for deletion-efficient machine learning: Linearity, Laziness, Modularity, and Quantization. They do state that they assumed user-based deletion requests correspond to only a single datapoint, and that this needs to be extended. However, for the unsupervised k-means they describe, they achieve deletion efficiency with substantial algorithm speedup.

paper here: https://arxiv.org/pdf/1907.05012.pdf

Causal Confusion in Imitation Learning

From Wikipedia: “Behavioral cloning is a method by which human sub-cognitive skills can be captured and reproduced in a computer program. As the human subject performs the skill, his or her actions are recorded along with the situation that gave rise to the action.” The fundamental premise was comparing expert versus computational policy and minimizing a graph-conditioned loss:

    \[\mathbb{E}_G\left[\,\ell\left(f_\phi([X_i \odot G,\ G]),\ A_i\right)\right]\]

where G is drawn uniformly at random over all 2^{n} graphs, and they optimize a mean squared error loss for the continuous action environments and a cross-entropy loss for the discrete action environments. Something very interesting happens during this process of imitation learning with experts. In particular, it leads to a counter-intuitive “causal misidentification” phenomenon: access to more information can yield worse performance, ergo more is not better! The paper demonstrates this in an autonomous vehicle scenario, with phases of targeted intervention to predict the graph behavior. They did state the solutions are not production-ready. I really appreciated the honesty.

paper: https://papers.nips.cc/paper/9343-causal-confusion-in-imitation-learning.pdf

Learning To Control Self-Assembling Morphologies: A Study of Generalization via Modularity

The idea of modular and self-assembling agents goes back at least to Von Neumann’s Theory of Self-Reproducing Automata. In robotics, such systems have been termed “self-reconfiguring modular robots”. E. Schrödinger posed this same question in “What is Life?”. This was one of my favorite demonstrations and presentations. I have been extremely “pro” using agent-based self-organizing algorithms for quite some time. This paper and presentation utilize zero-shot generalization, training policies that generalize to changes in the number of limbs of the entity as well as the environment. They then pick the best model from training and evaluate it without any fine-tuning at test time.

paper: https://arxiv.org/pdf/1902.05546.pdf

Quantum Wasserstein GANs

The poster and paper dealt with supposedly the first design of quantum Wasserstein Generative Adversarial Networks (WGANs), which have been shown to improve the robustness and the scalability of the adversarial training of quantum generative models on noisy quantum hardware. Parameterized quantum circuits can be used as a parameterized representation of functions, so-called quantum neural networks, which can be applied to classical supervised learning models or to construct generative models. The paper also showed how to turn the quantum Wasserstein semimetrics into a concrete design of quantum WGANs that can be efficiently implemented on quantum machines. FWIW, in functional analysis pseudometrics often come from seminorms on vector spaces, and so it is natural to call them “semimetrics”. The paper used WGANs to generate a 3-qubit quantum circuit of 50 gates that approximated a 3-qubit simulation circuit requiring over 10k gates using off-the-shelf standard techniques. The QWGAN was then used to approximate complex quantum circuits with smaller circuits. A smaller circuit was trained to approximate the Choi–Jamiolkowski isomorphism, or Choi state, which encodes the action of a quantum circuit.

Deep Signature Transforms

Signatures refer to a set of statistics computed from a stream of data. The other sense of the term is the signature transform, sometimes also called the transform kernel, in which a curve is modeled as a linear combination of basis elements. Signatures provide a basis for functions on the space of curves, and these functions can then be used as operative building blocks. The stream can be defined as:

    \[S(V) = \{\,x = (x_1, \ldots, x_n) : x_i \in V,\ n \in \mathbb{N}\,\}\]

This also has interesting ramifications as a feature mapping/engineering process, as well as for embedding the signatures within algorithms, in this case as a layer within a neural network. This is akin to some fingerprinting techniques in the past for media, and the paper does mention: “in order to preserve the stream-like nature is to sweep a one-dimensional convolution along the stream.” The embedding techniques as part of the path, and their preserving nature, made this an extremely enjoyable discussion. A toy signature computation follows below.
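As that toy illustration of signatures as statistics of a stream, here is a sketch of my own (not code from the paper) computing the depth-1 and depth-2 signature terms of a piecewise-linear path via Chen’s identity:

import numpy as np

def signature_depth2(path):
    # path: (n, d) array of points sampled along the stream
    d = path.shape[1]
    level1 = np.zeros(d)         # running sum of increments
    level2 = np.zeros((d, d))    # iterated integrals
    for delta in np.diff(path, axis=0):
        # Chen's identity: cross terms with the path so far, plus the
        # within-segment contribution 0.5 * (delta outer delta)
        level2 += np.outer(level1, delta) + 0.5 * np.outer(delta, delta)
        level1 += delta
    return level1, level2

# A short 2-d stream of five points
stream = np.array([[0.0, 0.0], [1.0, 0.5], [1.5, 1.5], [2.0, 1.0], [3.0, 2.0]])
s1, s2 = signature_depth2(stream)
print(s1)  # depth-1 terms: the total displacement of the stream
print(s2)  # depth-2 terms: the antisymmetric part gives the signed (Levy) area

The appeal as a feature map is that these terms do not change if the stream is re-sampled in time (reparameterization invariance).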

code here: https://github.com/patrick-kidger/Deep-Signature-Transforms

paper here: https://arxiv.org/pdf/1905.08494.pdf

Metamers Of Neural Networks

This paper was near and dear to me due to some of my past lives working in the areas of psychological and perceptual media models. Metamers are a psychophysical color match between two patches of light that have different sets of wavelengths: two patches that look identical to us in color but are made up of different physical combinations of wavelengths. In this paper the group generates “model metamers” to test the similarity between human and artificial neural network representations. They generated model metamers for natural stimuli by performing gradient descent on a noise signal, matching the responses of individual layers of image and audio networks to a natural image or speech signal. The resulting signals reflect the invariances instantiated in the network up to the matched layer. As with most things in machine learning, the team asked whether the nature of the invariances would be similar to those of humans, in which case the model metamers should remain human-recognizable regardless of the stage from which they are generated. In this case, the humans were divergent from the neural networks. We need more of this type of work on how perception affects machine learning outcomes, or possibly priors.

paper here: https://papers.nips.cc/paper/9198-metamers-of-neural-networks-reveal-divergence-from-human-perceptual-systems.pdf

Weight Agnostic Neural Networks

I particularly enjoyed this poster and the commentary “Animals have innate abilities…”. I also believe most of the animal kingdom is sentient, as well as operating on literally different wavelengths (spectrum etc). The paper demonstrates a method that can find minimal neural network architectures that perform several reinforcement learning tasks without weight training, ergo the title Weight Agnostic. In place of optimizing the weights of a fixed network, they sought to optimize instead for architectures that perform well over a wide range of weights. When I walked up to the poster I immediately thought of Algorithmic Information Theory (AIT) and how soft weights have been used for neural networks. AIT is based on Kolmogorov complexity: the complexity of a computable object is the minimum length of the program that can compute it. The paper goes into detail concerning the Minimal Description Length (MDL) of a program and the recent dusting off of these processes applied to larger deep learning nets. The poster did not reflect the transparency of the paper: the research is very focused on creating generalized network architectures, which IMHO is a step toward AGI, and it states that WANNs do not approach the performance of engineered CNNs. I also appreciated the overall frankness of the paper. Quote from the paper: “This paper is strongly motivated towards these goals of blending innate behavior and learning, and we believe it is a step towards addressing the challenge posed by Zador. We hope this work will help bring neuroscience and machine learning communities closer together to tackle these challenges.”

Interactive version of the paper here: https://weightagnostic.github.io/

Regular paper here: https://arxiv.org/pdf/1906.04358.pdf

Inducing Brain Relevant Bias in Natural Language Processing Models

This poster was part of a general theme that I saw throughout the conference: utilizing medical imaging devices to create better canonical models for machine learning. The paper shows that the relationship between language and brain activity learned by BERT (Bidirectional Encoder Representations from Transformers) during fine-tuning transfers across multiple participants. The paper goes on to show that, for some participants, the fine-tuned representations learned from both magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) are better for predicting fMRI than the representations learned from fMRI alone, indicating that the learned representations capture brain-activity-relevant information that is not simply an artifact of the modality. The model predicts the fMRI activity associated with reading arbitrary text passages well enough to distinguish which of two story segments is being read with 74% accuracy. That is impressive, and I believe we need more multi-modality papers of this nature and research.

Full site with paper data etc: http://www.cs.cmu.edu/~fmri/plosone/

A Robust Non-Clairvoyant Dynamic Mechanism for Contextual Auctions

This paper caught my eye as I spend a great deal of time researching agents in game-theoretic, mechanism-design situations. What really caught my eye was the terminology non-clairvoyant. I suppose if there were a method that was truly clairvoyant we wouldn’t be concerned with the robustness of said algorithms. Actually, it is a real definition: a dynamic mechanism is non-clairvoyant if the allocation and pricing rule at each period does not depend on the type distributions in the future periods. In many types of auctions, especially ad networks, the seller must rely on approximate or asymmetric models of the buyer’s preferences to effectively set auction parameters such as a reserve price. In mechanism design, you essentially have three vectors of input: [1] a collective decision problem, [2] a measure of quality to evaluate any candidate solution, and [3] a description of the resources (information) held by the participants. The paper presented a learned policy model and framework that could be applied in phases and possibly extrapolated to other types of applications. I personally think dynamic mechanism design has great applicability in the areas of distributed computing and distributed ledger platforms.

I also attended the NASA Frontier Development Lab (FDL) event, which was sponsored by Google, Intel and Nvidia. I was part of the NASA FDL AI Astronaut Health research project over the summer of 2019. The efforts, the technology, and most importantly the people are astounding. The event was standing room only, and several amazing conversations on the various projects with NASA FDL were had at the event.

Machine Learning For Space

I do hope you will continue to visit my site. If you do, you will notice I have a type of “disease” called bibliomania. As such I bought a book at the conference:

The future is distributed

So there you have it. While this probably was tl;dr, I hope you gave it a good scan while you were doing a pull request or two. I hope this has at least provided some insight into the conference.

\forall papers: https://papers.nips.cc/book/advances-in-neural-information-processing-systems-32-2019

Until Then,

#IWishYouWater

tctjr