What, No EDI?

Yes, Geronimo, this is no longer Kansas. I am amazed, and maybe I haven’t been keeping abreast of what is actually happening in the so-called Enterprise world (even though I have been working in it for a while). Here is my confusion. If you look up the definition of Electronic Data Interchange (EDI), you will find that it “is used to transfer electronic documents or business data from one computer system to another computer system without human intervention.” Given that (and here is where I may be getting confused), many companies, for several disparate reasons, are asking to scale – writ large – these batch-processing EDI systems – without changing anything – just by adding a SOA layer on top of the existing RDBMS architecture. Really? Now, recently I wrote a blog about map-reducing machine learning algorithms and mentioned NoSQL architectures; Nathan Hurst wrote a great blog that takes a page out of Brewer’s CAP theorem for a visualization of NoSQL and RDBMS systems:

[Figure: Nathan Hurst’s CAP triangle – RDBMS and NoSQL systems placed by the two guarantees each favors.]
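As a rough illustration of that triangle (mine, not Hurst’s – the system placements below follow common readings of his visual and are illustrative only): pick any two of Consistency, Availability, and Partition tolerance, and the remaining corner tells you which family of datastores you are shopping in.

```python
# A toy sketch of the CAP trade-off the triangle visualizes: a distributed
# datastore gets, at best, two of Consistency (C), Availability (A), and
# Partition tolerance (P). Placements are illustrative, not authoritative.

CAP_TRIANGLE = {
    frozenset("CA"): ["MySQL", "PostgreSQL"],           # classic single-node RDBMS
    frozenset("CP"): ["HBase", "MongoDB", "Redis"],     # consistent under partitions
    frozenset("AP"): ["Cassandra", "CouchDB", "Riak"],  # available, eventually consistent
}

def candidates(first: str, second: str) -> list[str]:
    """Return example systems that favor the two requested CAP properties."""
    return CAP_TRIANGLE.get(frozenset((first.upper(), second.upper())), [])

if __name__ == "__main__":
    # If the system must stay available during a network partition,
    # strong consistency is the guarantee you trade away:
    print(candidates("A", "P"))  # ['Cassandra', 'CouchDB', 'Riak']
```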

Where most of these companies and entities get into trouble is in not wanting to change the fundamental data model. Many of these “conversions” – or the wanting to convert – start with understanding the data that resides in these decades-old RDBMS, or, in general, the basic process of what is happening within these enterprise systems:

[Figure: the basic process at work within these enterprise systems.]
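If I were starting one of these “conversions,” the very first artifact would be a profile of what is actually sitting in those tables. Here is a minimal sketch; the table, columns, and sqlite3 stand-in are hypothetical, chosen only so the example runs anywhere:

```python
# Profile a legacy table before any "SOA layer" goes on top: row count,
# plus null and distinct counts per column. sqlite3 stands in for whatever
# decades-old RDBMS you actually face.
import sqlite3

def profile_table(conn: sqlite3.Connection, table: str) -> None:
    """Print a quick statistical profile of one table."""
    total = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    print(f"{table}: {total} rows")
    columns = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    for col in columns:
        nulls = conn.execute(
            f"SELECT COUNT(*) FROM {table} WHERE {col} IS NULL").fetchone()[0]
        distinct = conn.execute(
            f"SELECT COUNT(DISTINCT {col}) FROM {table}").fetchone()[0]
        print(f"  {col}: {nulls} nulls, {distinct} distinct values")

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, sku TEXT, shipped TEXT)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                     [(1, "A-100", None), (2, "A-100", "2010-01-02"),
                      (3, None, "2010-01-03")])
    profile_table(conn, "orders")
```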

Basically, we are running into a problem where the fundamental issues are: (1) the data model, (2) creation of a set of canonical rules, and (3) choosing an architecture based on the triangle mentioned above. Most companies do not want to invest in understanding, from a programmatic perspective, what is in the database. Most companies keep adding data, stored procedures, key-value pairs, and columns, and call that scale. Then they realize that 7M, 10M, 50M, 200M people are going to hit this system in a stochastic manner, and the answer is to hire more analysts, more coders, more people. That is not scale. Scale is creating a canonical set of rules, reducing the data model to a core set of attributes, and applying machine learning for data hygiene and integrity.

Then, naturally, most people inquire about data accuracy. Well, it is give and take; humans are not 100% accurate 100% of the time. I would rather take an 80/20 rule and get the data results faster than operate with a focus on 100% correct data 100% of the time. It is more natural to operate on the 20% of outliers and re-train the system based on those anomalies – a sketch of that loop follows below. Remember, the first step is acknowledgment of the actual problem. As I said in a much earlier blog on adaptability, homeostasis is the death knell of most companies. You must build an evolutionary or revolutionary strategy into your technology roadmap, or at least plan with some contingencies. Thus this brings into question: what exactly is enterprise software?
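To make that 80/20 loop concrete, here is a hedged sketch: canonicalize records down to a core set of attributes, auto-accept the bulk, and flag statistical outliers (here via a modified z-score on a robust median/MAD baseline) for human review and re-training. The attribute names and threshold are assumptions for illustration, not a prescribed rule set.

```python
# Sketch of the 80/20 data-hygiene loop: reduce each record to canonical
# attributes, accept the bulk automatically, and route anomalies to review.
from statistics import median

def canonicalize(record: dict) -> dict:
    """Reduce a raw record to a core set of canonical attributes (assumed)."""
    return {
        "sku": (record.get("sku") or "").strip().upper(),
        "qty": int(record.get("qty") or 0),
    }

def split_outliers(records: list[dict], threshold: float = 3.5):
    """Return (accepted, flagged) using a modified z-score on quantity."""
    canon = [canonicalize(r) for r in records]
    qtys = [r["qty"] for r in canon]
    med = median(qtys)
    mad = median(abs(q - med) for q in qtys) or 1.0  # guard against zero MAD
    accepted, flagged = [], []
    for r in canon:
        score = 0.6745 * abs(r["qty"] - med) / mad   # modified z-score
        (flagged if score > threshold else accepted).append(r)
    return accepted, flagged

if __name__ == "__main__":
    raw = [{"sku": " a-100 ", "qty": "5"}, {"sku": "A-100", "qty": "7"},
           {"sku": "b-200", "qty": "6"}, {"sku": "B-200", "qty": "9000"}]
    ok, review = split_outliers(raw)
    print(f"auto-accepted: {len(ok)}, flagged for review/re-training: {len(review)}")
```

The point is not the particular statistic; it is that anomalies get routed to humans and fed back into the rules, instead of blocking the 80% of the data that was fine all along.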

Until Then,

Go Big Or Go Home!

//ted
