The Input - Processing - Output paradigm

As long as we have had computers, they have needed input that is manipulated to obtain some output. This transformation principle is not exclusive to computers as such, because we find it throughout nature. For instance, a prism "manipulates" ordinary light to deliver a spectrum in the form we know as a rainbow. It does so by bending the distinct frequencies by different amounts, making clear what is input and what is output. Critics may say that it is just a decomposition of the whole into "things" that were hidden in the whole.

Whatever. With computers, the paradigm input - processing - output became clear. Since it has a long history in physics and cybernetics, the principle was rediscovered in computing when hardware became programmable by software: the IPO paradigm at the heart of cybernetics is commonly known as the "Systems Approach".

One may also encounter the equivalent form input - transformation - output: there a feedback component is defined as well, which either adjusts the form, quantity or quality of the inputs needed, or is used to parameterize the Processing. Which of the two applies is not always obvious, but we know about feedback, don't we?
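
As a toy illustration, feedback that parameterizes the Processing can be as simple as a loop that measures its own output and tunes a working parameter. The names, numbers and adjustment rule below are my own assumptions, a sketch rather than anything the paradigm prescribes:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. FEEDBACK.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  WS-TARGET      PIC 9(4)  VALUE 1000.
       01  WS-PRODUCED    PIC 9(4)  VALUE ZERO.
       01  WS-BATCH-SIZE  PIC 9(4)  VALUE 50.
       PROCEDURE DIVISION.
       MAIN-PARA.
           PERFORM UNTIL WS-PRODUCED >= WS-TARGET
      *        Processing: produce one batch of output.
               ADD WS-BATCH-SIZE TO WS-PRODUCED
      *        Feedback: the measured output parameterizes the
      *        next round of Processing by growing the batch.
               IF WS-PRODUCED < WS-TARGET
                   ADD 25 TO WS-BATCH-SIZE
               END-IF
           END-PERFORM
           DISPLAY "PRODUCED: " WS-PRODUCED
           STOP RUN.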

It is not always clear how to architect the principle in computers. In most circumstances we know what we want (what output) and design a process that requires input in a certain form. Creating systems with the IPO paradigm therefore starts with a detailed description of the output, which is used to design a transformation process that is fed with data as inputs. Feeding the input data to the computer was, at the very beginning, a human task.
The paradigm's name is misleading here. We start by scoping precisely what we want, the output. Then we design a proper process together with the needed inputs. The inputs would normally be obvious, but the level of sophistication that can be handled by the software process makes architecting a daunting task.
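
To make that concrete, here is a minimal sketch of a classic batch program with the Input - Processing - Output shape. The file names, the 80-byte record layout and the upper-casing transform are illustrative assumptions; the output description would normally dictate the real transform:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. IPODEMO.
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT IN-FILE  ASSIGN TO "INPUT.DAT"
               ORGANIZATION IS LINE SEQUENTIAL.
           SELECT OUT-FILE ASSIGN TO "OUTPUT.DAT"
               ORGANIZATION IS LINE SEQUENTIAL.
       DATA DIVISION.
       FILE SECTION.
       FD  IN-FILE.
       01  IN-REC   PIC X(80).
       FD  OUT-FILE.
       01  OUT-REC  PIC X(80).
       WORKING-STORAGE SECTION.
       01  WS-EOF   PIC X VALUE "N".
           88  END-OF-INPUT   VALUE "Y".
       PROCEDURE DIVISION.
       MAIN-PARA.
      *    Input: open both files and prime the first read.
           OPEN INPUT IN-FILE OUTPUT OUT-FILE
           READ IN-FILE
               AT END SET END-OF-INPUT TO TRUE
           END-READ
           PERFORM UNTIL END-OF-INPUT
      *        Processing: a trivial transform stands in for
      *        whatever the output description demands.
               MOVE FUNCTION UPPER-CASE(IN-REC) TO OUT-REC
      *        Output: write the transformed record.
               WRITE OUT-REC
               READ IN-FILE
                   AT END SET END-OF-INPUT TO TRUE
               END-READ
           END-PERFORM
           CLOSE IN-FILE OUT-FILE
           STOP RUN.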

Big Data seems to reverse this analysis technique and may well revolutionize these views, since it needs persistence and starts with the inputs, massively. We rarely worked the other way around: having inputs in a certain form, exploring the possible answers those inputs can deliver, and selecting the algorithms to handle that. Technology is the important driver here.

The adjusted version of the paradigm is to add Storage to it; that is also the current view. Cobol has a huge advantage here, since it is so enormously good at the storage layer, as opposed to the trendy languages which seem to ignore storage and dub it the "persistence layer".
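
As an illustration of that strength, storage is part of the language itself: an indexed (keyed) file is declared and read in a few lines, with no external persistence framework. The file name, key and record fields below are assumptions made for the sketch:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. KEYEDIO.
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
      *    The storage layout is declared in the program: an
      *    indexed file with a primary key.
           SELECT CUSTOMER-FILE ASSIGN TO "CUSTMAST"
               ORGANIZATION IS INDEXED
               ACCESS MODE  IS DYNAMIC
               RECORD KEY   IS CUST-ID.
       DATA DIVISION.
       FILE SECTION.
       FD  CUSTOMER-FILE.
       01  CUSTOMER-REC.
           05  CUST-ID       PIC 9(6).
           05  CUST-NAME     PIC X(30).
           05  CUST-BALANCE  PIC S9(7)V99 COMP-3.
       PROCEDURE DIVISION.
       MAIN-PARA.
           OPEN I-O CUSTOMER-FILE
      *    Direct access by key: set the key, then read.
           MOVE 123456 TO CUST-ID
           READ CUSTOMER-FILE
               INVALID KEY DISPLAY "NO SUCH CUSTOMER"
               NOT INVALID KEY DISPLAY CUST-NAME
           END-READ
           CLOSE CUSTOMER-FILE
           STOP RUN.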
