From Physics to Meaningful Information
Physical states or processes can carry information when they are configured in regular patterns with a limited set of states per step. In such a configuration every step can carry up to $\log_2(|\text{states}|)$ bits of information. Anyone, or any machine, can read and count the bits, calculate the information entropy, and perform various other analyses on them. But interpreting what the information means requires knowledge in the form of models: models to decode the information and models of the domain the information is about.
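As a minimal sketch of this in Python (the signal string, states and function names are illustrative assumptions, not from the text), the per-step upper bound and the entropy actually observed in a stream can be computed like this:

```python
import math
from collections import Counter

def bits_per_step(num_states: int) -> float:
    """Maximum information per step for a signal with a fixed set of states."""
    return math.log2(num_states)

def shannon_entropy(sequence) -> float:
    """Average information per step, in bits, from observed state frequencies."""
    counts = Counter(sequence)
    total = len(sequence)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

signal = "ABABABABCA"  # hypothetical recording with states {A, B, C}
print(bits_per_step(3))          # upper bound: log2(3) ≈ 1.585 bits per step
print(shannon_entropy(signal))   # actual average, given observed frequencies
```

Note that both computations are purely mechanical; neither says anything about what the A's and B's mean.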
For example, the Voyager 1 spacecraft carries a golden record that encodes analog sounds and digital pictures. The essential steps for rendering that information are given in pictogram form on the record itself. NASA is assuming that generic scientific knowledge is sufficient to interpret the pictogram, which is itself a model and a form of information. From the pictogram, a finder can learn how to interpret the information on the record, the sounds and the images; and from those, learn about Earth, nature, human beings and our culture.
But how do systems or beings come about that are capable of knowledge, capable of models of the world, capable of interpreting analog or digital information? Through learning systems, which we can define as any system connected to an environment through input, output and feedback, where over time the output responses to inputs produce a smaller error signal in the feedback.
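A minimal sketch of that loop, assuming a one-parameter learner and a hidden target behaviour chosen purely for illustration (the names Learner, respond and adjust are hypothetical):

```python
class Learner:
    """Tracks a single parameter and nudges it to reduce the feedback error."""
    def __init__(self):
        self.weight = 0.0

    def respond(self, x: float) -> float:
        return self.weight * x

    def adjust(self, x: float, error: float, lr: float = 0.1) -> None:
        # Feedback: move the internal state against the error.
        self.weight -= lr * error * x

def environment(x: float) -> float:
    return 2.0 * x  # hidden behaviour the learner must come to match

learner = Learner()
for step in range(50):
    x = 1.0
    y = learner.respond(x)           # output response to input
    error = y - environment(x)       # feedback signal
    learner.adjust(x, error)         # over time, the error shrinks
print(round(learner.weight, 3))      # approaches 2.0
```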
Most learning algorithms adjust internal state after feedback so that future responses to similar inputs yield less error; artificial neural networks are a prime example. Another class, evolutionary algorithms, instead generates many variants and afterwards filters out the worst-scoring ones. Both cause knowledge to come about in the form of input recognizers paired with useful outputs, tuned to minimize error in the feedback signal.
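The first, error-correcting class is sketched above; an equally minimal evolutionary counterpart might look like this (the fitness function, population size and mutation scale are all illustrative assumptions):

```python
import random

def fitness(weight: float) -> float:
    # Negative squared error against the same hidden behaviour y = 2x.
    return -(weight * 1.0 - 2.0) ** 2

population = [random.uniform(-5, 5) for _ in range(20)]
for generation in range(30):
    # Score all variants and keep the better half, discarding the rest.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Refill the population with mutated copies of the survivors.
    population = survivors + [w + random.gauss(0, 0.1) for w in survivors]
print(round(max(population, key=fitness), 2))  # clusters around 2.0
```

Both sketches arrive at the same knowledge (weight ≈ 2.0) by different routes: one by correcting each error as it occurs, the other by generating variation and selecting against the feedback score.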
Another kind of learning doesn't need the environment to provide a feedback signal. Instead, future input serves as feedback: the system tries to predict its inputs and aims to minimize surprise1 in them. Doing so beyond memorization, and beyond relying on superficial features, requires creating knowledge in the form of models and concepts. Novel situations can then be interpreted using many concepts at different levels of abstraction, including concepts of how to learn what to do in novel situations. It is this kind of knowledge that NASA expects will interpret the pictograms on the golden record of Voyager 1 and learn about our planet.
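A minimal sketch of prediction-as-feedback, measuring surprise as $-\log_2 p(\text{next input})$. The bigram-count model and two-symbol alphabet are deliberately simple stand-ins for the richer models the text describes:

```python
import math
from collections import defaultdict

counts = defaultdict(lambda: defaultdict(int))

def predict(prev: str, nxt: str) -> float:
    """Probability the model assigns to `nxt` after `prev` (add-one smoothing)."""
    total = sum(counts[prev].values()) + 2  # alphabet {A, B} assumed
    return (counts[prev][nxt] + 1) / total

stream = "ABABABABABABABAB"  # hypothetical regular input stream
for prev, nxt in zip(stream, stream[1:]):
    surprise = -math.log2(predict(prev, nxt))  # no external error signal needed
    counts[prev][nxt] += 1                     # learn: predictions get sharper
    print(f"{prev}->{nxt}: {surprise:.2f} bits")
```

Run on the regular stream, the surprise falls from 1 bit toward 0 as the model captures the pattern; an irregular stream would keep it high.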
Knowledge, as we have used the term here, namely information that has captured something about the world, can thus be categorized in two broad groups2: input recognizers paired with outputs optimized by past events; and models that can be used to simulate, plan and solve problems. This maps onto teleonomy and teleology; onto the "error-correcting regulators" and model-based "good regulators" of Conant and Ashby3; onto the difference between "learners" and "solvers" of Geffner4; and onto the brain's "System 1" and "System 2" of Kahneman.
Yet how the brain implements "System 2", how it learns new models and uses them to solve problems in a general way, and how a computer system could do the same, remain open questions.
---
Both reinforcement learning and unsupervised learning, depending on how they are done, might lean towards the second category. ↩