Sound-Centric Computing

Let's talk about the profound impact of sound. Sound is deceptively powerful because, unlike eyelids, we do not have earlids. That means we are immersed in a sound field 24 hours a day, seven days a week, because we cannot turn it off. Constant inputs that cannot be shut out eventually draw less attention than intermittent stimuli. Why does this matter? Because engagement is needed for relevancy, and no one pays attention for long to anything deemed irrelevant. Sound has to be processed in real time: the human auditory system has orders of magnitude fewer neurons than our visual system, which handles temporal information by integrating snapshots. We can call up a visual snapshot in memory and examine it, but we cannot do the same with a sound.

And what does this have to do with innovation? A sound stimulus demands a more immediate response than a visual one because it arrives in real time. Although we cannot see around corners, we can hear things we cannot see, and that has saved us from being eaten far more often than seeing things we could not hear, which were usually much further away. Sound can be more local than sight.

Innovators tend to have dreams of two sizes: initially small ones they can implement alone, and eventually larger ones they need help with. And you have to raise friends before you raise money or other resources; if people do not want to spend time with you, meaningful relationships are less likely to develop. The same is true of the human-machine interface. Until sound was integrated into our digital systems, they were relevant mostly to information processors rather than emotion processors, and frankly most of society spends far more time processing emotion than information. Even seemingly rational, linear, data-driven people still make most decisions based on how they feel, and those who do not are regarded not as human but as machine-like.

Sound was far more cost-effective to integrate than video because much less data had to be captured, stored, and processed, and the transducers involved also initially cost much less. A speaker was far less expensive than a video monitor, and microphones are still less expensive than cameras. This made the evolving engagement of digital systems more dependent on sound than sight.
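To make that data-volume gap concrete, here is a rough back-of-envelope comparison in Python. The capture parameters (CD-quality audio, uncompressed standard-definition video) are illustrative assumptions, not figures from the text; real systems use compression, but the raw disparity is what shaped early hardware costs.

```python
# Back-of-envelope comparison of raw (uncompressed) data rates.
# Parameters are illustrative assumptions; actual rates vary with codecs.

# CD-quality audio: 44,100 samples/s, 16 bits/sample, 2 channels
audio_bps = 44_100 * 16 * 2            # ~1.4 Mbit/s

# Standard-definition video: 640x480 pixels, 24 bits/pixel, 30 frames/s
video_bps = 640 * 480 * 24 * 30        # ~221 Mbit/s

print(f"audio: {audio_bps / 1e6:.1f} Mbit/s")
print(f"video: {video_bps / 1e6:.1f} Mbit/s")
print(f"video/audio ratio: {video_bps / audio_bps:.0f}x")
```

Even under these modest assumptions, raw video carries roughly 150 times more data than raw audio, which is why speakers and microphones reached consumer devices long before affordable cameras and displays did.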

But this process has not reached its ultimate destination of enabling parallel processing with computers, which is one reason more people have phones than computers, and why they care more about their phones than their computers. Humans are social creatures who want to interact with others as a primary source of meaning creation. The bottom line is that screen-based computing is neither hands-free nor eyes-free, but sound-centric computing would be both, as our smartphones already foreshadow.

Until our hands and eyes are free, we cannot truly parallel process the way we can when we enter into conversation with groups of people. Sound-centric computing permits the ultimate level of multitasking we crave in our relationship with the universe and with others. This is not simply replacing the keyboard with a microphone; it is much more. It is an operating system that is sound-centric, eliminating the need to explore and memorize menus, and forcing our digital systems to adapt to our context instead of forcing us to adapt to theirs, which are generally inferior for humans. Emotional context tends to be more sound-centric than visual. A minimal sketch of what that could look like follows.
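As a minimal sketch of what "no menus to explore or memorize" might mean in code, consider dispatching on spoken intent rather than on navigation. The intents, handlers, and the recognize() stub below are hypothetical illustrations, assuming a real system would substitute an actual speech-to-intent model.

```python
# Minimal sketch: a sound-centric interface dispatches on spoken intent
# instead of making the user navigate a menu tree. All names here are
# hypothetical; recognize() stands in for a real speech-to-intent model.

def recognize(utterance: str) -> str:
    """Naive keyword matching as a placeholder for speech understanding."""
    text = utterance.lower()
    if "weather" in text:
        return "get_weather"
    if "call" in text:
        return "place_call"
    return "unknown"

HANDLERS = {
    "get_weather": lambda: "It is 18 degrees and clear.",
    "place_call":  lambda: "Calling your most recent contact.",
    "unknown":     lambda: "Sorry, could you rephrase that?",
}

def respond(utterance: str) -> str:
    # No menu to explore or memorize: the system adapts to what was said.
    return HANDLERS[recognize(utterance)]()

print(respond("What's the weather like?"))
print(respond("Call Mom"))
```

The design point is the inversion of adaptation: the user speaks in their own context, and the system maps that onto its capabilities, rather than the user learning the system's hierarchy.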