The Future Is Farther Away Than Expected



This is the time of year when I tire of seeing end-of-the-year wrap-ups and best-of-the-year lists. I find predictions about the year(s) to come particularly annoying. I'm tired of hearing people ask "Where are the things I saw on The Jetsons and in movies about the future?"

Another popular year-end news story is to look back on past predictions about our present and see if they got anything correct. For example, Brian Williams did a 2-minute story on NBC way back in 2007 about the futuristic year 2017. Watching it, I thought that even when they got things right, the results just feel wrong. Not wrong as in "incorrect," but wrong in the sense of illicit or reprehensible.



They got some predictions correct, but their focus is a kind of technological, biometric nightmare: ubiquitous facial recognition and microchip ID implants (more common in pets than people in 2016), building on the iris scans and fingerprint IDs (like the one on your phone) that were becoming viable in 2007.

As with most predictions, the writers assumed change would happen faster than it actually does.

Have you found really easy hospital patient identification to be a reality? My doctor is still trying to scan my old records as PDF files. Are you free of needing your wallet and keys? Yes, some people (not the majority) use their phones to pay and drive cars without keys, but change comes more slowly than we expect. That is not so much because we can't create the new technology. It is about adoption.

The smartphone is a good example of a technology that had a rapid adoption rate. It was accepted and purchased much faster than other technologies.

Are you still thinking that a drone will deliver your pizza and Amazon order in 2017? When will the roads be filled with mostly driverless cars?

Relax, you have plenty of time.



(This post also appeared on my One-Page Schoolhouse website)

Listening to Wikipedia


Screenshot of the Hatnote visualization of Wikipedia edits



There is a wonderful STEAMy mashup application online that lets you listen to the edits being made right now on Wikipedia. How? It assigns sounds to the edits being made. If someone makes an addition, it plays a bell. If someone makes a subtraction from an entry, you'll hear a string plucked. The pitch changes according to the size of the edit - the larger the edit, the deeper the note.
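Out of curiosity, here is a minimal sketch of how a mapping like that might work, using the browser's standard Web Audio API. This is my own illustration, not Hatnote's actual code (the site uses HowlerJS), and the frequency range and decay time are assumptions:

```typescript
// Map an edit's size to a pitch: larger edits produce deeper notes.
// Note: browsers may require a user gesture before audio will play.

const audioCtx = new AudioContext();

function playEditTone(changeSize: number, isAddition: boolean): void {
  const osc = audioCtx.createOscillator();
  const gain = audioCtx.createGain();

  // Larger edits -> lower frequency. Clamp to keep it in an audible range.
  const magnitude = Math.min(Math.abs(changeSize), 5000);
  const freq = 1000 - (magnitude / 5000) * 800; // 1000 Hz down to 200 Hz

  // A bell-like sine for additions; a pluckier triangle for subtractions.
  osc.type = isAddition ? "sine" : "triangle";
  osc.frequency.value = freq;

  // Short exponential decay so the tone rings like a struck note.
  gain.gain.setValueAtTime(0.5, audioCtx.currentTime);
  gain.gain.exponentialRampToValueAtTime(0.001, audioCtx.currentTime + 1.5);

  osc.connect(gain).connect(audioCtx.destination);
  osc.start();
  osc.stop(audioCtx.currentTime + 1.5);
}
```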

The result is a pleasantly tranquil, random but musical composition that reminds me of some music from Japan and China.

You can also watch recent changes. At the top of the page, green circles show edits being made by unregistered contributors, and purple circles mark edits performed by automated bots. White circles are brought to you by registered users.

If you hear a swelling string sound, it means that a new user has joined the site. (You can welcome him or her by clicking the blue banner and adding a note on their talk page.)

You can select a language version of Wikipedia to listen to. When I selected English Wikipedia edits at midday ET, there were about 100 edits per minute, resulting in a slow but steady stream of sound. You can select multiple languages (refresh the page first) if you want to create a cacophony of sounds. You can listen to the very quiet sleeping side of the planet or the busier awake and active side. The developers say that there is something reassuring about knowing that every user makes a noise, every edit has a voice in the roar.
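If you want to tap the raw feed yourself, Wikimedia publishes a public stream of recent changes. Here is a sketch that listens to it and hands each English Wikipedia edit to a tone function like the playEditTone example above. The field names follow Wikimedia's documented recentchange event schema, but the rest is my own illustration:

```typescript
// Listen to live edits via Wikimedia's public EventStreams endpoint
// (server-sent events) and sonify each one.

const source = new EventSource(
  "https://stream.wikimedia.org/v2/stream/recentchange"
);

source.onmessage = (event: MessageEvent) => {
  const change = JSON.parse(event.data);

  // Keep only actual edits to English Wikipedia.
  if (change.wiki !== "enwiki" || change.type !== "edit") return;

  // length.old and length.new are the page sizes before and after the edit.
  const delta = (change.length?.new ?? 0) - (change.length?.old ?? 0);
  playEditTone(delta, delta >= 0);
};
```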

The site is at listen.hatnote.com, and the notes there tell us that Hatnote grew out of a 2012 WMF Group hackathon. It is built using D3 and HowlerJS and is based on BitListen by Maximillian Laumeister. The source code is available on GitHub. It was built by Hatnote's Stephen LaPorte and Mahmoud Hashemi.

Audiation is a term used to refer to the comprehension and internal realization of music, or the sensation of an individual hearing or feeling sound when it is not physically present. Musicians previously used terms such as aural perception or aural imagery to describe this concept, though aural imagery would imply a notational component while audiation does not necessarily do so. Edwin Gordon suggests that "audiation is to music what thought is to language," and his research is based on similarities between how individuals learn language and how they learn to make and understand music.

As the Hatnote site points out, Wikipedia is a top-10 website worldwide with hundreds of millions of users. It includes more than a dozen projects, including Wiktionary, Commons, and Wikibooks, and is available in more than 280 languages. Perhaps more amazingly, it has only about 200 employees and relies mostly on community support for content, edits - and donations. Compare that business model to other top-100 websites worldwide.

 


Technobiophilia

Technology pulls many "new" words from other fields. There are a surprising number of terms we get from nature. I was looking through a book by Sue Thomas called Technobiophilia: Nature and Cyberspace that has me thinking about terms we use in new ways in technology.

One example is the digital version of an ecosystem, which is like a tree with branching directories. A digital ecosystem grows out of a (perhaps buried) root folder.
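The metaphor maps neatly onto code. Here is a minimal sketch, using Node.js's built-in fs and path modules, that walks a directory "ecosystem" from its root folder and prints each branch (the starting path is just a placeholder):

```typescript
// Walk a directory tree from its root, printing each branch with
// indentation to show the branching structure.

import { readdirSync, statSync } from "fs";
import { join } from "path";

function walkTree(root: string, depth = 0): void {
  for (const name of readdirSync(root)) {
    const fullPath = join(root, name);
    console.log("  ".repeat(depth) + name);
    if (statSync(fullPath).isDirectory()) {
      walkTree(fullPath, depth + 1); // recurse into the branch
    }
  }
}

walkTree("."); // start at the (perhaps buried) root folder
```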

Another reworked term from nature is the online use of spider. A digital spider is a program that visits websites and reads their pages and other information in order to create entries for a search engine index like Google or Yahoo. All the major search engines on the Web use these programs, which are also appropriately known by the spidery name of web crawlers. (They are also known as bots, but that's pure cyber.)
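To make that concrete, here is a toy sketch of what a spider does - fetch a page, harvest its links, and queue them for later visits. It is an illustration only, not any real search engine's crawler, and the regex-based link extraction is a deliberate simplification:

```typescript
// A toy web crawler: breadth-first over discovered links, up to a page limit.

async function crawl(startUrl: string, maxPages = 10): Promise<void> {
  const queue: string[] = [startUrl];
  const visited = new Set<string>();

  while (queue.length > 0 && visited.size < maxPages) {
    const url = queue.shift()!;
    if (visited.has(url)) continue;
    visited.add(url);

    const html = await (await fetch(url)).text();
    console.log(`Indexed ${url} (${html.length} bytes)`);

    // Naive link extraction; a real spider would parse the HTML properly.
    for (const match of html.matchAll(/href="(https?:\/\/[^"]+)"/g)) {
      queue.push(match[1]);
    }
  }
}

crawl("https://example.com").catch(console.error);
```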

The book also reminded me that the digital world is full of watery metaphors. We follow the Twitter stream. We surf the web. We listen to torrents of music and meet at online watering holes. We sometimes swim in seas and oceans of data.

Do you have a favorite tech term pulled from nature, or another field?