From my recent IEEE Spectrum article:
Moving into the speculative, what’s the near-future potential of wearable point-of-view computers? Future versions of Glass will enable a wide range of augmented cognition applications—combining the natural strengths of the human brain, the massive computational power of the cloud, cheap storage, and developments in machine learning.
For example, once we deal with the (admittedly nontrivial) privacy constraints around continuously recording video with Glass, hardware iterations with improved battery life could record everything you see and hear and upload it to the cloud. There, machine-learning algorithms would sift through the data, extract salient features, and generate transcripts, making your audiovisual memory searchable.
Imagine being able to search through and summarize every conversation you ever had, or extract meaningful statistics about your life from aggregated visual, aural, and location data.
Ultimately, given enough time, those digital memory constructs will evolve into what can be loosely described as our external brains in the cloud—imagine a semiautonomous process that knows enough about you to act on your behalf in a limited fashion.
Significant challenges remain in the creation of such external brains, but it's hard to imagine a future in which this doesn't happen: the required technological foundations are either already in place or expected to become available in the immediate future.
To wrap up with an anecdote: A couple of days ago I was stopped by a stranger who asked me, “What can you see through Google Glass?” To which I replied, only partly tongue in cheek, “I can see the future.”
You can read the full article here.