FastCompany: Lucidscape Brings On “The Big Bang Of The Next Internet”


Today, you probably know the Internet as a series of web pages, tweets, and lots of email. But with the rise of virtual reality headsets, companies like Facebook think that will change. You might soon “browse the web” in a place called the metaverse, a second reality rendered in 3-D.

Lucidscape is a company that’s playing with what such an Internet might look like. They’re building an open-source 3-D graphics engine that can be shared across servers. In other words, rather than rendering a website or video game on a single computer at a time (as most graphics engines do today), all of the world’s servers could combine to construct one giant, seamless world, where millions of us may flock, virtually, to see a concert or walk the streets of a digital Times Square.

In a series of astounding stress tests, Lucidscape ramped up to simulating 10 million people traversing a shared 3-D space spread across 800 servers. The result looks like you’re flying at light speed through a grid of planets, or maybe visiting the latest Yayoi Kusama exhibit.

Full Article

The Operating System of the Metaverse


At Lucidscape we are building a new kind of massively distributed 3D simulation engine to power the vast network of interconnected virtual worlds known as the Metaverse.

Handling massive virtual worlds requires a fundamentally different approach to engine design, one that prioritizes the requirements of scale, thoroughly embraces distributed computing and many-core architectures, and is unencumbered by the legacy decisions that hold current engines back.

We have recently conducted our first large-scale test, in which we simulated the interactions of 10 million participants on a cluster of over 800 servers.

You can read more about the test here.

Building The Future of Virtual Reality

Lucidscape

Introducing my new venture, Lucidscape:

Our mission is to deliver the foundations of the metaverse, a vast network of connected virtual worlds where data is tangible and autonomous programs roam free.

We have a different vision for the web of the future. Instead of pages connected by links, we envision a web of immersive worlds connected by portals. Connected but independent, each world is a sovereign simulation inhabited by users and programs alike.

Our technology is a re-imagining of the fundamental technologies employed by games and virtual reality applications. Inspired by the decentralized design philosophy of the web, we built a new kind of collaborative real-time computing platform to power a vast web of interconnected virtual worlds.

Head over to our official website for more info: http://lucidscape.com

I am in Make Magazine #38: High-Tech DIY


Furlan was another maker who didn’t want to wait for the “official” version of Glass. He created a very respectable homebrew HMD, also using the MyVu Crystal for the optics. The display is mounted on a pair of safety glasses with a Looxcie Bluetooth camera, and uses an iPod Touch as the brain. Furlan passed on voice control, instead relying on the iPod’s touchscreen for navigation, with a custom app to automate and aggregate common notifications (Facebook, Twitter, stocks, email, etc.) so he wouldn’t have to constantly take the device out of his pocket.

Furlan says that though he started out skeptical of the technology, after wearing it for a while he began to feel attached — taking it off produced a noticeable sense of loss. Even though his build is crude by Glass standards, it still gave him a taste of what’s to come. You can read in more detail about Furlan’s build on spectrum.ieee.org.

Through the Eyes of a Robot: Memories of Places I Have Never Been


It is not every day that you get the chance to experience something for the very first time: last week I had the opportunity to attend RoboBusiness 2013 (most appropriately) by robot. I “beamed” into a Suitable Technologies Beam remote-presence robot and had a very positive experience interacting with everyone at the conference, human and machine alike.

Suitable Technologies did a great job with the Beam: the user interface is great and the robot handles very well. One-on-one conversations felt natural despite the loud environment; multi-party conversations were challenging at times, but still workable.

Interestingly, interacting with remote locations through a mobile proxy body can be a very peculiar experience because you may end up with vivid memories of places you have never been to.

Watching a robot demonstration, through the eyes of another robot

Being pitched by an exhibitor

Augmented Cognition with Google Glass


From my recent IEEE Spectrum article:

Moving into the speculative, what’s the near-future potential of wearable point-of-view computers? Future versions of Glass will enable a wide range of augmented cognition applications—combining the natural strengths of the human brain, the massive computational power of the cloud, cheap storage, and developments in machine learning.

For example, once we deal with the (admittedly nontrivial) privacy constraints around continuously recording video with Glass, hardware iterations with improved battery life could record everything you see and hear and upload it to the cloud, where machine-learning algorithms would sift through the data, extract salient features, and generate transcripts, thus making your audiovisual memory searchable.

Imagine being able to search through and summarize every conversation you ever had, or extract meaningful statistics about your life from aggregated visual, aural, and location data.

Ultimately, given enough time, those digital memory constructs will evolve into what can be loosely described as our external brains in the cloud—imagine a semiautonomous process that knows enough about you to act on your behalf in a limited fashion.

Even though there are significant challenges ahead for the creation of such external brains, it’s hard to imagine a future in which this doesn’t happen, once you consider that the required technological foundations are either already in place or are expected to become available in the immediate future.

To wrap up with an anecdote: A couple of days ago I was stopped by a stranger who asked me, “What can you see through Google Glass?” To which I replied, only partly tongue in cheek, “I can see the future.”

You can read the full article here.
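
As a rough, hypothetical illustration of the searchable-memory idea from the excerpt above, here is a minimal sketch of a time-stamped transcript index queried by keyword. The data and helper names are invented, and the speech-to-text stage the article describes is assumed rather than implemented:

```python
# Toy sketch of a searchable audiovisual memory index. The transcripts,
# timestamps, and naive keyword search are invented for illustration; a
# real pipeline would add the speech-to-text and feature-extraction
# stages described above.
from dataclasses import dataclass

@dataclass
class Memory:
    timestamp: str   # when the snippet was recorded
    transcript: str  # text produced by an assumed speech-to-text stage

memories = [
    Memory("2013-02-01T09:15", "left the car keys on the kitchen counter"),
    Memory("2013-02-01T12:40", "lunch conversation about the HMD prototype"),
]

def search(index, query):
    """Return memories whose transcript contains every query term."""
    terms = query.lower().split()
    return [m for m in index if all(t in m.transcript.lower() for t in terms)]

for hit in search(memories, "car keys"):
    print(hit.timestamp, "->", hit.transcript)
```

A real system would replace the keyword match with the machine-learning feature extraction described above, but the shape of the index stays the same.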

Do-It-Yourself Virtual Reality

DIY Virtual Reality Headset by a member of the MTBS3D community

Excerpts from my recent interview with Vice:

Beyond the Rift: How the Oculus Kick-Started a New DIY Virtual Reality Movement

On his website, Rod Furlan describes himself as an artificial intelligence researcher, a quantitative analyst, and an alumnus of the Singularity University. He also makes things: personal augmentation devices, artificial intelligences, and virtual reality kits. The last is arguably what Furlan is best known for: In 2012, Furlan began building his own version of the Oculus Rift. By doing so, he helped catalyze the DIY virtual reality movement.

“DIY VR is here to stay, now that we have all the parts we need available at acceptable cost,” Furlan enthuses. “We can also expect DIY projects to outspec commercial products because independent makers are not bound to market cycles. For example, the team at MTBS3D is already working on a DIY design with a 2,048 by 1,536 resolution, which is a gigantic improvement in per-eye resolution vs. the Rift.”

“We are living in very interesting times. The tools of creation are becoming more accessible at an incredible pace,” Furlan says. “We are going to start seeing more and more amateur makers building incredible devices.”

Read the full article and the DIY VR building guide


Streaming Virtual Reality from the Cloud

A while ago I was working on a fully immersive stereoscopic “remote head”. On one side, the user wears an HMD; on the other, a servo-actuated stereoscopic camera is programmed to match the orientation of the user’s head-tracking device. Sadly, even with very fast servos it wouldn’t be possible to move the camera fast enough to accurately match the user’s perspective. The head-tracking lag would be quite disorienting, unacceptably so even before we factor in the network lag.

For the second prototype, I decided to use a monoscopic 360-degree camera instead. The remote head would transmit the whole image sphere to the user’s machine, and I would clip the viewport on the client side using the tracker information, effectively eliminating head-tracking lag by doing it locally. The overall experience should be great even though the video feed from the remote head could be several milliseconds behind real time.

And here is how all of this intersects with VR: a cloud-VR server could render a 360-degree image sphere around the player and transmit the whole frame to the client, which would then clip the viewport based on the orientation of the user’s head. It could even be done adaptively to save bandwidth: instead of transmitting the whole image sphere, the server could send only the portion the user is likely to look at within the next N milliseconds, sending the rest at a reduced frame rate. Since the viewport is clipped by the client, the user would still be able to look around at 60 fps. Input-to-display lag would still exist, but developers could overcome at least some of it by designing around this limitation.
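
To make the client-side clipping concrete, here is a minimal sketch of the idea. The function and parameter names are mine, the frame is assumed to be equirectangular, and this is a flat crop rather than a true spherical reprojection:

```python
# Minimal sketch of client-side viewport clipping from a streamed
# equirectangular frame. Names and layout are illustrative assumptions.
import numpy as np

def clip_viewport(frame, yaw_deg, pitch_deg, fov_deg=90.0):
    """Crop a viewport out of an equirectangular image sphere.

    frame: H x W x 3 array covering 360 (yaw) x 180 (pitch) degrees.
    yaw_deg, pitch_deg: latest sample from the head tracker.
    """
    h, w, _ = frame.shape
    vp_w = int(w * fov_deg / 360.0)
    vp_h = int(h * fov_deg / 180.0)
    x0 = int((yaw_deg % 360.0) / 360.0 * w)          # horizontal offset
    y0 = int((90.0 - pitch_deg) / 180.0 * h) - vp_h // 2
    y0 = max(0, min(h - vp_h, y0))                   # clamp at the poles
    cols = np.arange(x0, x0 + vp_w) % w              # wrap around 360 deg
    return frame[y0:y0 + vp_h][:, cols]

# The client re-clips at 60 fps with the newest tracker data, even if
# the streamed sphere itself is a few frames behind real time.
sphere = np.zeros((1024, 2048, 3), dtype=np.uint8)   # stand-in frame
view = clip_viewport(sphere, yaw_deg=30.0, pitch_deg=-10.0)
```

Because the crop happens locally against the newest tracker sample, head-tracking latency is bounded by the display loop rather than the network round trip.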

The potential end game could be something like a cloud-powered, Oculus Rift-style HMD with no console or PC required – in other words, no-hassle VR that is just plug and immerse. The required tech is already available: both NVIDIA and AMD have announced support for cloud GPU rendering, OculusVR is finally shipping the $300 Rift development kits worldwide, and all the required client-side processing could easily be handled by a $100 Android board.

The Machines Are Getting Better. But So Are You.


I was just mentioned in a great article by The Huffington Post:

Google Glass’s design “lets Glass record its wearer’s conversations and surroundings and store those recordings in the cloud; respond to voice commands, finger taps, and swipes on an earpiece that doubles as a touch pad; and automatically take pictures every 10 seconds,” explains IEEE Spectrum’s Elise Ackerman. A concept video for the device released by Google showed a man using the glasses to video chat with his girlfriend, respond to messages, get directions and learn about people and places he can’t immediately see.

Artificial intelligence researcher Rod Furlan speculates data gathered by Glass could “eventually be able to search my external visual memory to find my misplaced car keys.” Facial recognition could one day help you avoid the awkwardness of forgetting names, and object recognition could alert you to calorie counts of the sugary snack you’re about to eat.

Google Glass promises to be not only a communication device for answering emails or sharing photos, but a kind of personal assistant and second mind.

The goal, according to Google Glass project head Babak Parviz, is to someday “make [accessing information] so fast that you don’t feel like you have a question, then have to go seek knowledge and analyze it, but that it’s so fast you feel like you know it … We want to be able to empower people to access information very quickly and feel knowledgeable about certain topics.”

In a 2004 interview, Google co-founder and CEO Larry Page asked the world to “imagine your brain being augmented by Google.” Nine years later, we no longer have to imagine that. This feeling that Google Glass can enhance the wearer’s mind isn’t PR spin, but something to which users of the device can attest.

Furlan, who created a homemade pair of Google Glass-like specs that could stream emails, Twitter posts and more to a lens over his eye, told IEEE Spectrum that though he initially suffered from information overload, he now feels “impoverished” when he takes off the device. Evernote CEO Phil Libin predicts, based on his own experience with Google’s glasses, that in three years’ time, gazing upon a world without the additional information offered by a Google Glass device will seem “barbaric.”

Bianca Bosker, Executive Tech Editor, The Huffington Post

Build Your Own Google Glass


Excerpt from my recent IEEE Spectrum article:

A wearable computer that displays information and records video

By ROD FURLAN  /  JANUARY 2013

Last April, Google announced Project Glass. Its goal is to build a wearable computer that records your perspective of the world and unobtrusively delivers information to you through a head-up display. With Glass, not only might I share fleeting moments with the people I love, I’d eventually be able to search my external visual memory to find my misplaced car keys. Sadly, there is no release date yet. A developer edition is planned for early this year at the disagreeable price of US $1500, for what is probably going to be an unfinished product. The final version isn’t due until 2014 at the earliest.

But if Google is able to start developing such a device, it means that the components are now available and anyone should be able to follow suit. So I decided to do just that, even though I knew the final product wouldn’t be as sleek as Google’s and the software wouldn’t be as polished.

Most of the components required for a Glass-type system are very similar to what you can already find in a smartphone—processor, accelerometers, camera, network interfaces. The real challenge is to pack all those elements into a wearable system that can present images close to the eye.

You can read the full article here.

Coverage around the web:
Forbes – “Google Glass Project In Flux”
MIT Technology Review – “The Latest on Google Glass”
Huffington Post – “Yes, The Machines Are Getting Better. But So Are You”
Live Science – “Total Recall Offers Killer App for Google Glasses”
US News – “Google Glass Unlikely to Be Game Changer in 2013”
ExtremeTech – “Google Glass ready to roll out to developers, but why not save $1,500 and build your own?”
Geekosystem – “Can’t Wait for Google Glass? Don’t. Build Your Own”
Lifehacker – “Build Your Own Google Glass-Style Wearable Computer”
The Verge – “One man’s journey through augmented reality with a self-built version of Project Glass”
SlashGear – “DIY Google Glass puts iOS in front of your eyes”
9to5Google – “Don’t have $1,500? Just build your own Google Glass”

The Future of Wearable Computers


Here are the highlights of my IEEE Spectrum interview on Google Glass and the future of wearable computers:

That’s right: Google says that Glass will make you feel smarter. “We’re talking about a device that sees everything you see and hears everything you hear,” says Rod Furlan, an artificial intelligence researcher and angel investor. “From the starting line, what you are gaining is total recall.”

Regarding privacy:

Others view the hand-wringing over privacy as passé. “We will soon be living in a hypervisible society, and there is nothing we can do to stop it,” argues Furlan, the artificial intelligence researcher. “It’s not about fighting the future; it’s about learning to live with it.”

He can’t wait to try the real Glass. Furlan believes Google’s expertise in data and in machine learning will lead to all kinds of applications that enhance people’s everyday experience. Yes, he says, you’ll have to give up some privacy, but the trade-off will be worth it. “In the end, I believe technology gives more than it takes,” Furlan says.

On my experience wearing my DIY version of Glass:

Furlan was so eager to see what a future with Glass might look like that last summer he built his own prototype from off-the-shelf parts [see “Build Your Own Google Glass” to learn how he did it]. It streams e-mail, Twitter updates, text messages, and the status of his servers to a monocular microdisplay. At first, he says, the flood of information felt overwhelming, but now when he takes off the gadget, he feels “impoverished.”

You can read the full article here.

Intermission

Asia Tour 2012

I will be away for several months travelling around Asia – follow me on 500px for pictures :)

DIY Oculus Rift – Because Reality is Overrated

UPDATE: You can find the first draft of the “official” building guide here.

Yesterday Hack-a-Day featured one of my summer projects – a do-it-yourself immersive virtual reality head-mounted display based on the upcoming Oculus Rift. This is a collaborative effort with several contributors from the MTBS3D community, including Palmer Luckey from OculusVR. You can follow my guide to build your own – enjoy!

DIY Personal HUD / “Google Glass”

Kopin VGA CyberDisplay 0.44"

Back in 2009 I was working on a DIY personal head-up display (HUD) driven by a Sony Vaio VGN-UX380N ultraportable PC. I ended up shelving the project because I felt the technology wasn’t there yet – the prototype was bulky and uncomfortable to wear, and battery life was terrible.

Luckily we are living in exponential times and mobile technology has advanced so much since 2009 that with just a bit of research I was able to build a much better wearable computing device than the one I was experimenting with 3 years ago. Below are pictures of prototype #2, which is basically a wearable microdisplay driven by a smartphone.

For prototype #3 I will be keeping the wire (for now), and I will add a camera, a microphone, speakers, and a 9-DOF inertial tracker to match Google Project Glass’s known capabilities. It should serve as a good platform to explore the applications of a head-mounted wearable computer.

Robot Whispering

Last week I spent some quality time with the PR2 robot at Willow Garage. Since I don’t particularly like doing laundry, I quite enjoyed programming it to fold towels, albeit poorly. World domination is certain to follow.

On the Future of Education

Tomorrow I will be joining Peter Thiel and the 40 finalists for the 2012 class of the 20-under-20 program for brunch. Of those 40 hopefuls only 20 will become members of the Thiel Fellowship, a daring attempt to hack education and show the world that college is not the only path to success.

Or in their own words:

A radical re-thinking of what it takes to succeed, the Thiel Fellowship encourages lifelong learning and independent thought. With $100,000 and 2 years free to pursue their dreams, Thiel Fellows are changing the world one entrepreneurial venture at a time.

I first heard of the Fellowship when I was invited to participate as a mentor for the fellows. I promptly agreed because I strongly resonate with their goals. As a serial autodidact, I have had my fair share of issues with traditional education as it’s simply not suitable for everyone and certainly wasn’t for me.

As I mentioned in previous posts, I was born and raised in Brazil. Despite superficial similarities, growing up in a developing country is quite different from what readers from more affluent nations would imagine. For example, I held a full-time job during secondary school and my first few years of college. This means that I would work from 9am to 5pm and then go to school from 7pm to 11:30pm. It was tough but it was, and it still is, something very common in Brazil. For most of the population working a full-time job is the only way to stay afloat and to afford any education at all.

While it wasn’t the most pleasant lifestyle, participating in the workforce while going to college helped me develop some key insights about education. During the day I operated in the real world as a software developer for a large utility company, solving real problems and learning real lessons about life, business, and relationships. During the evening I was forced to participate in an odd, almost bizarre parallel reality where professors were trying to, in their own words, prepare me to succeed in the “real world”. However, it was all too obvious that many of them had never participated in this “real world” they spoke of, and consequently much of what they taught was either outdated or flat-out wrong.

In my case, I taught myself to code when I was 8 years old and had almost ten years of experience as a software developer before I even applied to college. I tried to make the best of my college experience, but for the most part, classes felt dull and unstimulating. Luckily, sometime during my sophomore year, I was offered a seat on the management board of a large consulting company, and suddenly I had to decide between my fast-moving career and finishing my bachelor’s degree.

The only compelling reason I could conceive of for getting a bachelor’s degree, other than pleasing my family, was social validation. However, I wanted to be judged on my accomplishments in the real world, not on my tolerance for inane lectures and my ability to force myself through arbitrary, artificial exercises that would supposedly prepare me to function in the workforce. The way I saw it, I was already empirically prepared for the real world, and it just didn’t make sense to go backwards.

This would be the first time I quit college, out of four in total. Now, before you assume that I am radically against college education, this is certainly not the case. To better understand my position, we must first deconstruct the reasons why going to college wasn’t the ideal choice for me.

Computer science is a field that is both accessible (as in you can practice it on a limited budget) and fast-moving. I could not have taught myself medicine when I was 8 years old; I could have read about it, maybe even developed an interest in it, but I would not have been able to practice it – and deliberate practice is a key requirement for expertise. Similarly, while you may also consider medicine to be a fast-moving field, its pace of curriculum change is dwarfed by what we observe in the IT world. The odds are that by the time you graduate from medical school, a considerable portion of what you have learnt is still relevant. In contrast, the core portion of computer science that is in essence future-proof could be taught in only one or two semesters.

College is still, and will remain, the best way to become a medical doctor. The same is true for any other profession that requires access to a fully equipped lab for practice. However, it is certainly not the only (or best) choice if you are passionate about a field as accessible and fast-moving as IT.

The key insight I want to share with you is that even though college is one of many possible paths to success, our society is stuck on the idea that when it comes to education, “one size fits all”. If you stop to think about it, it makes no sense at all. A good analogy would be a world where painters were only allowed to paint in shades of green: to be successful, one must limit one’s palette to greenish tones and renounce the rest of the color spectrum – what a poor world that would be.

Yet this is exactly what we are doing. Our society deliberately tries to reduce the spectrum of choice in education by shunning anyone willing to stray from the beaten path – and in doing that, we all lose. Diversity of thought is one of our greatest strengths as a species, and by limiting which paths to success are acceptable, we are condemning humanity to homogeneity – and eventually to a local maximum (in layman’s terms: a place too good to be abandoned but still rather terrible compared to the best place out there).

Fundamentally, is the level of conformity imposed by college the best way to educate future world-changers?

This all boils down to the reason why I am an outspoken supporter of the Thiel Fellowship. It is not about replacing college for everyone; it is an attempt to widen the palette of choices in education. It is a bold proposal to show the world that a degree is not the best tool for every job, but merely one of many in the tool set. While Thiel may come across as a radical when he proclaims that higher education in the US is essentially bankrupt, he is also doing more for the future of education than most of us. His fellowship is one of the select few initiatives that have started a crucial dialogue on how humanity should prepare its youth for a future that is rushing at them at an incredible pace.

It is certainly not a program for everyone or for the faint of heart, but the 20-under-20 fellows have self-selected precisely because they don’t want a degree; they want to change the world.

On Poverty, Inequality and Human Nature

Yesterday I had the pleasure of spending the afternoon with Dr. Judith Rodin of the Rockefeller Foundation at an intimate gathering organized by the IFTF for the launch of their global forecasting and brainstorming platform, Catalysts for Change. Through the platform we joined forces with participants all over the globe to help millions find their paths out of poverty. It was awe-inspiring to witness so many brilliant minds committed to helping our brothers and sisters born into poverty, and it gives me great hope for a better future for humanity.

However, it also made me sad as I came to the somber realization that we are not nearly as kind and compassionate as we imagine ourselves to be. While many proclaim that poverty is a failure of capitalism, this is simply another attempt to shift the blame away from ourselves so that we may preserve humanity’s self-image as quintessentially good.

Simply put, the underlying cause of poverty is our collective ability to ignore the suffering of another human being.

Poverty is a failure of human nature; that is it. Everything else is not a cause but merely another symptom of our inherent lack of “true” compassion. You (and I) know very well that there is incredible suffering around the globe, yet we go on with our lives. We spend our money on frivolous things while millions starve, and we conjure up problems out of thin air (not rich enough, not pretty enough) instead of focusing on the greater challenges faced by humanity.

A wise person knows to judge character based on actions instead of words. If you were to judge yourself on what you have actually done to help those in need, how would you stack up? Maybe you donated some money, or maybe you even donated some of your time but with all due respect, it all amounted to a drop not in a bucket but in an ocean. If we were biologically wired to care, to truly feel compassion, we would be on the streets overthrowing any form of government or financial system that would allow poverty to endure.

Actually, that is not completely true. If we were indeed biologically wired to feel the pain of those in need as if it were our own, it would be inconceivable to us to bring into existence any form of social structure that would provide the conditions necessary for poverty to exist in the first place.

We cannot blame the government, and we cannot blame capitalism, because those were not imposed on us; they are our inventions and they reflect our natural values – both evolved from us, not the other way around.

I was born and raised in Brazil, and at an early age I learnt to avoid making eye contact with the poor and keep on walking. I learnt to ignore the poverty around me so I could go on with my life. It wasn’t until my late teens that I came to understand that the people I was ignoring were just like me: they had hopes, they had dreams. The difference is that I had a shot at making my dreams come true while they didn’t.

We must accept our shortcomings if we want to overcome them; we must embrace the idea that “current” human nature is flawed and that the “current version” of our species may not deserve the great power and responsibility bestowed upon us by our ever-advancing technology.

The great blind watchmaker of nature has groomed us for survival, not for kindness and certainly not for planetary stewardship. The truth is that our lack of “true” compassion is the underlying biological cause of every single war ever fought.

In order to become worthy of our place in the universe as a sentient species, we must accept that simply “human” may not be good enough. We must become smarter, wiser and most importantly, we must develop our compassion to levels beyond what our current biology will allow.

Luckily, our civilization already carries the seeds of moral greatness as we are able to dream about freedom, justice and unbound love. At this time we merely lack the biological machinery to fulfill those dreams.

It seems to me that the future of compassion is post-human.

Will Artificial Intelligence be America’s Next Big Thing?

Self Driving Car

Excerpts from my interview with the Institute for Ethics and Emerging Technologies:

“In economic terms, automation in general should be seen as a leveraging factor that amplifies the output of workers,” says Rod Furlan, an AI researcher and machine-learning expert based in Vancouver.

“Thanks to the availability of legal software, one lawyer today can do work that required a team of assistants 10 years ago. Ten years from now, an individual lawyer may be able to service as many cases as a small firm does today, all thanks to AI advancements. Going forward, we can expect to do less boring work and have more time for truly intellectual tasks, which are less likely to be automated in the near term.”

Furlan says that as more businesses embrace aggressive automation opportunities through AI and advanced robotics, we’re likely to see more companies that, like Google, have an astronomical revenue-per-employee ratio. He adds that he’s still “bullish” on AI and is confident that businesses and individuals will be able to adapt to the new era of increased worker capability.

Read the full article.

Universal Survivalist AI

During my years as a quant trader, I found myself trying to automate my own job.

As it turns out, the market is a formidable adaptive adversary, and profitable trading models tend to expire because the relationships between assets are in constant flux. Whenever you derive a predictive model for a given asset, it is basically impossible to tell whether the model will remain predictive for one day, one month, or one year. If you stop to think about it, it makes sense, because we are always modeling the past of a dynamic, non-stationary system.
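
As a toy illustration of that decay (all data here is synthetic and the numbers are invented), fit a linear model on an early window of a series whose underlying relationship drifts, then measure its directional hit rate on later windows:

```python
# Toy illustration (synthetic data): a model fit on the past loses its
# edge as the underlying relationship drifts.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)                        # predictor series
beta = np.linspace(1.0, -1.0, n)              # slowly drifting relationship
y = beta * x + rng.normal(scale=0.5, size=n)  # non-stationary target

slope = np.polyfit(x[:500], y[:500], 1)[0]    # "model" fit on early data

# Directional hit rate on successive out-of-sample windows.
for start in range(500, n, 500):
    w = slice(start, start + 500)
    hits = np.mean(np.sign(slope * x[w]) == np.sign(y[w]))
    print(f"window {start}-{start + 499}: hit rate {hits:.2f}")
```

The hit rate starts well above chance and slides toward (and below) 50% as the drift erodes the fitted relationship, which is exactly the expiry pattern described above.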

My goal was to create “wide AI” (not narrow, but not strong either) within the limited domain of trading. The idea was to build a system that was not only capable of trading any asset but was also able to decide by itself which assets to trade and which data streams to use, without any human intervention.

My job as a quant was to use several statistical and machine learning tools to derive trading models. The question I wanted an answer for was: how could I build a computer program capable of replacing myself?

To accomplish this goal I built a multi-layer system where the top layers would analyze the data and generate a population of independent programs that would in turn attempt to maximize the value of a utility function. The programs that composed the populations in the bottom layers were themselves recombinant and made of several different machine learning “blocks” and data transformation pipelines.
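
The post doesn’t include code, but a minimal sketch of that layered search might look something like this. The block names, the crossover scheme, and the stand-in utility function (in place of a real backtest) are all invented for illustration:

```python
# Minimal sketch of the layered idea: a top layer breeds a population of
# recombinant "pipelines" assembled from interchangeable blocks and keeps
# the ones that score best on a utility function.
import random

BLOCKS = ["zscore", "ema", "lag1", "ridge", "tree", "knn"]  # hypothetical

def random_pipeline():
    return [random.choice(BLOCKS) for _ in range(3)]

def utility(pipeline):
    # Stand-in for the backtested, risk-adjusted return of a pipeline.
    return random.Random(hash(tuple(pipeline))).uniform(-1.0, 1.0)

def recombine(a, b):
    cut = random.randrange(1, 3)              # one-point crossover
    child = a[:cut] + b[cut:]
    if random.random() < 0.2:                 # occasional mutation
        child[random.randrange(3)] = random.choice(BLOCKS)
    return child

population = [random_pipeline() for _ in range(50)]
for _ in range(20):
    population.sort(key=utility, reverse=True)
    parents = population[:10]                 # survivors breed the next layer
    population = parents + [recombine(*random.sample(parents, 2))
                            for _ in range(40)]

population.sort(key=utility, reverse=True)
print("best pipeline:", population[0], "utility:", round(utility(population[0]), 3))
```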

I was eventually successful and the most important lesson I learned is that when you create a system capable of dynamically integrating different adaptive tools at runtime, you may end up with something far greater than the sum of its parts.

While remarkable, the system I built was still confined to a single domain. After reflecting over the outcome of my research, I realized that within the limited scope of the financial markets, I had built something I decided to call a “domain survivalist”.

Considering the tradable market to be its environment and the joint set of available inputs and outputs to be its embodiment, I had created an agent capable of bootstrapping itself to its “body” and “environment” in order to survive and maximize the value of an arbitrary utility function.

The same system could just as easily trade oil, corn, euros, or Google stock. However, even though it was a bit more flexible than what I could have achieved with more traditional methods, its operational range was still vexingly narrow – trading was all it could ever do.

Towards the creation of a Universal Survivalist

Now let’s take the concept of a survivalist AI to the next level. If we set our hearts and minds to it, could we build such a thing as a “universal survivalist”? Could we build an AI agent capable of bootstrapping itself to any arbitrary embodiment and finding ways to exploit its environment in order to maximize the value of a computable utility function?

If the body of the agent is defined as the set of inputs and outputs available as the means to sample and act upon its environment, respectively, then the concept of “embodiment” becomes rather flexible and extends nicely to AI agents without any sort of physical components.
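
Under that definition, a hedged sketch of the abstraction (all names here are invented) reduces an embodiment to a pair of dictionaries plus a fixed utility function; everything else must be learned:

```python
# Sketch of "embodiment = available inputs + outputs"; names invented.
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

@dataclass
class Embodiment:
    sensors: Dict[str, Callable[[], float]]        # inputs: sample the env
    actuators: Dict[str, Callable[[float], None]]  # outputs: act on the env

def survivalist_step(body: Embodiment,
                     utility: Callable[[], float],
                     policy: Dict[str, float]) -> Tuple[Dict[str, float], float]:
    """One act-then-observe cycle. A real agent would use the returned
    observation and utility value to adapt its policy over time."""
    for name, act in body.actuators.items():
        act(policy.get(name, 0.0))                 # actuate per current policy
    observation = {name: sense() for name, sense in body.sensors.items()}
    return observation, utility()                  # feedback for learning
```

The same loop applies whether the “body” is a trading account, a car, or a firewall; only the sensors, actuators, and utility change.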

While the creation of such a universal survivalist system would not give us HAL 9000 (because language is too complicated), it could certainly give us the algorithmic underpinnings for AI that is versatile enough to bootstrap itself and intelligently control any arbitrarily complex system, be it a plane, a car, or a firewall.

The same underlying architecture could then be used to infuse different machines with varying degrees of useable intelligence. A survivalist “born and raised” inside a car could be trained to obey traffic laws the same way police dogs are trained to serve and protect. Similarly, another identical survivalist instance that was instead attached to a security system could be trained to protect a particular location, such as a bank or a hospital. Once trained, any survivalist could then be cloned into as many similar “bodies” as needed.

Indeed something to think about…

My National Geographic Interview On Human Augmentation

National Geographic, Human 2.0

“Right now it’s easy to distinguish between a human being and a machine. However, this line will become increasingly blurry in the future. [Twenty years from now] you will start by getting visual and auditory implants; then you are going to have your midlife crisis, and instead of going out and buying a sports car, you will buy a sports heart to boost your athletic performance.

The transition will happen little by little as you opt in for more enhancements. Then one day you will wake up and realize that you’re more artificial than natural.

Eventually we will not be able to draw a crisp line between human beings and machines. We will reshape ourselves and by changing our bodies we will change the way we relate to the world.

This is just evolution – artificial evolution.”

On that note, here is a terrific TED talk by Aimee Mullins – “How my legs give me superpowers”:

The day we finally grow up

The world is changing fast. Wave after wave of accelerating technological change is leaving society and governments struggling to adapt. Our past could never prepare us for the journey we are about to embark on and the truth is that from here on in we shoot without a script.

While we all long for a better tomorrow, very few of us have the courage to try to imagine what the future might actually look like. Bound by convention and by fear of ridicule, most of us dare not dream or speak about the deep future; instead we choose to focus on the short-term future, which is safe and generally agreeable.

Futurists everywhere, I applaud your courage. Even when you are wrong, you contribute more to the future of our species than your critics ever will.

Even though collectively we choose poverty of imagination as the default mode of thinking about the future, here we stand on the verge of profound societal changes that cannot be stopped and cannot be reasoned with. We are witnessing the dawn of an age of technological wonders, of technology so advanced that it is indistinguishable from magic.

Take a minute to admire the computer monitor on which you are reading these words. Maybe you are using a modern LCD flat panel, or maybe you are using an old CRT tube. Either way, old or new, appreciate its beautiful complexity: millions of connected parts that are able to convert a symphony of electrons, bits, and bytes into the perfectly woven tapestry of light required to carry my words to you.

Now consider for a moment the most complex devices we possessed a mere 200 years ago. How does your computer monitor measure up to them? Do you even know how your monitor really works? What about your computer? Your cell phone? Would you be able to design any of these devices from scratch? Do you know anyone who could?

We have come a long way in a very short period. Now try to imagine what miracles of science we will witness over the course of the next 200 years. No matter what you think you know about the future, I assure you that if we don’t destroy ourselves, the best is yet to come.

Like Martin Luther King, I too have a dream.

I dream of a world where people are once again thrilled about the future.

I dream that one day curing death, understanding the human brain and traveling to the stars will be seen as urgent challenges that must be conquered at all costs.

I dream that one day scientists will be considered celebrities and that each of us will be measured not by how much capital we have accumulated but by how much we have contributed to the future of our species.

I dream that one day all nations will unite in the war against ignorance and superstition, the true enemies of all sentient beings.

I dream of the day humanity finally grows up.

Conversations on strong AI – Part I

From: Rod (Me)
To: Quantum Lady
Subject: AGI

Yes, I agree that there are many challenges ahead on the path to AGI. Right now, we should focus on acquiring a better understanding of how the brain works from an algorithmic perspective and try to derive a hypothesis of general intelligence from it. After all, the brain is the only implementation of a general-intelligence “platform” currently known to us.

Our brains represent just one design out of a multitude of possible general-intelligence implementations. However, I believe that the search space for viable AGI architectures is too large to be traversed by anything other than a super-civilization. Think about the staggering amount of computation mindlessly performed by evolution over millions of years to come up with the design we carry between our ears.

I think it must be clear to you by now that I sit in the bio-inspired AGI camp, and I definitely share your newfound fascination with the brain. Just recently, I started telling people I am a hobbyist neuroscientist.

Reactions are interesting, sometimes hilarious.

I see whole-brain emulation as the worst-case scenario or “plan B”. If everything else fails, we will achieve AGI once we become able to emulate a whole brain down to an arbitrary level of precision yet to be determined.

That raises the question – what would be the best-case scenario?

Ultimately, I believe there is a simple algorithm for general intelligence yet to be discovered: a small set of rules that give rise to ever growing complexity and intelligence after many generative iterations.

It is unquestionable that this elusive algorithm is engraved not only in the neuronal topology of the brain but also in the rules that govern how that topology changes over time. That is why any simulation of the brain must take plasticity and generative topology into consideration to be useful.

I also believe that only a very small subset of the human brain is actually responsible for general intelligence. In the best-case scenario, we will be able to identify the bare minimum amount of brain tissue necessary for general intelligence and derive powerful algorithmic insights from it. I am not talking about generating connectomes or maps but about understanding how to replicate what the brain does, not the minutia of how it does it.

Because truth be told: I don’t want an artificial brain, I want to automate work. I want to copy-and-paste scientists.

Igniting a Brain-Computer Interface Revolution

At MIT with Peter Diamandis, Bob Metcalfe and Luke Hutchinson

I have just returned from an X PRIZE Foundation workshop on brain-computer interfaces (BCI) at MIT. The workshop brought together over 50 leading experts, students, and enthusiasts with the objective of brainstorming ideas for an X PRIZE competition to accelerate the development of BCI solutions. During the course of this fantastic two-day event we had the opportunity to explore the many possibilities and difficulties of designing and implementing devices capable of communicating directly with the human brain… read the full article