Today, you probably know the Internet as a series of web pages, tweets, and lots of email. But with the rise of virtual reality headsets, companies like Facebook think that will change. You might soon “browse the web” in a place called the metaverse: a second reality rendered in 3-D.
Lucidscape is a company that’s playing with what such an Internet might look like. They’re building an open source, 3-D graphics engine that can be shared across servers. In other words, rather than rendering a website or video game on a single computer at a time (as most graphics engines do today), all of the world’s servers could combine to construct one giant, seamless world, where millions of us may flock, virtually, to see a concert, or walk on the streets of digital Times Square.
Carrying out a series of astounding stress tests, Lucidscape ramped up to simulate 10 million people traversing a shared 3-D space hosted on 800 servers. The result looks like you’re flying at light speed through a grid of planets, or maybe visiting the latest Yayoi Kusama exhibit.
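Lucidscape has not published the details of its engine, but the basic idea of one seamless world spread across many servers can be sketched with a spatial hash: every node applies the same deterministic rule to decide which server simulates which region of space. The cell size, function names, and scheme below are illustrative assumptions; only the 800-server cluster size comes from the stress test.

```python
# Hypothetical sketch: assigning entities to servers by spatial hash.
# Lucidscape's actual sharding scheme is not public; this only
# illustrates the general idea of one world split across many machines.

CELL_SIZE = 100.0   # world units per grid cell (assumed)
NUM_SERVERS = 800   # cluster size from the stress test

def cell_of(x, y, z):
    """Grid cell containing a world-space position."""
    return (int(x // CELL_SIZE), int(y // CELL_SIZE), int(z // CELL_SIZE))

def server_for(x, y, z):
    """Deterministically map a position's cell to one of the servers,
    so every node agrees on who simulates which region."""
    return hash(cell_of(x, y, z)) % NUM_SERVERS
```

With a rule like this, entities in the same region always land on the same server, and a client walking across the world simply hands off from one server to the next as its cell changes.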
At Lucidscape we are building a new kind of massively-distributed 3D simulation engine to power the vast network of interconnected virtual worlds known as the Metaverse.
Handling massive virtual worlds requires a fundamentally different approach to engine design, one that prioritizes the requirements of scale, thoroughly embraces distributed computing and many-core architectures, and that is also unencumbered by the legacy decisions which hold current engines back.
We have recently conducted our first large scale test where we simulated the interactions of 10 million participants on a cluster of over 800 servers.
You can read more about the test here.
Introducing my new venture, Lucidscape:
Our mission is to deliver the foundations of the metaverse, a vast network of connected virtual worlds where data is tangible and autonomous programs roam free.
We have a different vision for the web of the future. Instead of pages connected by links, we envision a web of immersive worlds connected by portals. Connected but independent, each world is a sovereign simulation inhabited by users and programs alike.
Our technology is a re-imagining of the fundamental technologies employed by games and virtual reality applications. Inspired by the same decentralized design philosophy of the web, we built a new kind of collaborative real-time computing platform to power a vast web of interconnected virtual worlds.
Head over to our official website for more info: http://lucidcape.com
Furlan was another maker who didn’t want to wait for the “official” version of Glass. He created a very respectable homebrew HMD, also using MyVu Crystal for the optics. The display is mounted on a pair of safety glasses with a Looxcie Bluetooth camera, and uses an iPod Touch as the brain. Furlan passed on voice control, instead relying on iPod’s touchscreen for navigation, with a custom app to automate and aggregate common notifications (Facebook, Twitter, stocks, email, etc.) so he wouldn’t have to be constantly taking the device out of his pocket.
Furlan says that though he started out skeptical of the technology, after wearing it for a while he began to feel attached — taking it off produced a noticeable sense of loss. Even though his build is crude by Glass standards, it still gave him a taste of what’s to come. You can read in more detail about Furlan’s build on spectrum.ieee.org.
It is not every day that you get the chance to experience something for the very first time: last week I had the opportunity to attend RoboBusiness 2013 (most appropriately) by robot. I “beamed” into a Suitable Technologies Beam remote presence robot and I had a very positive experience interacting with everyone at the conference, human and machine alike.
Suitable Technologies did a great job with the Beam: the user interface is excellent and the robot handles very well. One-on-one conversations felt natural despite the loud environment; multi-party conversations were challenging at times, but still workable.
Interestingly, interacting with remote locations through a mobile proxy body can be a very peculiar experience because you may end up with vivid memories of places you have never been to.
Excerpts from my recent interview with Vice:
Beyond the Rift: How the Oculus Kick-Started a New DIY Virtual Reality Movement
On his website, Rod Furlan describes himself as an artificial intelligence researcher, a quantitative analyst, and an alumnus of the Singularity University. He also makes things: personal augmentation devices, artificial intelligences, and virtual reality kits. The last is arguably what Furlan is best known for: In 2012, Furlan began building his own version of the Oculus Rift. By doing so, he helped catalyze the DIY virtual reality movement.
“DIY VR is here to stay, now that we have all the parts we need available at acceptable cost,” Furlan enthuses. “We can also expect DIY projects to outspec commercial products because independent makers are not bound to market cycles. For example, the team at MTBS3D is already working on a DIY design with a 2,048 by 1,536 resolution, which is a gigantic improvement in per-eye resolution versus the Rift.”
“We are living in very interesting times. The tools of creation are becoming more accessible at an incredible pace,” Furlan says. “We are going to start seeing more and more amateur makers building incredible devices.”
A while ago I was working on a fully immersive stereoscopic “remote head”. On one side the user wears an HMD, and on the other side there is a servo-actuated stereoscopic camera programmed to match the orientation of the user’s head-tracking device. Sadly, even with very fast servos it wasn’t possible to move the camera fast enough to accurately match the perspective of the user. The head-tracking lag would be quite disorienting, unacceptably so even before factoring in the network lag.
On the second prototype, I decided to use a monoscopic 360 degree camera instead. The remote head would transmit the whole image-sphere to the user’s machine and I would clip the viewport on the client-side using the tracker information – effectively eliminating head-tracking lag by doing it locally. The overall experience should be great even though the video feed from the remote head could be several milliseconds behind real-time.
And here is how all of this intersects with VR: a cloud-VR server could render a 360 degree image sphere around the player, transmit the whole frame to the client which would then clip the viewport based on the orientation of the user’s head. It could even be done adaptively to save bandwidth – instead of transmitting the whole image-sphere it could send only a portion of it based on how fast the user is likely to turn his head in the next N milliseconds or at a reduced frame rate, and since the raster viewport is clipped by the client, the user would still be able to look around at 60fps. Input-to-display lag would still exist but developers could overcome at least some of it by designing around this limitation.
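As a rough sketch of the client-side step: assuming the image sphere arrives as an equirectangular frame (rows = latitude, columns = longitude), clipping the viewport amounts to casting a ray per output pixel, rotating it by the head tracker’s yaw and pitch, and sampling the sphere where each ray lands. The function name and parameters below are hypothetical; a real client would do this on the GPU with filtered sampling rather than nearest-neighbor.

```python
import numpy as np

def clip_viewport(sphere, yaw_deg, pitch_deg, fov_deg, out_w, out_h):
    """Sample a rectilinear viewport from an equirectangular image sphere.

    sphere: H x W array (equirectangular projection).
    yaw_deg/pitch_deg: head orientation from the local tracker.
    Returns an out_h x out_w view clipped around the look direction.
    """
    H, W = sphere.shape[:2]
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    f = 0.5 / np.tan(np.radians(fov_deg) / 2.0)  # focal length for this FOV

    # Image-plane grid at focal distance f (x right, y down, z forward).
    xs = (np.arange(out_w) + 0.5) / out_w - 0.5
    ys = (np.arange(out_h) + 0.5) / out_h - 0.5
    vx, vy = np.meshgrid(xs, ys)
    vz = np.full_like(vx, f)

    # Rotate each ray by pitch (about x) then yaw (about y).
    vy2 = vy * np.cos(pitch) - vz * np.sin(pitch)
    vz2 = vy * np.sin(pitch) + vz * np.cos(pitch)
    vx3 = vx * np.cos(yaw) + vz2 * np.sin(yaw)
    vz3 = -vx * np.sin(yaw) + vz2 * np.cos(yaw)

    # Ray direction -> (longitude, latitude) -> source pixel.
    lon = np.arctan2(vx3, vz3)                    # [-pi, pi]
    lat = np.arctan2(vy2, np.hypot(vx3, vz3))     # [-pi/2, pi/2]
    u = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W
    v = np.clip(((lat / np.pi + 0.5) * H).astype(int), 0, H - 1)
    return sphere[v, u]
```

Because only the tracker-to-viewport step runs locally, the perceived head-tracking latency is just the cost of this resampling, regardless of how stale the transmitted frame is.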
The potential end-game could be something like a cloud-powered Oculus Rift style HMD with no console or PC required – in other words: no hassle VR that is just plug & immerse. The required tech is already available, both NVIDIA and AMD have announced support for GPU cloud rendering, OculusVR is finally shipping the $300 Rift development kits worldwide and all the required client-side processing could easily be handled by a $100 Android board.
I will be away for several months travelling around Asia – follow me on 500px for pictures
UPDATE: You can find the first draft of the “official” building guide here.
Yesterday Hack-a-Day featured one of my summer projects – a do-it-yourself immersive virtual reality head-mounted display based on the upcoming Oculus Rift. This is a collaborative effort with several contributors from the MTBS3D community, including Palmer Luckey from OculusVR. You can follow my guide to build your own – enjoy!
Back in 2009 I was working on a DIY personal head-up display (HUD) driven by a Sony Vaio VGN-UX380N ultraportable PC. I ended up shelving the project because I felt the technology wasn’t there yet – the prototype was bulky, uncomfortable to wear and battery life was terrible.
Luckily we are living in exponential times and mobile technology has advanced so much since 2009 that with just a bit of research I was able to build a much better wearable computing device than the one I was experimenting with 3 years ago. Below are pictures of prototype #2, which is basically a wearable microdisplay driven by a smartphone.
For prototype #3 I will be keeping the wire (for now) and I will add a camera, a microphone, speakers and a 9-DOF inertial tracker to match Google Project Glass’s known capabilities. It should serve as a good platform to explore the applications of a head-mounted wearable computer.
Excerpts from my interview with the Institute for Ethics and Emerging Technologies:
“In economic terms, automation in general should be seen as a leveraging factor that amplifies the output of workers,” says Rod Furlan, an AI researcher and machine-learning expert based in Vancouver.
“Thanks to the availability of legal software, one lawyer can do today work that required a team of assistants 10 years ago. Ten years from now, an individual lawyer may be able to service as many cases as a small firm does today, all thanks to AI advancements. Going forward, we can expect to do less boring work and have more time for truly intellectual tasks which are less likely to be automated in the near term.”
Furlan says that as more businesses embrace aggressive automation opportunities through AI and advanced robotics, we’re likely to see more companies that, like Google, have an astronomical revenue-per-employee ratio. He adds that he’s still “bullish” on AI and is confident that businesses and individuals will be able to adapt to the new era of increased worker capability.
Read the full article.
“Right now it’s easy to distinguish between a human being and a machine. However, this line will become increasingly blurry in the future. [20 years from now] you will start by getting visual and auditory implants; then you are going to have your midlife crisis, and instead of going out and buying a sports car, you will instead buy a sports heart to boost your athletic performance.
The transition will happen little by little as you opt-in for more enhancements. Then one day you will wake up and realize that you’re more artificial than natural.
Eventually we will not be able to draw a crisp line between human beings and machines. We will reshape ourselves and by changing our bodies we will change the way we relate to the world.
This is just evolution – artificial evolution.”
Imagine a direct connection between the human brain and the world’s most powerful computers… What if you could type with your thoughts? Or help the blind to see? Or give an amputee control over his bionic arm? How can the Brain-Computer Interface (BCI) positively affect humanity’s grandest challenges?
Singularity University partnered with X Prize Lab @ MIT for the 2-day “Brain-Computer Interfaces: Igniting a Revolution” workshop that kicked off today to discuss these questions and more with some of the leading minds in neurobiology. Special guests included Peter Diamandis, SU co-founder and CEO of the X Prize Foundation; Ed Boyden, Director of the MIT Synthetic Neurobiology Group; and Gerwin Schalk, Director of BCI2000 at the Wadsworth Center.
SU Graduate Studies Program alum Rod Furlan interviewed a few of the BCI experts to get their thoughts on the state of BCI, where it’s headed, and how it can affect “humanity’s grand challenges.” Check back soon for those videos, as well as the lively panel discussion on the future of BCI with Peter Diamandis, Ed Boyden, and SU instructor and Omneuron founder Christopher deCharms.
I was just featured in an article published by Estado de Sao Paulo, one of Brazil’s largest newspapers:
Welcome to the Man-Machine University
(Translated by Amazon Mechanical Turk)
He taught himself to write computer programs when he was 9 years old. At 10, he devoured books in English on the subject, using a dictionary to translate them word by word. At 15, he founded his first company, an online bulletin board system, a precursor service to the Internet. At 22, then a director of a large technology company, he left everything behind to live abroad and “conquer the world.”
The résumé of Rod Furlan, 30, impressed the directors of one of the boldest educational institutions in the world, Singularity University (SU) in California.
Nothing is conventional at the institution, starting with its name, inspired by the book The Singularity Is Near by futurist and SU founder Dr. Ray Kurzweil. It is also known as the “Google University” because the Internet giant is one of its founders and supporters, and it is located within the NASA Ames Research Center in Silicon Valley.
“We seek enterprising people, willing to face great challenges,” says the executive director of SU, Salim Ismail, who was in Sao Paulo this month to establish a partnership with the Faculty of Information Technology (Fiap). After the program, students must submit a proposal to positively impact the lives of at least 1 billion people within the following decade.
Getting into this dream-team university is not easy. The applicant must be an expert in areas such as networks and computer systems, biotechnology and nanotechnology, medicine and neuroscience, robotics and artificial intelligence, public policy, law or finance. Last year 1,200 candidates competed for 40 seats; this year, 1,600 competed for 80.
“It was the best time of my life,” said Furlan. According to the Brazilian student, he alternated days of talks with senior officials from companies like Google itself with yoga classes and site visits. And at night, the participants met at the NASA lodge to discuss for hours all that they had learnt about the future. “SU is also known as Sleepless University, because students do not sleep,” jokes Ismail.
Popular Science has just published a cool article about our summer at Singularity University. Late but great!
“According to Ray Kurzweil, the Singularity is a point at which man will become one with machine and then live eternally—which makes Singularity University, a nine-week academic retreat named for the concept, sound a little cultish. Our writer traveled west to investigate and found 40 stunningly sane brainiacs out to change the world.” – Popular Science [read full article]
“Henry Markram says the mysteries of the mind can be solved — soon. Mental illness, memory, perception: they’re made of neurons and electric signals, and he plans to find them with a supercomputer that models all the brain’s 100,000,000,000,000 synapses.”
“Researcher Kwabena Boahen is looking for ways to mimic the brain’s supercomputing powers in silicon — because the messy, redundant processes inside our heads actually make for a small, light, superfast computer.”
I find it hard to imagine anything more disruptive to capitalism as we know it than nanotechnology. This summer at Singularity University we had the pleasure of meeting Ralph Merkle who taught us how to give a “nano-talk” in order to explain the benefits of nanotech to anyone:
The field of [field] is critically dependent on [product].
[Product] are made from atoms. Nanotechnology will let us make [product] that are lighter, stronger, smarter, cheaper, cleaner and just better.
This will have a huge impact on [field], for example, we could even have [product] that are [astonishing parameter] and cost only [remarkably cheap]!
Here is an example to drive the point home:
The field of [bicycling] is critically dependent on [bicycles].
[Bicycles] are made from atoms. Nanotechnology will let us make [bicycles] that are lighter, stronger, smarter, cheaper, cleaner and just better.
This will have a huge impact on [bicycling], for example, we could even have [bicycles] that are [just half a pound] and cost only [a dollar]!
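For fun, Merkle’s fill-in-the-blanks structure can be captured in a few lines of code; the function and template names below are made up for illustration:

```python
# A playful sketch of the "nano-talk" template as a string formatter.
NANO_TALK = (
    "The field of {field} is critically dependent on {product}. "
    "{Product} are made from atoms. Nanotechnology will let us make "
    "{product} that are lighter, stronger, smarter, cheaper, cleaner "
    "and just better. This will have a huge impact on {field}; for "
    "example, we could even have {product} that are {parameter} and "
    "cost only {price}!"
)

def nano_talk(field, product, parameter, price):
    """Generate a nano-talk for any field from its four blanks."""
    return NANO_TALK.format(field=field, product=product,
                            Product=product.capitalize(),
                            parameter=parameter, price=price)

print(nano_talk("bicycling", "bicycles", "just half a pound", "a dollar"))
```

Swap in any field and product you like; the pitch writes itself, which is rather the point of the template.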
Singularity University (SU) is a joint effort of NASA, Google, and some of the foremost authorities in science and technology. Its objective is to expose a group of promising graduate students and professionals to a broad range of cutting-edge research that is likely to lead to disruptive technological innovation in the near future.
In their own words:
Singularity University aims to assemble, educate and inspire a cadre of leaders who strive to understand and facilitate the development of exponentially advancing technologies and apply, focus and guide these tools to address humanity’s grand challenges.
I am 29 and I am a compulsive builder. It isn’t that I want to build things; I simply have to, because otherwise they consume me. Since I can envision far more than I can actually work on, I find myself constantly giving up ideas for the sake of completing projects already underway.
Once I am done, I don’t feel like I have any time to spare to enjoy success. Off I am to the next project.
To be honest, I feel like the journey is more important than the destination. Profits are nothing but a side-effect of a job well done.
“TwitZap is a new way to use Twitter. It lets you slice Twitter into realtime streams of stuff that matters to you. On top of that, TwitZap users can tweet each other in real-time using our Twitter accelerator technology even while Twitter is down.”
It is basically a web-based, real-time Twitter client + search monitor. Think TweetDeck without the install – making it possible to use it anywhere, anytime.
All tweets are still relayed to Twitter as soon as possible. You can even see other users who are on the same channels as you – which is pretty cool and incentivizes instant interactions.
However, this is probably going to be my last web project for a while. While it is incredibly gratifying to build something the whole world can use, web development is just not challenging enough to keep me excited about it.
It just feels like too much busy-work for my taste, with very few really interesting problems to solve.