Build Your Own Google Glass

IEEE Spectrum Portrait

Excerpt from my recent IEEE Spectrum article:

A wearable computer that displays information and records video

By ROD FURLAN  /  JANUARY 2013

Last April, Google announced Project Glass. Its goal is to build a wearable computer that records your perspective of the world and unobtrusively delivers information to you through a head-up display. With Glass, not only might I share fleeting moments with the people I love, I’d eventually be able to search my external visual memory to find my misplaced car keys. Sadly, there is no release date yet. A developer edition is planned for early this year at the disagreeable price of US $1500, for what is probably going to be an unfinished product. The final version isn’t due until 2014 at the earliest.

But if Google is able to build such a device, the components must now be available, and anyone should be able to follow suit. So I decided to do just that, even though I knew the final product wouldn’t be as sleek as Google’s and the software wouldn’t be as polished.

Most of the components required for a Glass-type system are very similar to what you can already find in a smartphone—processor, accelerometers, camera, network interfaces. The real challenge is to pack all those elements into a wearable system that can present images close to the eye.

You can read the full article here

Coverage around the web:
Forbes – “Google Glass Project In Flux”
MIT Technology Review - “The Latest on Google Glass”
Huffington Post - “Yes, The Machines Are Getting Better. But So Are You”
Live Science - “Total Recall Offers Killer App for Google Glasses”
US News - “Google Glass Unlikely to Be Game Changer in 2013”
ExtremeTech - “Google Glass ready to roll out to developers, but why not save $1,500 and build your own?”
Geekosystem - “Can’t Wait for Google Glass? Don’t. Build Your Own”
Lifehacker - “Build Your Own Google Glass-Style Wearable Computer”
The Verge - “One man’s journey through augmented reality with a self-built version of Project Glass”
SlashGear - “DIY Google Glass puts iOS in front of your eyes”
9to5Google – “Don’t have $1,500? Just build your own Google Glass”

On the Future of Education

Tomorrow I will be joining Peter Thiel and the 40 finalists for the 2012 class of the 20-under-20 program for brunch. Of those 40 hopefuls, only 20 will become members of the Thiel Fellowship, a daring attempt to hack education and show the world that college is not the only path to success.

Or in their own words:

A radical re-thinking of what it takes to succeed, the Thiel Fellowship encourages lifelong learning and independent thought. With $100,000 and 2 years free to pursue their dreams, Thiel Fellows are changing the world one entrepreneurial venture at a time.

I first heard of the Fellowship when I was invited to participate as a mentor for the fellows. I promptly agreed because I strongly resonate with their goals. As a serial autodidact, I have had my fair share of issues with traditional education: it is simply not suitable for everyone, and it certainly wasn’t for me.

As I mentioned in previous posts, I was born and raised in Brazil. Despite superficial similarities, growing up in a developing country is quite different from what readers from more affluent nations might imagine. For example, I held a full-time job during secondary school and my first few years of college. That meant working from 9am to 5pm and then attending school from 7pm to 11:30pm. It was tough, but it was, and still is, very common in Brazil. For most of the population, working a full-time job is the only way to stay afloat and to afford any education at all.

While it wasn’t the most pleasant lifestyle, participating in the workforce while going to college helped me develop some key insights about education. During the day I operated in the real world as a software developer for a large utility company, solving real problems and learning real lessons about life, business and relationships. During the evening I was forced to participate in an odd, almost bizarre parallel reality where professors were trying to, in their own words, prepare me to succeed in the “real world”. However, it was all too obvious that many of them had never participated in this “real world” they spoke of, and consequently much of what they were teaching was either outdated or flat-out wrong.

In my case, I taught myself to code when I was 8 years old and had almost ten years of experience as a software developer before I even applied to college. I tried to make the best of my college experience, but for the most part, classes felt dull and unstimulating. Luckily, sometime during my sophomore year, I was offered a seat on the management board of a large consulting company, and suddenly I had to decide between my fast-moving career and finishing my bachelor’s degree.

The only compelling reason I could conceive for getting a bachelor’s degree, other than pleasing my family, was social validation. However, I wanted to be judged on my accomplishments in the real world, not on my tolerance for inane lectures or my ability to force myself through arbitrary, artificial exercises that would supposedly prepare me to function in the workforce. The way I saw it, I was already empirically prepared for the real world, and it just didn’t make sense to go backwards.

That was the first of the four times I would quit college. Now, before you assume that I am radically against college education: that is certainly not the case. To better understand my position, we must first deconstruct the reasons why going to college wasn’t the ideal choice for me.

Computer science is a field that is both accessible (as in, you can practice on a limited budget) and fast-moving. I could not have taught myself medicine when I was 8 years old. I could have read about it, maybe even developed an interest in it, but I wouldn’t have been able to practice it – and deliberate practice is a key requirement for expertise. Similarly, while you may consider medicine to be a fast-moving field too, its pace of curriculum change is dwarfed by what we observe in the IT world. The odds are that by the time you graduate from medical school, a considerable portion of what you have learnt is still relevant. In contrast, the core portion of computer science that is in essence future-proof could be taught in only one or two semesters.

College is still, and will remain, the best way to become a medical doctor. The same is true for any other profession that requires access to a fully equipped lab for practice. However, it is certainly not the only (or best) choice if you are passionate about a field that is as accessible and fast-moving as IT.

The key insight I want to share with you is that even though college is one of many possible paths to success, our society is stuck on the idea that when it comes to education, “one size fits all”. If you stop to think about it, that makes no sense at all. A good analogy would be a world where painters were only allowed to paint in shades of green – where, to be successful, one had to limit one’s palette to greenish tones and renounce the rest of the color spectrum. What a poor world that would be.

Yet this is exactly what we are doing. Our society deliberately tries to reduce the spectrum of choice in education by shunning anyone willing to stray from the beaten path – and in doing so, we all lose. Diversity of thought is one of our greatest strengths as a species, and by limiting which paths to success are acceptable, we condemn humanity to homogeneity – and eventually to a local maximum (in layman’s terms: a place too good to be abandoned but still rather terrible compared with the best place out there).

Fundamentally, is the level of conformity imposed by college the best way to educate future world-changers?

This all boils down to the reason why I am an outspoken supporter of the Thiel Fellowship. It is not about replacing college for everyone - it is an attempt to widen the palette of choices for education. It is a bold proposal to show the world that a degree is not the best tool for all jobs – but merely one of many in the tool set. While Thiel may come across as a radical when he proclaims that higher education in the US is essentially bankrupt, he is also doing more for the future of education than most of us. His fellowship is one of the select few initiatives that have started a crucial dialog on how humanity should prepare its youth for a future that is rushing at them at an incredible pace.

It is certainly not a program for everyone or for the faint of heart, but the 20-under-20 fellows have self-selected because they don’t want a degree; they want to change the world.

On Poverty, Inequality and Human Nature

Yesterday I had the pleasure of spending the afternoon with Dr. Judith Rodin of the Rockefeller Foundation at an intimate gathering organized by the IFTF for the launch of their global forecasting and brainstorming platform, Catalysts for Change. Through the platform, we joined forces with participants all over the globe to help millions find their paths out of poverty. It was awe-inspiring to witness so many brilliant minds committed to helping our brothers and sisters born into poverty, and it gives me great hope for a better future for humanity.

However, it also made me sad as I came to the somber realization that we are not nearly as kind and compassionate as we imagine ourselves to be. While many proclaim that poverty is a failure of capitalism, this is simply another attempt to shift the blame away from ourselves so that we may preserve humanity’s self-image as quintessentially good.

Simply put, the underlying cause of poverty is our collective ability to ignore the suffering of another human being.

Poverty is a failure of human nature – that is it. Everything else is not a cause but merely another symptom of our inherent lack of “true” compassion. You (and I) know very well that there is incredible suffering around the globe, yet we go on with our lives. We spend our money on frivolous things while millions starve, and we conjure up problems out of thin air (not rich enough, not pretty enough) instead of focusing on the greater challenges faced by humanity.

A wise person knows to judge character based on actions instead of words. If you were to judge yourself on what you have actually done to help those in need, how would you stack up? Maybe you donated some money, or maybe you even donated some of your time but with all due respect, it all amounted to a drop not in a bucket but in an ocean. If we were biologically wired to care, to truly feel compassion, we would be on the streets overthrowing any form of government or financial system that would allow poverty to endure.

Actually, that is not completely true. If we were indeed biologically wired to feel the pain of those in need as if it were our own, it would be inconceivable to us to bring into existence any form of social structure that would provide the conditions necessary for poverty to exist in the first place.

We cannot blame the government, and we cannot blame capitalism, because neither was imposed on us. They are our inventions and they reflect our natural values – both evolved from us, not the other way around.

I was born and raised in Brazil and at an early age I learnt to avoid making eye contact with the poor and keep on walking. I learnt to ignore the poverty around me to go on with my life. It wasn’t until my late teens that I came to understand that the people I was ignoring were just like me, they had hopes, they had dreams. The difference is that I had a shot at making my dreams come true while they didn’t.

We must accept our shortcomings if we want to overcome them. We must embrace the idea that “current” human nature is flawed, and that the “current version” of our species may not deserve the great power and responsibility bestowed upon us by our ever-advancing technology.

The great blind watchmaker of nature has groomed us for survival, not for kindness and certainly not for planetary stewardship. The truth is that our lack of “true” compassion is the underlying biological cause of every single war ever fought.

In order to become worthy of our place in the universe as a sentient species, we must accept that simply “human” may not be good enough. We must become smarter, wiser and most importantly, we must develop our compassion to levels beyond what our current biology will allow.

Luckily, our civilization already carries the seeds of moral greatness as we are able to dream about freedom, justice and unbound love. At this time we merely lack the biological machinery to fulfill those dreams.

It seems to me that the future of compassion is post-human.

Universal Survivalist AI

During my years as a quant trader, I found myself trying to automate my own job.

As it turns out, the market is a formidable adaptive adversary, and profitable trading models tend to expire because relationships between assets are in constant flux. Whenever you derive a predictive model for a given asset, it is basically impossible to tell whether the model will remain predictive for one day, one month or one year. If you stop to think about it, this makes sense, because we are always modeling the past of a dynamic, non-stationary system.

My goal was to create “wide AI” (not narrow, but not strong either) within the limited domain of trading. The idea was to build a system that was not only capable of trading any asset but was also able to decide by itself which assets to trade and which data streams to use, without any human intervention.

My job as a quant was to use several statistical and machine learning tools to derive trading models. The question I wanted an answer for was: how could I build a computer program capable of replacing myself?

To accomplish this goal I built a multi-layer system where the top layers would analyze the data and generate a population of independent programs that would in turn attempt to maximize the value of a utility function. The programs that composed the populations in the bottom layers were themselves recombinant and made of several different machine learning “blocks” and data transformation pipelines.
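A miniature sketch of that layered idea follows. Everything in it is invented for illustration (the real system’s blocks, data pipelines and utility function were far richer): a population of small “block” pipelines is scored by a toy utility function standing in for trading P&L, then the best ones are recombined and mutated.

```python
import random

# Hypothetical "blocks": trivial signal transforms an evolved program can chain.
BLOCKS = {
    "identity": lambda xs: xs,
    "diff":     lambda xs: [b - a for a, b in zip(xs, xs[1:])] or [0.0],
    "mean3":    lambda xs: [sum(xs[max(0, i - 2):i + 1]) / len(xs[max(0, i - 2):i + 1])
                            for i in range(len(xs))],
}

def utility(pipeline, series):
    """Toy utility: +1 whenever the pipeline's last output predicts the sign
    of the next price move, -1 otherwise (a stand-in for realized P&L)."""
    score = 0
    for t in range(3, len(series) - 1):
        signal = series[:t]
        for name in pipeline:
            signal = BLOCKS[name](signal)
        predicted_up = signal[-1] >= 0
        actually_up = series[t + 1] >= series[t]
        score += 1 if predicted_up == actually_up else -1
    return score

def evolve(series, pop_size=20, generations=15, seed=0):
    """Top layer: maintain a population of pipelines, keep the fittest half,
    and refill with recombined/mutated children each generation."""
    rng = random.Random(seed)
    names = list(BLOCKS)
    pop = [[rng.choice(names) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: utility(p, series), reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, 3)
            child = a[:cut] + b[cut:]           # recombination
            if rng.random() < 0.3:              # mutation
                child[rng.randrange(len(child))] = rng.choice(names)
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda p: utility(p, series))
```

The point of the sketch is the shape, not the blocks: once candidate programs are recombinant compositions of interchangeable parts, the top layer can search over structure rather than just parameters.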

I was eventually successful and the most important lesson I learned is that when you create a system capable of dynamically integrating different adaptive tools at runtime, you may end up with something far greater than the sum of its parts.

While remarkable, the system I built was still confined to a single domain. After reflecting over the outcome of my research, I realized that within the limited scope of the financial markets, I had built something I decided to call a “domain survivalist”.

Considering the tradable market to be its environment and the joint set of available inputs and outputs to be its embodiment, I had created an agent capable of bootstrapping itself to its “body” and “environment” in order to survive and maximize the value of an arbitrary utility function.

The same system could just as easily trade oil, corn, euros or Google stock. However, even though it was a bit more flexible than what I could have achieved with more traditional methods, its operational range was still vexingly narrow – trading was all it could ever do.

Towards the creation of a Universal Survivalist

Now let’s take the concept of a survivalist AI to the next level. If we set our hearts and minds to it, could we build such a thing as a “universal survivalist”? Could we build an AI agent capable of bootstrapping itself to any arbitrary embodiment and finding ways to exploit its environment in order to maximize the value of a computable utility function?

If the body of the agent is defined as the set of inputs and outputs that are available as means to respectively sample and actuate over its environment, the concept of “embodiment” becomes rather flexible and can extend itself nicely to include AI agents without any sort of physical components.
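This definition of embodiment can be sketched in a few lines. The names and the toy thermostat environment below are my own invention, not any real system: an embodiment is nothing more than named sensor and actuator channels, and the agent never sees the environment behind them.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Embodiment:
    sensors: Dict[str, Callable[[], float]]          # inputs: sample the world
    actuators: Dict[str, Callable[[float], None]]    # outputs: act on the world

def survivalist_step(body: Embodiment, policy, utility) -> float:
    """One perception-action cycle: sample every input channel, drive every
    output channel, and score the observed state with the utility function."""
    observation = {name: read() for name, read in body.sensors.items()}
    for name, act in body.actuators.items():
        act(policy(name, observation))
    return utility(observation)

# Toy environment: a room that cools by 0.5 degrees per step; the agent's
# only actuator is a heater dial, and utility rewards staying near 21 C.
state = {"temp": 15.0}
room = Embodiment(
    sensors={"temp": lambda: state["temp"]},
    actuators={"heater": lambda power: state.update(temp=state["temp"] - 0.5 + power)},
)
policy = lambda name, obs: max(0.0, min(2.0, 0.5 + 0.3 * (21.0 - obs["temp"])))
utility = lambda obs: -abs(obs["temp"] - 21.0)

for _ in range(50):
    survivalist_step(room, policy, utility)
```

Swap the two dictionaries and the same loop is “embodied” in a completely different system – which is exactly why the definition extends so naturally to agents with no physical components at all.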

While the creation of such a universal survivalist system would not give us HAL 9000 (because language is too complicated), it could certainly give us the algorithmic underpinnings for AI that is versatile enough to bootstrap itself and intelligently control any arbitrarily complex system, be it a plane, a car or a firewall.

The same underlying architecture could then be used to infuse different machines with varying degrees of usable intelligence. A survivalist “born and raised” inside a car could be trained to obey traffic laws the same way police dogs are trained to serve and protect. Similarly, another identical survivalist instance attached to a security system could be trained to protect a particular location, such as a bank or a hospital. Once trained, any survivalist could then be cloned into as many similar “bodies” as needed.

Indeed something to think about…

The day we finally grow up

The world is changing fast. Wave after wave of accelerating technological change is leaving society and governments struggling to adapt. Our past could never prepare us for the journey we are about to embark on and the truth is that from here on in we shoot without a script.

While we all long for a better tomorrow, very few of us have the courage to try to imagine what the future might actually look like. Bound by convention and by fear of ridicule, most of us dare not dream or speak about the deep future; instead we choose to focus on the short-term future, which is safe and generally agreeable.

Futurists everywhere, I applaud your courage. Even when you are wrong, you contribute more to the future of our species than your critics ever will.

Even though collectively we choose poverty of imagination as our default mode of thinking about the future, here we stand on the verge of profound societal changes that cannot be stopped and cannot be reasoned with. We are witnessing the dawn of an age of technological wonders – of technology so advanced that it is indistinguishable from magic.

Take a minute to admire the computer monitor on which you are reading these words. Maybe you are using a modern LCD flat panel, or maybe you are using an old CRT tube. Either way, old or new, appreciate its beautiful complexity, with millions of connected parts able to convert a symphony of electrons, bits and bytes into the perfectly woven tapestry of light required to carry my words to you.

Now consider for a moment the most complex devices we possessed a mere 200 years ago. How does your computer monitor measure up to them? Do you even know how your monitor really works? What about your computer? Your cell phone? Would you be able to design any of these devices from scratch? Do you know anyone who could?

We have come a long way in a very short time. Now try to imagine what miracles of science we will witness over the course of the next 200 years. No matter what you think you know about the future, I assure you that if we don’t destroy ourselves, the best is yet to come.

Like Martin Luther King, I too have a dream.

I dream of a world where people are once again thrilled about the future.

I dream that one day curing death, understanding the human brain and traveling to the stars will be seen as urgent challenges that must be conquered at all costs.

I dream that one day scientists will be considered celebrities and that each of us will be measured not by how much capital we have accumulated but by how much we have contributed to the future of our species.

I dream that one day all nations will unite in the war against ignorance and superstition, the true enemies of all sentient beings.

I dream of the day humanity finally grows up.

Conversations on strong AI – Part I

From: Rod (Me)
To: Quantum Lady
Subject: AGI

Yes I agree that there are many challenges ahead on the path to AGI. Right now, we should focus on acquiring a better understanding of how the brain works from an algorithmic perspective and try to derive a hypothesis of general intelligence from it. After all, the brain is the only implementation of a general intelligence “platform” currently known to us.

Our brains represent just one design out of a multitude of possible general intelligence implementations. However, I believe that the search space for viable AGI architectures is simply too large to be traversed by anything other than a super-civilization. Think about the staggering amount of computation mindlessly performed by evolution over millions of years to come up with the design we carry between our ears.

I think it must be clear to you by now that I sit in the bio-inspired AGI camp, and I definitely share your newfound fascination with the brain. Just recently, I started telling people I am a hobbyist neuroscientist.

Reactions are interesting, sometimes hilarious.

I see whole-brain emulation as the worst-case scenario or “plan B”. If everything else fails, we will achieve AGI once we become able to emulate a whole brain down to an arbitrary level of precision yet to be determined.

That raises the question – what would be the best-case scenario?

Ultimately, I believe there is a simple algorithm for general intelligence yet to be discovered: a small set of rules that give rise to ever growing complexity and intelligence after many generative iterations.

It is unquestionable that this elusive algorithm is engraved not only in the neuronal topology of the brain but also in the rules that govern how that topology changes over time. That is why any simulation of the brain must take plasticity and generative topology into consideration to be useful.

I also believe that only a very small subset of the human brain is actually responsible for general intelligence. In the best-case scenario, we will be able to identify the bare minimum amount of brain tissue necessary for general intelligence and derive powerful algorithmic insights from it. I am not talking about generating connectomes or maps, but about understanding how to replicate what the brain does – not the minutiae of how it does it.

Because truth be told: I don’t want an artificial brain, I want to automate work. I want to copy-and-paste scientists.

Igniting a Brain-Computer Interface Revolution

At MIT with Peter Diamandis, Bob Metcalfe and Luke Hutchinson

I have just returned from an X PRIZE Foundation workshop on brain-computer interfaces (BCI) at MIT. The workshop brought together over 50 leading experts, students and enthusiasts with the objective of brainstorming ideas for an X PRIZE competition to accelerate the development of BCI solutions. During the course of this fantastic two-day event we had the opportunity to explore the many possibilities and difficulties of designing and implementing devices capable of communicating directly with the human brain… read full article

Reality Check: Brain-Computer Interfaces

From Neuromancer to The Matrix and, most recently, Surrogates, Dollhouse and Avatar, brain-computer interfaces (BCI) have always been popular in science fiction. Frequently, depictions of this technology tend to put a greater emphasis on “fiction” than on “science” by perpetuating the fundamentally flawed metaphor of the human brain as a hardware and software composite.

Unfortunately, the human brain is the farthest thing from a von Neumann computer (a.k.a. a stored-program computer) we could possibly imagine. Natural processes lead to the emergence of a neuronal topology that then gives rise to complex human behavior. Your mind is not your brain’s software – because in reality there is no software at all. Information flows through the brain, and computation happens naturally due to the physical properties of the neuronal pathways.

The key concept I want you to embrace is that your mind is fully described by the physical configuration of your brain. To “edit” your mind – for example, to implant a memory or instantly learn a skill – it would be necessary to either physically rewire your neurons or have your brain significantly augmented to support on-demand topology modification.

Input/Output interfaces are the most feasible in the short term

Right now we are only able to communicate with the brain by stimulating neurons (input) and measuring specific properties of neurons (output). There are a lot of incredible things we can do using this approach; the key is to think in terms of what could be done with real-time input and output streams:

  • Give people senses they don’t have (vision to the blind, GPS to the willing);
  • Give people actuators they don’t have (arms to amputees, drive a car with your mind);
  • Read active thoughts and intentions, including memories a person is actively conjuring;
  • Give people artificial experiences using multi-sensorial stimulation;
  • External knowledge databases (Google in your head);
  • Ultimately, we could have an isolated brain with full-digital I/O, enabling, for example, full-prosthetic bodies and disembodied living;

I/O interfaces in science-fiction:

  • The Matrix: the Matrix simulated world;
  • Ghost in the Shell: full-prosthetic bodies, “the net”, external memories;
  • Avatar and Surrogates: remote control of a prosthetic body;

Read/Write interfaces are possible but they will probably require advanced brain augmentation

There are things, however, that we might never be able to do using I/O interfaces, because they require being able to read and modify the brain’s neuronal topology directly (read/write):

  • Read a memory, without the subject actively conjuring it;
  • Write a memory without generating an experience (“imprinting”);
  • Significantly faster-than-real-time learning or instant knowledge transfer;
  • “Editing” personality traits;

We currently lack significant understanding of how to build such an R/W interface to the brain. First, we would need significant advances in neuroscience in order to learn how to design useful neuronal pathways. Second, we would need a few fundamental breakthroughs in nanofabrication and nanorobotics to gain the ability to manipulate matter with the degree of accuracy needed to make useful (and desirable) changes to a living human brain.

R/W interfaces in science-fiction:

  • The Matrix: instant learning through downloads;
  • Ghost in the Shell: hacked memories, “puppet” agents;
  • Dollhouse: personality imprints, “tabula rasa” programming;

Talking to the brain and altering the brain are two fundamentally different tasks

Although limited, I/O interfaces are the easiest to build. Even though every bit of information that enters the brain indirectly leads to changes in neuronal topology, the minutiae and scope of these changes are not under our direct control. This means that there are fundamental limits on what we can do with I/O interfaces alone.

However, I/O brain-computer interfaces will significantly expand our mental landscape in the near term by adding new information streams to our conscious experience of the world. Yet, the dream of instant learning and mental imprints might never be achieved before we move on to considerably enhanced or artificial brains that provide easy R/W access to neuronal topology.

In other words, for the foreseeable future, you will not be downloading a kung-fu app into your brain. And when you are finally able to do so, you might not have what you currently call a brain anymore.

Forget the Turing test, passing the Tim Ferriss Test is what you should aim for


I see AGI as the ultimate force multiplier and as the final solution to the workforce problem. As such, I expect that at some point in the future I would be in control of an AGI system that could act as my online proxy agent, taking care of my interests, investments, relationships, etc.

The objective of such an AGI proxy agent (AGI-PA) would be to intelligently automate my life as much as possible and to eventually convince me that I am better off letting it handle most of my obligations for me.

Given enough time and feedback the AGI-PA should learn to think like me (to a degree) and start making decisions on my behalf. Its decisions would initially need to be audited but just as I have learned to trust my spam filter I should eventually learn to trust my AGI-PA’s judgement.

The process of training a new AGI-PA should be similar to the process of training an off-shore virtual assistant (VA) hired from any of the currently popular outsourcing services (oDesk, GetFriday, etc.).

If the AGI-PA is able to (by any means) reduce my workload, I would consider it successful by a factor that reflects how much less work I had to do on average compared with my workload before commissioning the system. Naturally, hours spent teaching and managing the AGI-PA would count as work hours.

The Tim Ferriss Test for Artificial Intelligence

I have named this test after Tim Ferriss, the author of the best-seller “The 4-Hour Workweek” and a vocal advocate of outsourcing your life to off-shore workers. The test consists of having a human judge distribute several (lawful) tasks to two remote assistants over e-mail, one being an experienced human VA and the other being a machine. If the judge isn’t able to tell which assistant is the machine solely by observing the resulting work, the machine is deemed to have passed the test.

I just can’t wait to be able to copy & paste employees…

In Defense of Efficient Computing

We all know that the cost of computing is on a definitive downtrend, and while this is a great thing, I worry that it is also steering developers toward becoming less proficient at writing efficient (and reliable) code.

If most problems can be solved by throwing money and hardware at subpar, wasteful code, what is the incentive to write thoughtful, efficient programs?

The answer is RELIABILITY (and possibly to save your planet)

While the cost of computing is going down, the adoption of dynamic languages is going up, and that worries me. Perhaps I am just a backwards man who hasn’t seen the light yet, but I urge you to bear with me as I present my argument against the use of dynamic languages in a production environment.

Why do I claim that dynamic languages are wasteful?

If you appreciate theater, it is likely that you also appreciate that the actors have to work hard at every performance. It is a beautiful thing, but it is also a lot of repetitive work – and that is exactly the problem with dynamic languages in general.

How so? Every execution (performance) requires a lot of work from the language runtime (showtime) environment. Most of the time this is repetitive work, mindlessly repeated every time you run your code.

Contrast this with cinema (compiled code) – there is no showtime hustle; the movie is already shot, cut, edited and packaged for delivery. No energy is wasted.

You might ask how significant this waste really is, and the truth is that most of the time it is negligible. But like everything in life, the devil is in the details.

Hypothetically, let us say that at each execution a dynamic program wastes a mere 10,000 CPU cycles doing “preventable runtime busy-work”. That figure is utterly insignificant if you are looking only at the trees, but as you step back to look at the forest, your perspective might change.

Imagine this same piece of code running on a few hundred-thousand computers around the world and being executed a few thousand times a day on each. Now multiply this by each program of each dynamic language in existence and you might conclude that dynamic languages are not very good for our planet because computing demands energy and energy demands natural resources.
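To make the back-of-the-envelope arithmetic explicit – every figure below is an invented assumption, not a measurement – here is the waste attributable to one program under the numbers above:

```python
# Invented assumptions: 10,000 wasted cycles per run, 200,000 machines,
# 2,000 runs per machine per day, all for a SINGLE program.
wasted_cycles_per_run = 10_000
machines = 200_000
runs_per_machine_per_day = 2_000

cycles_per_day = wasted_cycles_per_run * machines * runs_per_machine_per_day
cpu_seconds_per_day = cycles_per_day / 2e9   # assuming a 2 GHz core
cpu_hours_per_year = cpu_seconds_per_day * 365 / 3600

# Roughly 2,000 CPU-seconds a day, or about 200 CPU-hours a year, wasted
# by one program - before multiplying across every dynamic program in use.
```

Per program the figure is modest; the argument only bites once you multiply it by every dynamic program in existence, which is exactly the forest-versus-trees point above.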

So, if you believe in green computing through algorithmic efficiency you already have a good case against the use of dynamic languages.

“But dynamic languages are more productive so the energy wasted on one end is saved on another!”

Some might argue that dynamic languages are able to offset their “global footprint” by being more productive. Unfortunately, I beg to differ.

Unless you are simply writing throwaway (or generally short-lived) code, the odds are that whatever you are building will remain in play for years to come. This brings two new variables into the equation: maintenance costs and failure costs. I am going to argue that compiled languages do a better job of minimizing those costs in the long run, owing to the increased reliability they offer.

First, we need to define what “productive” means in a quantifiable way because there is no room for fuzzy subjective opinions in computer science.

I believe that the Total Cost of Ownership (TCO) of a software solution is the ideal metric of productivity. A language that can deliver a piece of software with a lower lifetime TCO should be considered more productive simply because more was produced for less.

TCO encompasses everything from time-to-market risk, development costs, maintenance costs, failure-rate costs as well as any other overhead incurred.

My argument is that while dynamic languages may accelerate the initial delivery of a software solution, they do so at the cost of reliability and maintainability. This is detrimental to the TCO bottom line because in practical terms (at least for large, long-lived projects) it translates into decreased productivity.

“But compiled languages are too hard!”


The place for dynamic languages

With all that being said, I must disclose that I often use dynamic languages in my research when it makes sense, mostly for throwaway code or prototyping. In fact, one of my favorite tools (MATLAB) is centered around a particularly terrible dynamic language. What I am against is using dynamic languages for the development of code intended for production environments. That is all.

Conclusion

If you love our planet and you love computer science, I urge you not to take the easy way out: embrace the beautiful complexity of machine-efficient code and write greener programs by using a compiled language. It can only make you smarter in the long run :)

Pragmatic Automated Trading – Part 2

The Scientific-Minimalist-Economic (SME) approach to Automated Trading

The SME method is my personal approach to the development of automated trading agents. You can save a lot of time and money by adhering to these three very simple principles.

This article is part of a series dedicated to explaining a no-nonsense approach to the development of automated trading agents. This series is aimed at people with a computer science background, with or without trading experience.

Principle #1 – BE SCIENTIFIC


The Scientific component of the SME method pertains to the use of the Scientific Method to create and refine trading strategies.

Scientific method refers to bodies of techniques for investigating phenomena, acquiring new knowledge, or correcting and integrating previous knowledge. To be termed scientific, a method of inquiry must be based on gathering observable, empirical and measurable evidence subject to specific principles of reasoning.[1] A scientific method consists of the collection of data through observation and experimentation, and the formulation and testing of hypotheses.[2]

Although procedures vary from one field of inquiry to another, identifiable features distinguish scientific inquiry from other methodologies of knowledge. Scientific researchers propose hypotheses as explanations of phenomena, and design experimental studies to test these hypotheses. These steps must be repeatable in order to dependably predict any future results. Theories that encompass wider domains of inquiry may bind many hypotheses together in a coherent structure. This in turn may help form new hypotheses or place groups of hypotheses into context.

Among other facets shared by the various fields of inquiry is the conviction that the process be objective to reduce a biased interpretation of the results. Another basic expectation is to document, archive and share all data and methodology so they are available for careful scrutiny by other scientists, thereby allowing other researchers the opportunity to verify results by attempting to reproduce them. This practice, called full disclosure, also allows statistical measures of the reliability of these data to be established.

Source: Wikipedia


How does it apply to automated trading:

Trading books, magazines and circles are heavy on pseudo-knowledge derived from subjective experience and anecdotal evidence. In fact, there are whole books out there authored by so-called ‘trading gurus’ that do not contain a single profitable trading system. You should never trust any trading advice to be true before you test it yourself.

The Hypothesis

To beat the market you will need to develop a profound understanding of how it works. When confronted with a potentially profitable trading idea the first thing you need to do is to develop a hypothesis of why you think it should work. This hypothesis will either be confirmed or falsified by the following steps of experimentation and analysis.

The Experiment

Once you have developed a hypothesis of why a given trading idea should be profitable, it is time to undertake an experiment called backtesting, which establishes whether the strategy would have been profitable had you traded it over a given period in the past.

While a successful backtest is not enough to assert that a given hypothesis is true, an unprofitable backtest will in most cases be enough to dismiss it as a valid component of your automated trading system.
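As a concrete (and deliberately naive) illustration of the experiment step, here is a minimal Python backtest sketch. The moving-average rule, the one-period holding assumption, and the price series are all mine, purely for demonstration; they are not a recommended strategy:

```python
def backtest(prices, window=3):
    """Toy backtest: go long one unit for one period whenever the
    price closes above its trailing moving average."""
    pnl = 0.0
    for i in range(window, len(prices) - 1):
        ma = sum(prices[i - window:i]) / window
        if prices[i] > ma:                    # signal: price above MA
            pnl += prices[i + 1] - prices[i]  # hold for one period
    return pnl

# Made-up price history for illustration only.
prices = [100, 101, 102, 101, 103, 104, 102, 105]
print(backtest(prices))  # -1.0: this hypothesis loses money here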

The Analysis

With the backtest results at hand, you can begin to draw conclusions about the hypothesis you are testing. Assuming you have verified that the backtest results are indeed correct, your major concern should be to establish whether the trading strategy is likely to be profitable in the near-term future.

Later in this series I will discuss several statistical techniques you can use to rule out most cases where an unprofitable trading idea generates a profitable backtest.

Principle #2 – BE MINIMALIST

The Minimalist component pertains to applying Occam’s razor to every single decision you make:

It is frivolous to do with more what can be done with less.


Why should you keep your trading platform code down to a minimum:

Every single line of platform code you write will have to be debugged and maintained for years to come. That is an overhead cost you should not ignore. You should spend your time beating the market – not debugging platform code.

Why should you keep your trading system rules as simple as possible:

Backtesting will not make you rich, and every time you add a rule to a trading model because it translates into a more profitable backtest result, you are very likely doing so at the expense of future trading performance.

Later in this series I will discuss how you can assess if adding an additional rule to your trading agent would be beneficial or detrimental to its future performance.

Principle #3 – BE ECONOMIC

We are in this business to make money, not to spend it. I consider it of paramount importance to keep costs down at all times.

Questions to ask yourself before committing money to anything other than a trade:

  • Do I know as a fact that I absolutely need this product/service?
  • Is this the right product/service for me?
  • Can I postpone this commitment a little longer?
  • Does this product/service offer the best long term value?
  • Will this acquisition make me dependent on this vendor?

Real life examples:

  • Don’t buy anything because you ‘might need it’ soon. Plans change but expensive hardware doesn’t.
  • You will live and die by the quality of your data feed, but be aware that an expensive service does not necessarily translate into a quality service.
  • Invest time in finding a broker with the best reliability-to-cost ratio. It will pay off in the long run.

Next on Part 3:
Automated Trading Platform – Build or Buy?

Pragmatic Automated Trading – Part 1

This article is part of a series dedicated to explaining a no-nonsense approach to the development of automated trading agents. This series is aimed towards people with a computer science background with or without trading experience.

The development of automated trading agents is one of the most challenging and rewarding applications of artificial intelligence to date. It is rewarding because success generally translates into an account padded with profits, and it is challenging because it pits your creations against the combined intelligence of millions of other agents around the globe.

I will be honest with you: the road to profits is full of misleading signs and dangerous detours. Many of you will fail, not because you are not smart or talented enough, but because you may decide that the personal cost of such a venture is just too high for you.

The market is a formidable foe and to beat it you will have to immerse yourself into it and crack the code from the inside. To create truly intelligent agents you will have to learn many things that will change the way you perceive the world around you.

If you are treading down this path just for the money, I can save you a lot of trouble by telling you to spend your time on something else. There are easier ways to make money that do not require the Herculean amount of work and research necessary to consistently beat the markets.

What do you need to get started:

  • A good computer. Something reliable and reasonably fast. You might need more processing power later on but you should not worry about it right now. Do not spend money on a computer just to get started – it is not worth it.
  • A broker that provides API access to market data and execution. Ideally the broker should also provide a live “paper trading” environment that you could place trades against to test your execution code. I recommend Interactive Brokers as a good starter brokerage firm.
  • Solid programming skills with the language you decide to use. This is not optional and it isn’t the kind of thing you can “learn as you go”. If you don’t have prior programming experience I recommend that you dedicate enough time to learn as much as you can before you get started. It isn’t enough to be able to write code that works – you need to be able to write efficient code that is both maintainable and reliable.
  • A passion for math – or at least the ability to force yourself to like it. There is no way around it, as math will be the main weapon in your arsenal. Don’t think you can make do by simply using formulas you don’t quite understand – you would not stand a chance. It is crucial that you master all the mathematical concepts you decide to use in your trading systems.
  • An unshakeable and obsessive desire to succeed.

Next on Part 2:
The Scientific-Minimalist-Economic (SME) approach to Automated Trading

On Free Will

I have an issue with the concept of free will, because for free will to exist, information would have to be created out of nothing. My position might be counterintuitive at first, but think about it:

Scenario #1: There is no free will and information cannot be created, just derived from previously existing knowledge. We are born with biological biases and within an environment that is not under our control. The environment will dictate what experiences we will have and what knowledge we will acquire while our biological biases will determine how we will derive new information from it. Information is derived and then re-derived and through recursion, we begin to exhibit complex emergent behavior that appears to be unpredictable.

Scenario #2: There is free will, and ‘somehow’ we make information available to our mental faculties that is not sourced from pre-existing knowledge. The world stops making sense, and we begin creating incredible explanations for the phenomenon – i.e., soul, spirit, fairies.

Do I even need to bring up Occam’s razor on this one?

On the Acquisition and Culling of Beliefs

Last night I spent some time thinking about what exactly makes the belief systems of religion and science so different. If we take this discussion to a higher level of abstraction we would be forced to agree that both religion and science represent incomplete belief systems that rely on certain dogmas to assert their validity.

My definition of incomplete in this case pertains to the existence of cornerstone dogmas that must not be challenged for the belief system to remain valid. In contrast, I would call a belief system “complete” when such dogmas are not necessary. It goes without saying that such a “complete” belief system can only ever exist as a theoretical construct for use in thought experiments – you will not find it in the real world.

Following this train of thought, I was able to isolate a single yet crucial difference that makes the scientific belief system superior to the religious belief system.

Both science and religion have rules in place to acquire new beliefs. 

Science uses empirical observation and experiments, while each religion has its own methods – new saints, the teachings of a reincarnated master, and so on. The critical difference isn’t necessarily the way these systems acquire knowledge, but a crucial device that is present in the scientific method yet lacking in most religions as far as I know.

Any information system that is meant to manage knowledge with the intent of learning the truth about a topic must have two basic kinds of rules to be successful. The first rule set is for knowledge acquisition, which is present in both religion and science. The other rule set is for knowledge culling, and is used to remove unfit beliefs from the system’s pool of knowledge.

Most religions (if not all) lack formal rules for the culling of unfit beliefs.

Without a rule set to remove beliefs from the pool of accepted knowledge, any belief system will eventually evolve into a bloated mess of contradictory teachings, rules, and expectations. What exacerbates the problem for most religions is that culling a once-divine instruction from the belief system would generally imply that god (or any given deity) has a flawed reign over the flow of knowledge passed on to followers – something I believe most religious adepts would not be willing to accept.

In summary: both religion and science are belief systems that rely on dogmas to assert themselves. While both systems have rules to acquire new knowledge, most (if not all) religions lack generally accepted rules to invalidate previously divine rules that were found to be wrong. That leads to desperate efforts to make them right with disregard for the truth itself – e.g., dinosaurs on Noah’s ark.

Code as a Second Language

I was 8 years old during the summer of 1987. That was when I came across a personal computer for the first time. I had just arrived at my grandparents’ house in the countryside of Sao Paulo state in Brazil and my aunt had just bought a computer. I am not sure why but I was immediately interested.

"16K-BYTE RAM pack for massive add-on memory"

My aunt had just bought a second-hand Sinclair ZX-81, and she was kind enough to entertain my interest in it. The ZX-81 was a small machine, resembling an accounting calculator more than anything you would call a computer. When she turned it on, I must admit that I was rather disappointed, because all I saw on the screen was a white background with a small black square in the bottom left corner with an uppercase “K” inside it.

ZX81 Screenshot

That is what you would see after turning on a ZX81. Not exactly riveting.

Then I asked: “So, what can it do?”. Little did I know that her answer would literally change my world. “If you write the right code, a computer can do anything you want.” – she replied.

I stood in silence for a moment as her words echoed inside my brain. Imagine that: a machine that can do “anything you want” as long as you write the “right code”. It became immediately clear to me that I had to learn how to use it.

Unfortunately she was too busy to teach me, but was kind enough to let me “play” with her computer. As she later confessed, she didn’t take me seriously at first; in fact, no one in my family did. It didn’t matter, though, for luckily I was already an avid reader and I had plenty of time on my hands.

By the end of that summer I had developed a solid intuitive understanding of the logical underpinnings shared by most programming languages even today. I felt as if I had just seized the power of creation itself. Within the boundaries of the Zilog Z80 microprocessor I experienced absolute control. It was a powerful, life-changing experience for a kid.

Back then, numbered lines were all the rage

However, the end of the summer posed a challenge because I didn’t own a computer. To make matters worse, in the context of the Brazilian economy in the late 80s, computers were without a question too expensive to buy as a “toy” for an 8 year old.

My solution was to do what kids do best: use my imagination. Sure, I didn’t have a computer, but that would not stop me. I could still write code, and then I would execute it in my head, line by line, like a human computer.

ZX81 machine code

Not my actual notes, but you get the idea...

I was nine by the time my father was finally able to buy me a computer. By then I had already become quite good at running code “in my head”. Using my supercharged imagination, I was even able to play games of my own design by simulating the code execution in my mind, keeping track of all variables and having fun doing it.

Lamentably, my first computer was defective and unable to load programs from cassette tapes. Any code I wrote for it would be lost forever every time the computer was turned off, and I would never be able to run any commercial software. If I wanted my computer to do anything, I would always have to write the code myself from scratch.

This didn’t bother me: defective or not, I finally had a computer to work with, and that was definitely progress. The next issue at hand was that most of the ZX81 literature available was not in Portuguese, my native language. I had to spend countless hours translating books written in English with a dictionary by my side. Painstakingly, word by word, I deciphered the few imported books I could get my hands on and used the acquired knowledge to write code of ever-growing complexity.

Machine code for beginners

Recursive autodidacticism: I had to teach myself English before I could teach myself Z80 machine code

Even though it wasn’t long before I outgrew the ZX81, it gave me something that will always be part of me. Thanks to my early exposure to computer logic, I became able to visualize code not as static commands in a text editor, but in a much more fluid manner. The best way I can describe it is as a form of acquired synesthesia: to me, computer code has always had a kinetic-spatial dimension to it, with different logical constructs possessing specific “geometries” associated with them.

Relentless progress

Progress is relentless...

I entered the workforce early: when I was 13, I became an assistant instructor at a local computer school. I founded my first company, a dial-up BBS, when I was 15. School was essentially a bore. Unlike most people, I don’t need structure – I need concrete problems to solve. I was constantly frustrated by how disconnected from reality most classes felt. Not soon enough for me, high school was eventually over and I decided to quit college to focus on my career. By the time I turned 21, I had landed my first executive position at the software development arm of a large utilities conglomerate that employed over 400 developers.

I always wonder how different my life would have been if I had decided to finish college instead of pursuing a career. I never felt as if I had a choice in the matter; I simply followed my heart. Luckily, the outcome of my decision was positive, and by the time I was supposed to be graduating from college, I was already managing multi-million-dollar projects and flying around the country in the company’s private jet.

Now I finally understand that my passion will always be my most important credential. Since I was a little boy, I have been genuinely in love with the power and infinite potential of technology. I believe I was born to build wondrous things that inspire and drive change, and I will work toward that until my very last breath.

Life is an all-you-can-eat buffet of knowledge.

The ZX81 at the Computer History Museum

The ZX81 at the Computer History Museum