Reality Check: Brain-Computer Interfaces

From Neuromancer to The Matrix and, most recently, Surrogates, Dollhouse and Avatar, brain-computer interfaces (BCIs) have always been popular in science fiction. Frequently, depictions of this technology put greater emphasis on “fiction” than on “science” by perpetuating the fundamentally flawed metaphor of the human brain as a composite of hardware and software.

Unfortunately, the human brain is the farthest thing from a von Neumann computer (a.k.a. a stored-program computer) we could possibly imagine. Natural processes lead to the emergence of a neuronal topology that then gives rise to complex human behavior. Your mind is not your brain’s software, because in reality there is no software at all: information flows through the brain, and computation happens naturally due to the physical properties of the neuronal pathways.

The key concept I want you to embrace is that your mind is fully described by the physical configuration of your brain. To “edit” your mind – for example, to implant a memory or instantly learn a skill – it would be necessary to either physically rewire your neurons or have your brain significantly augmented to support on-demand topology modification.

Input/Output interfaces are the most feasible in the short term

Right now we are only able to communicate with the brain by stimulating neurons (input) and measuring specific properties of neurons (output). There are a lot of incredible things we can do using this approach; the key is to think in terms of what could be done with real-time input and output streams (a purely illustrative sketch follows the list):

  • Give people senses they don’t have (vision to the blind, GPS to the willing);
  • Give people actuators they don’t have (arms to amputees, drive a car with your mind);
  • Read active thoughts and intentions, including memories a person is actively conjuring;
  • Give people artificial experiences using multi-sensorial stimulation;
  • External knowledge databases (Google in your head);
  • Ultimately, we could have an isolated brain with fully digital I/O, enabling, for example, full prosthetic bodies and disembodied living.
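
To make “thinking in streams” concrete, here is a purely illustrative Python sketch of the “GPS sense” idea. No such consumer hardware or API exists today; FakeGPS and FakeStimulator are hypothetical stand-ins:

    import time

    class FakeGPS:
        """Stand-in for a real GPS receiver (hypothetical hardware)."""
        def read_heading(self) -> float:
            return (time.time() * 10.0) % 360.0  # fake, slowly rotating heading

    class FakeStimulator:
        """Stand-in for a neural stimulator driving an 8-electrode array."""
        def write_pattern(self, intensities: list[float]) -> None:
            print("".join("#" if i > 0.5 else "." for i in intensities))

    def encode(heading: float) -> list[float]:
        """Map a compass heading onto 8 stimulation intensities (a 'bump' code)."""
        return [max(0.0, 1.0 - abs(heading - e * 45.0) / 45.0) for e in range(8)]

    def gps_sense(gps: FakeGPS, stim: FakeStimulator, ticks: int = 100) -> None:
        """Input interface: keep translating an external data stream into
        patterned stimulation the brain could learn to read as a new sense."""
        for _ in range(ticks):
            stim.write_pattern(encode(gps.read_heading()))
            time.sleep(0.05)  # ~20 Hz refresh

    gps_sense(FakeGPS(), FakeStimulator())

The output direction – decoding intentions from measurements of neural activity – would be the mirror image of this loop.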

I/O interfaces in science-fiction:

  • The Matrix: the Matrix simulated world;
  • Ghost in the Shell: full-prosthetic bodies, “the net”, external memories;
  • Avatar and Surrogates: remote control of a prosthetic body;

Read/Write interfaces are possible but they will probably require advanced brain augmentation

There are things, however, that we might never be able to do using I/O interfaces, because they require reading and modifying the brain’s neuronal topology directly (read/write):

  • Read a memory, without the subject actively conjuring it;
  • Write a memory without generating an experience (“imprinting”);
  • Significantly faster-than-real-time learning or instant knowledge transfer;
  • “Editing” personality traits;

We currently lack any significant understanding of how to address the challenge of building such an R/W interface to the brain. First, we would need significant advances in neuroscience in order to learn how to design useful neuronal pathways. Second, we would need a few fundamental breakthroughs in nanofabrication and nanorobotics to gain the ability to manipulate matter with the degree of accuracy needed to make useful (and desirable) changes to a living human brain.

R/W interfaces in science-fiction:

  • The Matrix: instant learning through downloads;
  • Ghost in the Shell: hacked memories, “puppet” agents;
  • Dollhouse: personality imprints, “tabula rasa” programming;

Talking to the brain and altering the brain are two fundamentally different tasks

Although limited, I/O interfaces are the easiest to build. Even though every bit of information that enters the brain indirectly leads to changes in neuronal topology, the minutiae and scope of these changes are not under our direct control. This means that there are fundamental limits to what we can do with I/O interfaces alone.

However, I/O brain-computer interfaces will significantly expand our mental landscape in the near term by adding new information streams to our conscious experience of the world. Yet, the dream of instant learning and mental imprints might never be achieved before we move on to considerably enhanced or artificial brains that provide easy R/W access to neuronal topology.

In other words, for the foreseeable future, you will not be downloading a kung-fu app into your brain. And when you are finally able to do so, you might not have what you currently call a brain anymore.

Welcome to the Man-Machine University

I was just featured in an article published by the Estado de Sao Paulo, one of Brazil’s largest newspapers:

Welcome to the Man-Machine University

(Translated by Amazon Mechanical Turk)

He taught himself to write computer programs when he was 9 years old. At 10, he devoured books in English on the subject, using a dictionary to translate them word by word. At age 15, he founded his first company, an online bulletin board system, a precursor service to the Internet. At 22, then a director of a large technology company, he left everything behind to live abroad and “conquer the world.”

The résumé of Rod Furlan, 30, impressed the directors of one of the boldest educational institutions in the world: Singularity University (SU) in California.

Nothing about the institution is conventional, starting with the name, inspired by the book The Singularity Is Near by futurist and SU founder Dr. Ray Kurzweil. It is also known as the “Google University” because the Internet giant is one of its founders and supporters, and it is located within the NASA Ames Research Center in Silicon Valley.

“We seek enterprising people, willing to face great challenges,” says the executive director of SU, Salim Ismail, who was in Sao Paulo this month to establish a partnership with the Faculty of Information Technology (Fiap). After the program, students must submit a proposal that can positively impact the lives of at least 1 billion people in the following decade.

Participating in this dream-team university is not easy. The applicant must be an expert in matters such as networks and computer systems, biotechnology and nanotechnology, medicine and neuroscience, robotics and artificial intelligence, public policy, law or finance. Last year, 1,200 candidates competed for 40 seats; this year, 1,600 are competing for 80.

“It was the best time of my life,” said Furlan. According to the Brazilian student, he alternated days of talks by senior executives from companies like Google itself with yoga classes and site visits. And at night, the participants met at the NASA lodge to discuss for hours everything they had learned about the future. “SU is also known as Sleepless University, because students do not sleep,” jokes Ismail.

Greetings From Future Camp


Popular Science has just published a cool article about our summer at Singularity University. Late but great!

“According to Ray Kurzweil, the Singularity is a point at which man will become one with machine and then live eternally—which makes Singularity University, a nine-week academic retreat named for the concept, sound a little cultish. Our writer traveled west to investigate and found 40 stunningly sane brainiacs out to change the world.” – Popular Science [read full article]

Forget the Turing test: passing the Tim Ferriss Test is what you should aim for


I see AGI as the ultimate force multiplier and as the final solution to the workforce problem. As such, I expect that at some point in the future I will be in control of an AGI system that could act as my online proxy agent, taking care of my interests, investments, relationships, etc.

The objective of such an AGI proxy agent (AGI-PA) would be to intelligently automate my life as much as possible and to eventually convince me that I am better off letting it handle most of my obligations for me.

Given enough time and feedback, the AGI-PA should learn to think like me (to a degree) and start making decisions on my behalf. Its decisions would initially need to be audited, but just as I have learned to trust my spam filter, I should eventually learn to trust my AGI-PA’s judgement.

The process of training a new AGI-PA should be similar to the process of training an off-shore virtual assistant (VA) hired from any of the currently popular outsourcing services (oDesk, GetFriday, etc).

If the AGI-PA is able to (by any means) reduce my workload, I would consider it successful by a factor that reflects how much less work I had to do on average compared to my workload before commissioning the system. Naturally, hours spent teaching and managing the AGI-PA would count as work hours.
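
To make that success factor concrete, here is one way it could be quantified – a toy formula of my own, not a standard metric:

    def agi_pa_success_factor(hours_before: float,
                              hours_after: float,
                              hours_managing: float) -> float:
        """Fraction of the original workload the AGI-PA eliminated.
        Hours spent teaching and managing the agent count as work."""
        return 1.0 - (hours_after + hours_managing) / hours_before

    # Example: 50 h/week before; afterwards, 20 h of remaining work plus
    # 5 h supervising the agent -> 0.5, i.e. the workload was halved.
    print(agi_pa_success_factor(50, 20, 5))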

The Tim Ferriss Test for Artificial Intelligence

I have named this test after Tim Ferriss, the author of the best seller “The 4-Hour Workweek” and a vocal advocate of outsourcing your life to off-shore workers. The test consists of having a human judge distribute several (lawful) tasks to two remote assistants over e-mail, one being an experienced human VA and the other being a machine. If the judge isn’t able to tell which assistant is the machine solely by observing the resulting work, the machine is deemed to have passed the test.
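
A minimal sketch of that judging protocol, where judge_guess, human_va and machine_va are hypothetical callables standing in for the three participants:

    import random

    def run_trial(judge_guess, human_va, machine_va, task) -> bool:
        """One blinded trial: the judge receives two anonymous work products
        and must guess which one the machine produced."""
        assistants = [("human", human_va), ("machine", machine_va)]
        random.shuffle(assistants)  # blind the judge to the assignment
        work = [(label, va(task)) for label, va in assistants]
        guess = judge_guess([result for _, result in work])  # returns 0 or 1
        return work[guess][0] == "machine"

    def passes_tim_ferriss_test(judge_guess, human_va, machine_va, tasks) -> bool:
        """Pass when the judge cannot beat chance. A real evaluation would use
        a significance test rather than this hard 50% threshold."""
        hits = sum(run_trial(judge_guess, human_va, machine_va, t) for t in tasks)
        return hits / len(tasks) <= 0.5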

I just can’t wait to be able to copy & paste employees…

Supercomputing the brain’s secrets

“Henry Markram says the mysteries of the mind can be solved — soon. Mental illness, memory, perception: they’re made of neurons and electric signals, and he plans to find them with a supercomputer that models all the brain’s 100,000,000,000,000 synapses.”

Towards a silicon brain

“Researcher Kwabena Boahen is looking for ways to mimic the brain’s supercomputing powers in silicon — because the messy, redundant processes inside our heads actually make for a small, light, superfast computer.”

Brains in Silicon lab @ Stanford

How to give a Nano-Talk


I find it hard to imagine anything more disruptive to capitalism as we know it than nanotechnology. This summer at Singularity University we had the pleasure of meeting Ralph Merkle, who taught us how to give a “nano-talk” in order to explain the benefits of nanotech to anyone:

The field of [field] is critically dependent on [product].

[Product] are made from atoms. Nanotechnology will let us make [product] that are lighter, stronger, smarter, cheaper, cleaner and just better.

This will have a huge impact on [field], for example, we could even have [product] that are [astonishing parameter] and cost only [remarkably cheap]!

Here is an example to drive the point home:

The field of [bicycling] is critically dependent on [bicycles].

[Bicycles] are made from atoms. Nanotechnology will let us make [bicycles] that are lighter, stronger, smarter, cheaper, cleaner and just better.

This will have a huge impact on [bicycling], for example, we could even have [bicycles] that are [just half a pound] and cost only [a dollar]!
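
The template is mechanical enough that, just for fun, it can be automated. A throwaway sketch:

    def nano_talk(field: str, product: str, parameter: str, price: str) -> str:
        """Fill in Ralph Merkle's nano-talk template."""
        return (
            f"The field of {field} is critically dependent on {product}. "
            f"{product.capitalize()} are made from atoms. Nanotechnology will "
            f"let us make {product} that are lighter, stronger, smarter, "
            f"cheaper, cleaner and just better. This will have a huge impact "
            f"on {field}; for example, we could even have {product} that are "
            f"{parameter} and cost only {price}!"
        )

    print(nano_talk("bicycling", "bicycles", "just half a pound", "a dollar"))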

Singularity University

This is going to be a very special summer. I am one of the 40 candidates admitted to the inaugural class of the Singularity University at the NASA Ames Research Center starting today.

Singularity University (SU) is a joint effort of NASA, Google, and some of the foremost authorities in science and technology. Its objective is to expose a group of promising graduate students and professionals to a broad range of cutting-edge research that is likely to lead to disruptive technological innovation in the near future.

In their own words:

Singularity University aims to assemble, educate and inspire a cadre of leaders who strive to understand and facilitate the development of exponentially advancing technologies and apply, focus and guide these tools to address humanity’s grand challenges.

In Defense of Efficient Computing

We all know that the cost of computing is on a definitive downtrend, and while this is a great thing, I worry that it is also steering developers to become less proficient at writing efficient (and reliable) code.

If most problems can be solved by throwing money and hardware at subpar, wasteful code, what is the incentive to write thoughtful, efficient programs?

The answer is RELIABILITY (and possibly to save your planet)

While the cost of computing is going down, the adoption of dynamic languages is going up, and that worries me. Perhaps I am just a backwards man who hasn’t seen the light yet, but I urge you to bear with me as I present my argument against the use of dynamic languages in a production environment.

Why do I claim that dynamic languages are wasteful?

If you appreciate theater, it is likely that you also appreciate that the actors have to work hard at every performance. It is a beautiful thing, but it is also a lot of repetitive work, and that is exactly the problem with dynamic languages in general.

How so? Every execution (performance) requires that a lot of work be done by the language runtime (showtime) environment. Most of the time this is repetitive work, mindlessly repeated every time you run your code.

Contrast this with cinema (compiled code): there is no showtime hustle; the movie is already shot, cut, edited and packaged for delivery. No energy is wasted.

You might ask how significant this waste really is, and the truth is that most of the time it is negligible. But like everything in life, the devil is in the details.

Hypothetically, let us say that at each execution a dynamic program wastes a mere 10,000 CPU cycles doing “preventable runtime busy-work”. That figure is utterly insignificant if you are looking at the trees only, but as you step back to look at the forest, your perspective might change.

Imagine this same piece of code running on a few hundred-thousand computers around the world and being executed a few thousand times a day on each. Now multiply this by every program of every dynamic language in existence, and you might conclude that dynamic languages are not very good for our planet, because computing demands energy and energy demands natural resources.
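
Making the back-of-the-envelope multiplication explicit (all figures are the hypothetical ones above, plus an assumed 3 GHz core for scale):

    wasted_per_run = 10_000       # hypothetical cycles of runtime busy-work
    machines = 300_000            # "a few hundred-thousand computers"
    runs_per_day = 3_000          # "a few thousand times a day" per machine

    cycles_per_day = wasted_per_run * machines * runs_per_day
    core_seconds = cycles_per_day / 3e9
    print(f"{cycles_per_day:.1e} cycles/day = {core_seconds / 60:.0f} core-minutes/day")
    # ~9.0e+12 cycles, roughly 50 core-minutes per day for ONE program --
    # then multiply by every program of every dynamic language in existence.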

So, if you believe in green computing through algorithmic efficiency you already have a good case against the use of dynamic languages.

“But dynamic languages are more productive so the energy wasted on one end is saved on another!”

Some might argue that dynamic languages are able to offset their “global footprint” by being more productive. Unfortunately, I beg to differ.

Unless you are simply writing throwaway (or generally short-lived) code, the odds are that whatever you are building will remain in play for years to come. This brings two new variables into the equation: maintenance costs and failure costs. I am going to argue that compiled languages do a better job of minimizing those costs in the long run due to the increased reliability they offer.

First, we need to define what “productive” means in a quantifiable way because there is no room for fuzzy subjective opinions in computer science.

I believe that the Total Cost of Ownership (TCO) of a software solution is the ideal metric of productivity. A language that can deliver a piece of software with a lower lifetime TCO should be considered more productive simply because more was produced for less.

TCO encompasses everything from time-to-market risk, development costs, maintenance costs, failure-rate costs as well as any other overhead incurred.

My argument is that while dynamic languages may accelerate the initial delivery of a software solution, they do so at the cost of reliability and maintainability. This is detrimental to the TCO bottom line because, in practical terms (at least for large, long-lived projects), it translates into decreased productivity.
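
As a toy illustration of that trade-off, with every number invented purely for the example:

    def tco(dev_cost: float, annual_maintenance: float,
            annual_failure_cost: float, years: int) -> float:
        """Lifetime TCO under this simplistic model: build once, then pay
        maintenance and failure costs every year the system stays in play."""
        return dev_cost + years * (annual_maintenance + annual_failure_cost)

    # Hypothetical: the dynamic-language version ships cheaper but costs more
    # to maintain and fails more often over a 5-year lifetime.
    print(tco(50_000, 30_000, 15_000, 5))  # 275000 (dynamic)
    print(tco(80_000, 20_000, 5_000, 5))   # 205000 (compiled)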

“But compiled languages are too hard!”

The place for dynamic languages

With all that being said, I must disclose that I often use dynamic languages in my research when it makes sense, mostly for throwaway code or prototyping. In fact, one of my favorite tools (MATLAB) is centered around a particularly terrible dynamic language. What I am against is using dynamic languages for the development of code intended for production environments. That is all.

Conclusion

If you love our planet and you love computer science, I urge you not to take the easy way out: embrace the beautiful complexity of machine-efficient code and write greener programs by using a compiled language. It can only make you smarter in the long run :)

Profit is a side-effect of a job well done

I am 29 and I am a compulsive builder. It isn’t that I want to build things; I simply have to, because otherwise they consume me. Considering that I can envision way more than I can actually work on, I find myself always having to give up ideas for the sake of completing projects already underway.

Once I am done, I don’t feel like I have any time to spare to enjoy success. Off I am to the next project.

To be honest, I feel like the journey is more important than the destination. Profits are nothing but a side-effect of a job well done.

My .02 :)

Originally posted as a comment by rfurlan on Howard Lindzon using Disqus.

Introducing TwitZap


My latest project just went live. If you love Twitter as much as I do, I am sure you will love TwitZap too:

“TwitZap is a new way to use Twitter. It lets you slice Twitter into realtime streams of stuff that matters to you. On top of that, TwitZap users can tweet each other in real-time using our Twitter accelerator technology even while Twitter is down.”

It is basically a web-based, real-time Twitter client + search monitor. Think TweetDeck without the install – making it possible to use it anywhere, anytime.

Its unique advantage is that communication between TwitZap users is instantaneous – tweets are delivered in less than 800ms from end-to-end even when Twitter is slow or down.

All tweets are still relayed to Twitter as soon as possible. You can even see other users that are on the same channels as you – which is pretty cool and incentivizes instant interactions.

This morning, TwitZap got featured on Mashable and was the #1 trending topic on Twitter search. Not bad for two weeks of work :)

However, this is probably going to be my last web project for a while. While it is incredibly gratifying to build something the whole world can use, web development is just not challenging enough to keep me excited about it.

I feel like it is just too much busy-work for my taste, with very few really interesting problems to solve.

Introducing HotArb.com

HotArb.com is just a little something I have been working on in my spare time. It provides daily alerts about cointegrated S&P 900 stocks that are currently converging from a significant mutual mispricing.
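
HotArb’s internals were never published, but here is a minimal sketch of the kind of screen described above, using the Engle-Granger cointegration test from statsmodels. The thresholds and the hedge-ratio regression are my own placeholder choices:

    import numpy as np
    from statsmodels.tsa.stattools import coint

    def converging_alert(a: np.ndarray, b: np.ndarray,
                         p_max: float = 0.05, z_entry: float = 2.0) -> bool:
        """True when two price series are cointegrated AND their spread was
        significantly stretched but has started to close -- i.e. the pair is
        currently converging from a significant mutual mispricing."""
        _, p_value, _ = coint(a, b)
        if p_value > p_max:
            return False                    # no evidence of cointegration
        hedge = np.polyfit(b, a, 1)[0]      # regress a on b for a hedge ratio
        spread = a - hedge * b
        z = (spread - spread.mean()) / spread.std()
        return abs(z[-2]) > z_entry and abs(z[-1]) < abs(z[-2])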

Trade at your own risk ;)

Pragmatic Automated Trading – Part 2

The Scientific-Minimalist-Economic (SME) approach to Automated Trading

The SME is my personal approach to the development of automated trading agents. You could save a lot of time and money by adhering to these three very simple principles.

This article is part of a series dedicated to explaining a no-nonsense approach to the development of automated trading agents. This series is aimed towards people with a computer science background with or without trading experience.

Principle #1 – BE SCIENTIFIC


The Scientific component of the SME method pertains to the use of the Scientific Method to create and refine trading strategies.

Scientific method refers to bodies of techniques for investigating phenomena, acquiring new knowledge, or correcting and integrating previous knowledge. To be termed scientific, a method of inquiry must be based on gathering observable, empirical and measurable evidence subject to specific principles of reasoning.[1] A scientific method consists of the collection of data through observation and experimentation, and the formulation and testing of hypotheses.[2]

Although procedures vary from one field of inquiry to another, identifiable features distinguish scientific inquiry from other methodologies of knowledge. Scientific researchers propose hypotheses as explanations of phenomena, and design experimental studies to test these hypotheses. These steps must be repeatable in order to dependably predict any future results. Theories that encompass wider domains of inquiry may bind many hypotheses together in a coherent structure. This in turn may help form new hypotheses or place groups of hypotheses into context.

Among other facets shared by the various fields of inquiry is the conviction that the process be objective to reduce a biased interpretation of the results. Another basic expectation is to document, archive and share all data and methodology so they are available for careful scrutiny by other scientists, thereby allowing other researchers the opportunity to verify results by attempting to reproduce them. This practice, called full disclosure, also allows statistical measures of the reliability of these data to be established.

Source: Wikipedia


How does it apply to automated trading:

Trading books, magazines and circles are heavy on pseudo-knowledge derived from subjective experience and anecdotal evidence. In fact, there are whole books out there authored by so-called ‘trading gurus’ that do not contain even one single profitable trading system. You should never trust any trading advice to be true before you test it yourself.

The Hypothesis

To beat the market you will need to develop a profound understanding of how it works. When confronted with a potentially profitable trading idea the first thing you need to do is to develop a hypothesis of why you think it should work. This hypothesis will either be confirmed or falsified by the following steps of experimentation and analysis.

The Experiment

Once you have developed a hypothesis of why a given trading idea should be profitable, it is time to undertake an experiment called backtesting to establish whether the strategy would have been profitable had you traded it over a given period in the past.

While a successful backtest wouldn’t be enough to assert that a given hypothesis is true, an unprofitable backtest would in most cases be enough to dismiss it as a valid component of your automated trading system.
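
In its most stripped-down form, a backtest just replays history against the strategy’s rules. A minimal sketch (it ignores commissions and slippage, which a real test must not):

    import numpy as np

    def backtest(prices: np.ndarray, signal) -> float:
        """Replay history: at each bar, take the position the rules would have
        chosen using ONLY the data available up to that bar (no lookahead)."""
        returns = np.diff(prices) / prices[:-1]
        positions = np.array([signal(prices[:t + 1]) for t in range(len(prices) - 1)])
        return float(np.prod(1 + positions * returns) - 1)  # total compounded return

    # Example hypothesis: "price above its 20-bar mean keeps rising" (long/flat).
    momentum = lambda history: 1 if history[-1] > history[-20:].mean() else 0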

The Analysis

With the backtest results at hand, you can begin to draw conclusions regarding the hypothesis you are testing. Assuming you were able to determine that the backtest results are indeed correct, your major concern should be establishing whether the trading strategy could actually be profitable in the near future.

Later in this series I will discuss several statistical techniques you can use to rule out most cases where an unprofitable trading idea generates a profitable backtest.
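
To preview the simplest of those techniques: hold back data the rules were never tuned on. A sketch reusing the toy backtest and momentum rule above, assuming prices holds the full price history:

    split = int(len(prices) * 0.7)
    in_sample = prices[:split]
    out_of_sample = prices[split - 20:]   # keep 20 bars of warm-up history

    # Accept or tune the rule on in_sample only; a rule that is profitable
    # in-sample but loses money on unseen data is very likely curve-fit.
    if backtest(in_sample, momentum) > 0 and backtest(out_of_sample, momentum) <= 0:
        print("Suspected overfit: dismiss the hypothesis.")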

Principle #2 – BE MINIMALIST

The Minimalist component pertains to applying Occam’s razor to every single decision you make:

It is frivolous to do with more what can be done with less.


Why should you keep your trading platform code down to a minimum:

Every single line of platform code you write will have to be debugged and maintained for years to come. That is an overhead cost you should not ignore. You should spend your time beating the market – not debugging platform code.

Why should you keep your trading system rules as simple as possible:

Backtesting will not make you rich, and every time you add a rule to a trading model because it translates into a more profitable backtest result, you are very likely doing so at the expense of future trading performance.

Later in this series I will discuss how you can assess if adding an additional rule to your trading agent would be beneficial or detrimental to its future performance.

Principle #3 – BE ECONOMIC

We are in this business to make money, not to spend it. I consider it of paramount importance to keep costs down at all times.

Questions to ask yourself before committing money to anything other than a trade:

  • Do I know as a fact that I absolutely need this product/service?
  • Is this the right product/service for me?
  • Can I postpone this commitment a little longer?
  • Does this product/service offer the best long term value?
  • Will this acquisition make me dependent on this vendor?

Real life examples:

  • Don’t buy anything because you ‘might need it’ soon. Plans change but expensive hardware doesn’t.
  • You will live and die by the quality of your data feed, but be aware that an expensive service does not necessarily translate into a quality service.
  • Invest time to find a broker with the best reliability-to-cost ratio. It will pay off in the long run.

Next on Part 3:
Automated Trading Platform – Build or Buy?

Pragmatic Automated Trading – Part 1

This article is part of a series dedicated to explaining a no-nonsense approach to the development of automated trading agents. This series is aimed towards people with a computer science background with or without trading experience.

The development of automated trading agents is one of the most challenging and rewarding applications of artificial intelligence to date. It is rewarding because success generally translates into an account padded with profits, and it is challenging because it pits your creations against the combined intelligence of millions of other agents around the globe.

I will be honest with you: the road to profits is full of misleading signs and dangerous detours. Many of you will fail not because you are not smart or talented enough, but because you may decide that the personal cost of such a venture is just too high for you.

The market is a formidable foe and to beat it you will have to immerse yourself into it and crack the code from the inside. To create truly intelligent agents you will have to learn many things that will change the way you perceive the world around you.

If you are treading down this path just for the money, I can save you a lot of trouble by telling you to go spend your time on something else. There are easier ways to make money out there that do not require the Herculean amount of work and research that is necessary to consistently beat the markets.

What do you need to get started:

  • A good computer. Something reliable and reasonably fast. You might need more processing power later on but you should not worry about it right now. Do not spend money on a computer just to get started – it is not worth it.
  • A broker that provides API access to market data and execution. Ideally the broker should also provide a live “paper trading” environment that you could place trades against to test your execution code. I recommend Interactive Brokers as a good starter brokerage firm.
  • Solid programming skills in the language you decide to use. This is not optional, and it isn’t the kind of thing you can “learn as you go”. If you don’t have prior programming experience, I recommend that you dedicate enough time to learn as much as you can before you get started. It isn’t enough to be able to write code that works – you need to be able to write efficient code that is both maintainable and reliable.
  • A passion for math – or at least the ability to force yourself to like it. There is no way around it, as math will be the main weapon in your arsenal. Don’t think you can make do by simply using formulas you don’t quite understand – you would not stand a chance. It is crucial that you master all the mathematical concepts you decide to use in your trading systems.
  • An unshakeable and obsessive desire to succeed.

Next on Part 2:
The Scientific-Minimalist-Economic (SME) approach to Automated Trading

On Free Will

I have an issue with the concept of free will, because for free will to exist, information would have to be created out of nothing. My position might be counter-intuitive at first, but think about it:

Scenario #1: There is no free will, and information cannot be created, just derived from previously existing knowledge. We are born with biological biases and within an environment that is not under our control. The environment dictates what experiences we will have and what knowledge we will acquire, while our biological biases determine how we derive new information from it. Information is derived and then re-derived, and through recursion we begin to exhibit complex emergent behavior that appears to be unpredictable.

Scenario #2: There is free will, and ‘somehow’ we make information available to our mental faculties that is not sourced from pre-existing knowledge. The world stops making sense and we begin creating incredible explanations for the phenomenon – i.e. soul, spirit, fairies.

Do I even need to bring up Occam’s razor on this one?

On the Acquisition and Culling of Beliefs

Last night I spent some time thinking about what exactly makes the belief systems of religion and science so different. If we take this discussion to a higher level of abstraction, we are forced to agree that both religion and science represent incomplete belief systems that rely on certain dogmas to assert their validity.

My definition of incomplete in this case pertains to the existence of cornerstone dogmas that must not be challenged for the belief system to be valid. In contrast, I would call a belief system “complete” when such dogmas are not necessary. It goes without saying that such a “complete” belief system can only ever exist as a theoretical construct for use in thought experiments – you will not find it in the real world.

Following this train of thought, I was able to isolate a single yet crucial difference that makes the scientific belief system superior to the religious belief system.

Both science and religion have rules in place to acquire new beliefs. 

Science uses empirical observation and experiments, while each religion has its own methods – new saints, the teachings of a reincarnated master, and so on. The critical difference isn’t necessarily the way those different systems acquire knowledge, but a crucial device that is present in the scientific method and lacking in most religions as far as I know.

Any information system that is supposed to manage knowledge with the intent of learning the truth about any topic must have two basic kinds of rules to be successful. The first rule set is for knowledge acquisition, which is present in both religion and science. The other rule set is for knowledge culling, and is used to remove unfit beliefs from the system’s pool of knowledge.

Most religions (if not all) lack formal rules for the culling of unfit beliefs.

Without a rule set to remove beliefs from the pool of accepted knowledge, any belief system will eventually evolve into a bloated mess of contradictory teachings, rules and expectations. What exacerbates the problem for most religions is that culling a once-divine instruction from the belief system would generally imply that god (or any given deity) has a flawed reign over the flow of knowledge that is passed on to followers – which I believe most religious adherents would not be willing to accept.
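
A toy model of the argument (entirely my own illustration): two belief pools that share an acquisition rule but differ in whether a culling rule exists.

    class BeliefSystem:
        def __init__(self, can_cull: bool):
            self.can_cull = can_cull
            self.beliefs: set[str] = set()

        def acquire(self, belief: str) -> None:
            """Acquisition rule: present in both systems."""
            self.beliefs.add(belief)

        def confront(self, falsified: str) -> None:
            """Culling rule: only one system removes a falsified belief."""
            if self.can_cull:
                self.beliefs.discard(falsified)

    science = BeliefSystem(can_cull=True)
    dogma = BeliefSystem(can_cull=False)
    for system in (science, dogma):
        system.acquire("heavy objects fall faster than light ones")
        system.confront("heavy objects fall faster than light ones")

    print(len(science.beliefs), len(dogma.beliefs))  # 0 1 -- one pool only grows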

In summary: Both religion and science are belief systems that rely on dogmas to assert themselves. While both systems have rules to acquire new knowledge, most (if not all) religions lack generally accepted rules to invalidate previously divine rules that were found to be wrong. That leads to desperate efforts to make them right with disregard for the truth itself – e.g., dinosaurs on Noah’s ark.

Code as a Second Language

I was 8 years old during the summer of 1987. That was when I came across a personal computer for the first time. I had just arrived at my grandparents’ house in the countryside of Sao Paulo state in Brazil and my aunt had just bought a computer. I am not sure why but I was immediately interested.

"16K-BYTE RAM pack for massive add-on memory"

My aunt had just bought a second-hand Sinclair ZX-81 and she was kind enough to entertain my interest in it. The ZX-81 was a small machine resembling more an accounting calculator than anything you would call a computer. When she turned it on, I must admit that I was rather disappointed because all I saw on the screen was a white background with a small black square in the bottom left corner with an uppercase “K” inside it.


That is what you would see after turning on a ZX81. Not exactly riveting.

Then I asked: “So, what can it do?” Little did I know that her answer would literally change my world. “If you write the right code, a computer can do anything you want,” she replied.

I stood in silence for a moment as her words echoed inside my brain. Imagine that: a machine that can do “anything you want” as long as you write the “right code”. It became immediately clear to me that I had to learn how to use it.

Unfortunately, she was too busy to teach me, but was kind enough to let me “play” with her computer. As she later confessed, she didn’t take me seriously at first; in fact, no one in my family did. It didn’t matter, though, for luckily I was already an avid reader and I had plenty of time on my hands.

By the end of that summer I had developed a solid intuitive understanding of the logical underpinnings shared by most programming languages even today. I felt as if I had just seized the power of creation itself. Within the boundaries of the Zilog Z80 microprocessor, I experienced absolute control. It was a powerful, life-changing experience for a kid.

Back then, numbered lines were all the rage

However, the end of the summer posed a challenge, because I didn’t own a computer. To make matters worse, in the context of the Brazilian economy of the late 80s, computers were without question too expensive to buy as a “toy” for an 8-year-old.

My solution was to do what kids do best: use my imagination. Sure, I didn’t have a computer, but that would not stop me. I could still write code, and then I would execute it in my head, line by line, like a human computer.


Not my actual notes, but you get the idea...

I was nine by the time my father was finally able to buy me a computer. By then I had already become quite good at running code “in my head”. Using my supercharged imagination, I was even able to play games of my own design by simulating the code execution in my mind, keeping track of all variables and having fun doing it.

Lamentably, my first computer was defective and unable to load programs from cassette tapes. Any code I wrote for it would be forever lost every single time the computer was turned off and I would never be able to run any commercial software. If I wanted my computer to do anything, I would have to always write the code myself from scratch.

This didn’t bother me because, defective or not, I finally had a computer to work with, and that was definitely progress. The next issue at hand was that most of the ZX81 literature available was not in Portuguese, my native language. I had to spend countless hours translating books written in English with a dictionary by my side. Painstakingly, word by word, I was able to decipher the few imported books I could get my hands on and use the acquired knowledge to write code of ever-growing complexity.


Recursive autodidacticism: I had to teach myself English before I could teach myself Z80 machine code

Even though it wasn’t long before I outgrew the ZX81, it has given me something that will always be part of me. Thanks to my early exposure to computer logic, I became able to visualize code not as static commands in a text editor, but in a much more fluid manner. The best way I can describe it is as a form of acquired synesthesia: to me, computer code has always had a kinetic-spatial dimension to it, with different logical constructs possessing specific “geometries” associated with them.


Progress is relentless...

I entered the workforce early: when I was 13, I became an assistant instructor at a local computer school. I founded my first company, a dial-up BBS, when I was 15. School was essentially a bore. Unlike most people, I don’t need structure; I need concrete problems to solve. I was constantly frustrated by how disconnected from reality most classes felt. Not soon enough for me, high school was eventually over, and I decided to quit college to focus on my career. By the time I turned 21, I had landed my first executive position at a software development arm of a large utilities conglomerate that employed over 400 developers.

I always wonder how different my life would have been if I had decided to finish college instead of pursuing a career. I never felt as if I had a choice in the matter; I simply followed my heart. Luckily, the outcome of my decision was positive, and by the time I was supposed to be graduating from college, I was already managing multi-million-dollar projects and flying around the country in the company’s private jet.

Now I finally understand that my passion will always be my most important credential. Since I was a little boy I have been genuinely in love with the power and infinite potential of technology. I believe I was born to build wondrous things that inspire and drive change, and I will work towards that until my very last breath.

Life is an all-you-can-eat buffet of knowledge.

The ZX81 at the Computer History Museum