Tuesday, September 4, 2012

I want more Life!

Computers have obviously been a fantastic tool for improving productivity. We can make financial calculations with blazing speed and access the world's knowledge with a few keystrokes. We live in a wonderful time.

One thing that computers are not good at is making people more productive in the physical world. There are many productivity 'leaks' every day. These leaks are numerous, and what they leak away is more precious than gold or even Apple stock. Here are a few:



An incomplete list of Time Leaks in human life in no particular order of importance. 

  1. Household chores
    1. Doing the dishes
    2. Vacuuming
    3. Organizing the home
    4. Cleaning vertical and horizontal surfaces
    5. Taking out the trash
    6. Weeding the garden
    7. Changing bed sheets
    8. Doing laundry
    9. Home maintenance (painting, minor plumbing repair)
    10. Re-arranging furniture (especially heavy furniture).
    11. Cooking wholesome food 
      1. Kitchen prep work: cutting veggies and fruit, washing meat, etc.
      2. Actually cooking to a recipe 
  2. Factory world
    1. Transport of material for production
    2. Disposal of waste
    3. Repetitive tasks with non-uniform parts
    4. Repetitive tasks requiring a high degree of dexterity
    5. Moving heavy items into place
    6. Joining items
    7. Cleaning, lapping and finishing parts
    8. Packing variable sized items to minimize shipping charges. 
  3. Office world
    1. Physically moving books and binders
    2. Making coffee and cookies for guests
    3. Picking up the boss's dry cleaning (see 4 below)
  4. Store world
    1. Stocking shelves
    2. Keeping track of inventory and losses
    3. Telling customers buying clothes:  "you look fabulous!" 
    4. Assisting customers in finding the right "thingy" 
    5. Unloading trucks
    6. Pricing merchandise. 
    7. Re-ordering the store (putting misplaced items back in their rightful places). 
    8. Demonstrating how things work (toys, electronics, etc.) 
  5. Tasks involving driving
    1. Picking up and dropping off kids from school
    2. Going shopping 
    3. Driving to work in busy traffic
    4. Picking up laundry and dry cleaning 
  6. On a personal level
    1. Bathing an infirm relative
    2. Assisting someone's mobility

Obviously this is a partial list. But much of what is on this... no... all of what is on this list can be addressed robotically. Sadly, however, roboticists do not follow a monolithic vision. They are usually firmly in one of two camps on how to go about this. 

Camp 1: Many useful, special purpose robots 

Camp 1 is inhabited by sensible people who know how hard it is to build a "universal robot." These people have realized how difficult it is to make a robot do even the simplest task; just picking up a dish and washing it is really difficult. Getting the robot to see the dish is the first problem. Getting it to angle a gripper to a proper grasp point is the next problem. Regrasping the dish is the final problem... and all that before we get started with the actual task. 

Besides, robots that capable would be really, really expensive and out of reach, price-wise, for most people. The most expensive middle-class appliances top out at about $1,000-$2,000. How would we get someone to spend 10x or perhaps 1000x that much money on an "appliance"? It's absurd. 

Camp 2: One Universal Robot

Camp 2 is inhabited by people who hope to build a universal robot that can do all of the above and more. Each task might be an "app." (App is not an analogy I embrace because apps run independently of each other with no coordination and are launched by the user, not autonomously).

This camp argues that a universal robot will be like a universal computer, i.e., like your desktop or laptop, it can run any code. With a proper operating system, this robot will be able to perform any task possible.


Critique


In the short term, the Camp 1 folks will win out. Indeed, they are already winning out. iRobot has a much greater sales volume in the appliance world than, say, Aldebaran. And the iRobot products actually do useful things. 

But here is what I am afraid of. I am afraid that to address all of the chores above, we will need perhaps dozens of these single-purpose robots. That will lead to an ever-escalating problem of maintaining said robots: changing batteries, interacting with each one's unique user interface, and the constant stream of appliance upgrades. Dozens of companies will design robots with unique or overlapping niches. It will become a mess. Don't get me going about how I am going to keep 30 items charged every day!

Camp 2  is an elegant solution. What!? No realist would claim that, surely. But hear me out.

One robot that does it all. That gives the consumer one point of interaction. One item to get fixed if it breaks. One model to upgrade. One user manual to read.  Further, the device can expand beyond a set of well defined niche tasks and can evolve.

Programmers will work on one robot. They will share their code synergistically.  Instead of fractured disconnected efforts, there will be one code base that interacts seamlessly.

What about the cost? 

Well, the question is not really about cost, is it? How much do you spend on your mortgage? On your car payment? Throw in utilities, insurance, and maintenance and those two bills account for about 1/2 of many people's take home pay. 

While I am not here to give financial advice, it is clear that people will pay a significant amount of their income to buy things  they need.

But what a universal robot can offer is a unique kind of value. A robot offers you Time, the most precious thing you have. More valuable than all the riches is an extra day of life. You have N days left in your life. In some actuarial manual, or written on some scroll in someone's religion, there is a day. And that is the day you will die. 

The question, then, is: how much is it worth to you to have, say, 20%-30% more life, maybe even more? How much would it be worth to you to spend that time with your friends and family? Traveling? Meeting new people? Doing the things that you enjoy doing? Writing that novel that you have always wanted to write? How much is that worth to you each month? What would an extra 1000 hours a year be worth to you? 
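To make that last question concrete, here is a back-of-envelope sketch. The 1000 hours per year comes from the question above; the $25/hour valuation is purely an assumed placeholder for what your time is worth to you:

```python
# Back-of-envelope value of reclaimed time. The hourly figure is an
# illustrative assumption, not data.

def monthly_value_of_time(hours_per_year, dollars_per_hour):
    """Dollar value per month of the time a robot would give back."""
    return hours_per_year * dollars_per_hour / 12.0

value = monthly_value_of_time(1000, 25.0)
print(f"~${value:,.0f} per month")   # roughly $2,083/month at those numbers
```

Even at a modest hourly valuation, the implied monthly figure rivals a car payment, which is the point of the comparison above.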

If that value proposition can be refined and made real, I predict that people will pay a lot for a robot: a huge portion of their take-home pay. They will get that robot by any means possible, be it by saving, by financing, or by leasing.

Nearly everyone will realize at some point that their days are numbered. That is the day that a robot that can do "all of the above" can help out and come to the rescue and give you back more of the only thing there isn't too much of: Time.


Perhaps later I will write about how to methodically spec out the above Universal Robot, coordinate a group of really clever people, and get it done.

  

Friday, August 17, 2012

The cost-function roadmap of intelligent machines

The idea of a universal robot has been around for at least 10,000 years; legend has it that a Chinese inventor created the world's first walking, talking humanoid.

Leonardo da Vinci even got into the act, imagining and prototyping a robotic soldier. The dream of a servant with the intelligence of a human being, but toward whom the owner has no moral obligation, is appealing. Being born and raised in the US, I was not raised with servants around the house, picking up clothes after me, preparing my meals, and so on. If I had the means to hire such a servant, I would not feel comfortable doing so.

I would accept a robotic servant, if it could do the things I needed and not get in the way.
To build such a servant there is a key conundrum that must be addressed. It is not technical. It is a business problem. I call this conundrum the Cost-Function roadmap.

What is the idea behind this road map? I can say with 90% certainty that we could build a universal robot for a billion dollars or so. A billion dollars is a government-scale project, or at least one for a company the size of a small government.

It is obvious that if we did have such a robot, we could improve productivity in America dramatically. The impact on productivity in the workplace and at home would be measured not in billions of dollars but in trillions, in the US alone.

It is well worth doing. People feel it in their bones, but so far investors and the government are only willing to throw some loose change in the direction of small projects of this sort. No one is ready for the moon shot.

That is the reality we live in today. To harmonize this dream with the needs of the stakeholders (i.e., investors and the eventual end users) we need to develop a road map. This road map would establish a string of products organized as a ladder of capability. At each rung of the ladder, the product must have a value that exceeds its cost to the consumer. So far, with a nod to iRobot (whose director of communication Mathew Lloyd recently gave me a left-handed compliment about my newest work... but hey, I have a thick skin), I think we have not reached even rung one of that ladder in the consumer space.

What?! What about the millions of robotic vacuum cleaners and RoboSapiens that have been sold?! In a previous post, I talked about "crossing the chasm," or moving from early adopters of technology to the early majority... that is, building a product that a practical mom or dad would buy, not just something of interest to gadget freaks or robot lovers. Or divorced dads wanting to buy the next cool toy for the kid they see every other week.

No, I am talking about devices that outcompete the competitors for the big markets.  Consumer robotics is not there yet.

The hardest nut to crack in robotics is not the technology itself. It is to identify and build that ladder of technology, rung by rung, until we reach the Universal Robot. Obama is not going to fund a billion-dollar project to get there... at least not in the consumer space.

My hope is that companies like Rethink Robotics, which is focusing on the manufacturing space, may be developing some super secret technology that will spill over into the consumer space.

I also think that as the era of the PC ends, we may start to ascend the ladder. Who wants to sit in front of a computer? The computer is used at home primarily for social purposes, not work. It no longer needs to be a high-tech imitation of a typewriter. The computer is an aberrant and unnatural form for social communication. New technology will replace it. The telephone has already morphed into an instrument of social media, the smartphone. Obviously the TV is next. After that, the kitchen (the soul of any home) will transform into a social center. And the computer will fade away.

As the era of the PC ends, I predict that  we will finally begin to climb the ladder to the universal robot.


Saturday, July 21, 2012

Crossing the Chasm: will consumer robotics ever do it in my lifetime?


When will a consumer robot cross the chasm?

What will be the first consumer robot application to bridge the chasm between the early adopters and the early majority?



This, I believe, is the central question that consumer robotics must answer. Crossing the chasm means selling a product not just to enthusiasts, but to the early majority. I submit for your consideration that consumer robotics has NOT been able to jump the chasm yet; the whole industry is stuck, and is of no real interest to major companies like Google, MS, and Apple... the companies with huge stacks of money and the power to transform this market. While MS has shown leadership in this area, it is the sole standout in the crowd. See the article by Bill Gates: "A Robot in Every Home."

What could that first product be that jumps the chasm? There are so many possible areas where consumer robotics can grow. Obviously any new field must be technically in reach within, say, several years, and it must address a pressing need. So, will it be telepresence? Security? Intelligent toys? Elder care? Maybe even a nannybot? Or the famous "get me a beer" robot? (And is the 'get me a beer' robot an expression of a complete lack of imagination in the public's mind as to what robots can do for us?)

Military robotics, led mostly by iRobot, has been able to bridge the chasm, and now we see the widespread use of robots in the military and police forces. Lego Mindstorms may have done it (or is in the process of doing it) with education/hobbyist robots. Intuitive Surgical has transformed surgery. What is wrong with consumer robotics? Why should anyone be enthusiastic about it as a business opportunity?

Here is some background to the crossing the chasm problem: "Crossing the Chasm: Marketing and Selling Disruptive Products to Mainstream Customers", Geoffrey A. Moore


Here is what I think is standing in the way. We are stuck with the notion that consumers will not pay more than X dollars for a robot. People are focused on that X number because there is a lot of data to support the idea that $100 is the most people will pay for an entertainment robot and around $300 is the maximum for a utility robot.

Yet people also have cars, which cost a lot more. Why? Because they deliver something of unique value to the consumer. People will also pay a few hundred a month for cable. Why? Because it provides a constantly renewing source of entertainment. TV is part of our culture.

What we need to think about is delivering VALUE to the consumer and letting the price fall where it may. What is the pain that consumers undergo that a consumer robot can alleviate? 

The first step, I think, in performing a methodical analysis of this question is to compile a pain-list of consumer activities. 

Then, the second step is to determine the technical readiness of robotic technology to address each of these pains.

It is only through a methodical analysis of this problem that we are going to get anywhere. Building stuff and seeing what sticks has not gotten us very far. Just my 2 cents. 


Tuesday, July 17, 2012

The case for neuromorphic engineering

Neuromorphic engineering... trying to build computers that are more brain-like... has become mainstream. Companies like Qualcomm are hiring neuromorphic engineers. Major companies like IBM and HP have DARPA funding to build more brain-like computers.

But the question is: is there really any benefit to using neural-style computing versus good-old-fashioned CPUs, which have gotten us oh so very far?

For neuromorphic engineering to advance further, this question has to be answered crisply and definitively.

Before we can answer that question, we have to ask: what do we want to use the computer for? I can buy a calculator that costs less than $10.00 at my local drug store that can add, subtract, multiply and divide far faster than I can in my head, and with much greater precision!  I feel thick witted when compared to the simplest pocket calculator.

And there are a lot of places where being able to "crunch numbers" quickly is very important. For example, financial calculations, rendering graphics, and designing complex machines. So, let's not dismiss the obvious benefits of these machines.

However, that is not all we want computers to do. As we begin to attach computers to the real world by adding sensors and actuators, essentially making them the computational core of robot-like machines, our computational needs are changing.


Here is where our world of computation turns upside down. Humans excel at interpreting large volumes of data, ignoring what is not important, and emphasizing what is important. We attend to that which is relevant. We have also come to realize from neuroscience that the world we live in is mostly in our heads, in the form of models which we can use to reason with, but that are grounded in the real world.

Now, the interesting thing is that as our algorithms begin to resemble more brain-like computation, we find that these algorithms cannot run efficiently on CPUs designed for computing balances on checking accounts. A new kind of fine-grained parallelism is needed to handle the fire-hose of data flowing into these systems. As we drill down, we find that with this fine-grained parallelism, in order to create efficient machines, we need to colocate computation with memory. If memory and processing are kept separate, we need lots of long connections, which use power and generate heat. Adding the local ability to adapt helps as well: a global adaptation scheme just can't send enough signals to enough processors to keep up. This leads to extreme decentralization of computing. In the end, we end up with a collection of highly parallel computational elements with local learning and local data storage. We end up brain-like.
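As a toy illustration of that endpoint, here is a sketch of an array of processing elements where each element keeps its own weight (memory colocated with computation) and adapts with a purely local Hebbian rule. The sizes, rates, and activity values are all invented for illustration:

```python
import numpy as np

# Toy sketch: each "processing element" holds its own weight and adapts
# from only its own pre/post activity -- no global controller ever
# touches an individual weight. All numbers here are illustrative.

rng = np.random.default_rng(0)

def local_hebbian_step(weights, pre, post, lr=0.01):
    """One update: each element sees only its own pre/post activity."""
    return weights + lr * pre * post   # elementwise: no global communication

weights = rng.normal(0.0, 0.1, size=8)   # one weight per element
pre = rng.random(8)                      # local input activity
post = np.tanh(weights * pre)            # local output of each element
weights = local_hebbian_step(weights, pre, post)
```

Because the update is purely elementwise, it parallelizes trivially across as many elements as the substrate provides, which is the decentralization argument made above.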

And here is the real kicker: the reason we end up designing computers like this is to save power. This suggests that when we look at real brains, we should also consider how limited power and cooling capabilities shaped their creation, and how efficiency may have given rise to the partitioning of the brain into distinct functional regions.

So, the case for neuromorphic engineering really comes down to not necessarily computation, but the practical issue of how to host that computation most efficiently on a physical substrate. Since both brains and silicon inhabit a world with real consequences for their organization, the strongest case for neuromorphics is going to be made on the basis of power.



Sunday, July 8, 2012

Most Biologically Accurate Model of Human Walking

The press release a couple of days ago about the work that Theresa Klein and I did has really caught the world's imagination. It also caught me off guard. My cell phone started ringing at 7:00 am the day before the press release with people asking for interviews. It was fun; we were covered by AFP, the Paris news agency, by BBC Radio's Up All Night... and various other places.

Before looking at the video, which you can see here, I would recommend reading either the article or the summary of the results published by the National Health Service in Britain here.

The point of the work was to create a physical model of the walking system of a human being. That includes getting the biomechanics right. We had to build a system with the essential elements of the biomechanics of the lower human limb. That alone was difficult, starting from scratch.

We used a 3-D printer from Dimension. It took my students and me about 5 years to master building robots using a 3-D printer. There were a lot of techniques we learned along the way. One day, I hope to publish all of the tricks that allowed us to do this.

Next we had to build muscle-like actuators that used tendons to pull on the limb. Easy, right? Well, we had to sense the force in the tendons. Theresa went through many iterations until she invented a sensor that was accurate yet durable enough to be used in a robot. And when I say many, I mean many iterations. Robotics is nothing if not about persistence.

Theresa then had to experiment with building neural circuits. Typically, neurorobots in the past have used dynamical equations that produce oscillations... not really neurons. We used Izhikevich spiking neurons, which we felt are much more like what you would see in the true spinal cord.
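For readers curious what an Izhikevich neuron looks like in practice, here is a minimal Euler-integrated sketch using the standard regular-spiking parameters. It illustrates the model class, not the actual controller code used on the robot:

```python
# Minimal Izhikevich spiking neuron (regular-spiking parameters a, b, c, d),
# integrated with a simple Euler step. A sketch of the model class only.

def izhikevich(I=10.0, T=1000.0, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Simulate T ms of constant input current I; return spike times in ms."""
    v, u = c, b * c                 # membrane potential and recovery variable
    spikes = []
    for k in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:               # spike detected: log time, reset
            spikes.append(k * dt)
            v, u = c, u + d
    return spikes

spikes = izhikevich()               # tonic firing under constant drive
```

Two coupled first-order equations plus a reset rule are enough to reproduce a wide range of spiking behaviors just by changing the four parameters, which is why the model is so popular for building circuits like these.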

I have been working with building spiking neural networks for some time now. We have even built a series of ASIC chips that implement the dynamics of neurons in collaboration with our colleagues at Johns Hopkins. We have used those chips to control movement in animals.

What we needed was a platform to figure out how the entire neuro-mechanical-sensory system interacts. So, if we one day want to restore locomotion, we need to have some basic understanding of walking.

Now, why is this important to science? Biologists collect lots of data about the bits and pieces of the locomotory system, but they don't know whether, if they put all of the pieces together, the system will actually work as it does in a human or animal. They can never know if the elements they have uncovered are both "necessary and sufficient." That is where robots can really help.

So our work does just that: we took what we knew about biology, then had to fill in details where details were missing (think Jurassic Park :-)). And we were able to get walking using ONLY suggestions from biology. No gimmicks.

So, that is why I think the work is cool and it is relevant SCIENTIFICALLY.

Now, a lot of people have compared what we have done to PetMan. And one commentator implied that we were trying to play "catch up."

Well, PetMan has been extremely well funded and has had practical engineering goals. We did our work on our own dime and some funds provided by the University of Arizona (thank you!). I am sure the ratio of spending is something like 1000:1. To be perfectly honest, initial funding for the core concepts behind this work was provided by Tom McKenna of the Office of Naval Research years ago. He got me going in the direction of building biologically inspired humanoids. Dr. McKenna has been the most important force in legged locomotion over the past several decades in the United States, PERIOD. Now Gill Pratt at DARPA is really making a push, but I think Dr. Pratt's charter is more toward building systems with direct military importance. Perhaps I can persuade DARPA to fund a neuro-prosthetics program for the lower limb like the one they funded for upper-limb prostheses. That would be cool!

Like I said, funding is hard to get for this kind of work. In this country, engineers don't care about the biology at all. And biologists are skeptical of "engineers" encroaching on their turf, competing for very limited funds in biology... which I totally understand. Really, only a handful of places around the country have been able to make a living in this biologically inspired walking-robot paradigm. I would say the oldest program was at Case Western Reserve, and that work is still being continued by Roger Quinn and Roy Ritzmann and colleagues.

But nonetheless, Theresa and I felt this work was important, and we put our own money, sweat, and time into building it. She persevered for years... I am proud of her. Unfortunately, the experience left her jaded about research. She had fantastic opportunities to continue this work when she graduated, and possibly a brilliant research career ahead of her, but she ultimately decided to go into industry.

It is a loss to science.

It is telling that most of the attention to this work has come from overseas, where I think they understand interdisciplinary research a bit better.

But, I think I will continue this work somehow.  I have gotten many letters from people and seen many comments on sites that indicate that people see this robot as hope that one day we will be able to restore locomotion in people with spinal cord injury. Of course,  the robot we built is just a tool for understanding. We still need the medical experts to make it real.

Saturday, June 16, 2012

On Dmitry Itskov's Dream of transplanting human brains to robots.

The Russian mogul Dmitry Itskov envisions a future where human brains can be transplanted into robotic bodies. Read more about him here: http://www.wired.com/dangerroom/2012/02/dmitry-itskov/

I was asked to comment on this article by a writer for the German publication Die Zeit online.


Here is my take:

First, I would suggest people read the book "Mind Children" by Hans Moravec. It is a 20-year-old book, but I think he describes a similar future.


Regarding the state of the art in synthetic humans: we are on the verge of a huge breakthrough in this area. Researchers have worked diligently for 20 years building brain-like computers. Progress was initially very slow. But researchers at IBM, HRL Laboratories, MIT, Johns Hopkins, and elsewhere, under DARPA-funded projects, are poised to demonstrate impressive new results in brain-like computing.


I think we will see a huge leap forward in demonstrated capability within the next 24 months.


As for robot bodies, I think we are beginning to understand how to build humanoids that act like real humans. The key is to mimic not only the form of a human but the biomechanics of a human. In an article to be published shortly (Klein and Lewis, Journal of Neural Engineering), we demonstrate what I think is the first step in this direction: a robot with biomechanics like a human's, incorporating a simple artificial neural network, that mimics human walking very well. This paper is already generating excitement and it has yet to be published. I think it is important at a theoretical level.


The Boston Dynamics PetMan robot is a very practical and compelling demonstration of good engineering. I do not know what kind of "brain" it has, but DARPA is planning to fund a number of researchers to create the intelligence needed to make PetMan a "super robot" with science-fiction-like capability. They are offering a reward of $2,000,000 US for the first group to demonstrate this intelligence.



So, will we have robots with computing power and ability similar to a human being's by 2045? I think: why not? The pathway is available today. Yes, we can do it with near 100% certainty, in my opinion, based on what I see coming in the next few months.
On the question of interfacing a person to an avatar: that is doable, yes. But currently I think it would require so much invasive surgery that the benefit to the patient would need to be very, very clear, since invasive recordings can damage the soft tissue of the brain. Perhaps this problem will be solved, but I think it will take much longer than building the actual body.


The question: can we transfer a person's intelligence? I think that question comes down to playing with definitions. By analogy, consider art. I can make a stick-figure drawing of a human. That, I can say, represents the basic characteristics of a human. But no one would confuse that drawing with a human being. Likewise, we will initially be able to create intelligence that is similar in personality to a particular human, and may even make gestures and statements like a stereotypical person.


If you are familiar with the famous American billionaire Donald Trump, I think we could mimic his "on screen" personality fairly quickly. I think that his "public face" is very strongly stereotyped, and people would recognize a "Donald" humanoid very quickly. We may be able to create an android that would fool people for a short period of time into thinking they are talking to Donald Trump. I think that could happen within 10 years. But that would be like a stick-figure drawing.


How would we transfer intelligence? One way is to mimic the neural structure of an individual's brain. But mimicking the neural structure of a brain is a little like copying all the streets of a city and saying, "I have created a duplicate of New York." It may look the same, but it will be functionally very different. I cannot envision how that would be done at a neural level using the tools we have today.


At the same time, in the sci-fi series "Caprica" the premise is that the massive amounts of data collected about people from the day they are born could be used to create an imitation of a particular human being. I think a baseline human intelligence could be shaped to resemble a particular human being in the future, particularly as our lives become more and more digital and more of our life is archived away in storage.


From that point, creating more subtle and nuanced models of humans will be a gradual process. By 2045, we will certainly have robots that behave similarly to a particular person. But that robot will never *be* that person. Think of the Magritte painting "Ceci n'est pas une pipe."


We will NOT duplicate persons; rather, at some point in my lifetime, we will create artificial creatures that have personality and a level of intelligence on par with their human counterparts.


This will not be a pathway to immortality in the normal sense. The human prototype will die, but their artificial self may be immortal. So, one day children may have 'recreated' grandfathers and grandmothers that they can get to know even if the original prototypes died before they were born.


The main question for the success of this project is:

Who is Itskov's "master architect"? 

It has been famously said that getting a group of really smart people to go in the same direction is like "herding cats." This project will need the smartest people possible, and will need to be led by the right person.
Itskov must find the right master architect who can pull this off. When we know who that is, we will know if this is possible. Secondly, does he really have the billions of dollars needed to pull this off? This will not be an inexpensive project.

Sunday, May 27, 2012

Complexity and robotics, what a concept.



I came across a remarkable quote that sums up a key feeling of mine about the complexity of robotic design:

"It is not possible to say anything definitive or profound about a complex system without an appropriately complex proof." - David E. Goldberg, The Design of Innovation


I work in legged locomotion. Too often I have been irked by theoreticians claiming that they have found the perfect "proof" of global stability in a walking system. By global stability, one means that if there is a perturbation to the walking robot of sufficiently small magnitude, the robot will continue walking unperturbed. Don't get me wrong. This is nice to know, but it is not everything.

Humans do remarkable things related to locomotion. They dance; they run for a touchdown, dodging opponents who seem to be several times their mass. They walk up stairs, crawl on their stomachs... Somehow, the balance system that we have for walking is readily adapted to skiing, snowboarding, and surfing. How weird is that?

My point is that if we are going to build a truly functional humanoid, we will never be able to prove that a robotic system will work. The pursuit of a proof may be a red herring in the design of complex robotic systems.

As I write this, I am also reminded of a quip Michael Arbib once made: "it may turn out that the brain is just a big hack." I was horrified when I first heard this (though, as with many of Prof. Arbib's quips, it seems to become more true over time).

As I work more and more in the cross-disciplinary world of robots and neurocomputation, I am beginning to think that more theorem proving is NOT what we need. Rather, we need better design tools.

We need to be able to give a robotic system a high-level requirement and have it find a solution on its own. We can test the solution for competence the same way we test, say, a potential surgeon for competence. How? By an exam in medicine?

No. Testing begins in kindergarten for human beings. To make it to medical school, you have to prove through years of observation that you are reliable and consistent and have some baseline talent. At any stage from kindergarten through medical school to your residency, your career as a surgeon can be derailed by something as small as not being able to sit still through a lecture...

When you are under the surgeon's knife, there is no guarantee that the surgeon won't slice out your liver and eat it. There is no mathematical proof available that proves the surgery will be successful and your liver intact when you leave the hospital.

We infer it. And inference is not proof in physical systems.

Likewise, if we are to build complex robots, we have to get away from the worship of theorem proving. What is the alternative?

We need to think instead: what is the set of competence gates that we can set up to assess whether a robot is making progress toward fulfilling our requirements? Second, we need a way of automatically trying new variations of robots. I say automatically because today we use the human brain for innovation. However, our minds will always be limited by the models we hold in our heads of the systems we try to control. Inevitably, these models are wrong. We need powerful, automatic (read: learning) methods that can innovate automatically.
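Here is one way the two pieces, competence gates plus automatic variation, could be sketched together. A "robot controller" here is just a parameter vector, and the gates and thresholds are entirely invented for illustration, ordered easiest to hardest:

```python
import random

# Toy sketch of competence gates plus automatic variation. A candidate
# "controller" is a 3-parameter vector; gates and thresholds are invented.

random.seed(1)

GATES = [
    ("stands", lambda p: sum(p) > -2.5),              # permissive first gate
    ("steps",  lambda p: abs(p[0]) < 0.9),
    ("walks",  lambda p: p[1] > -0.5 and p[2] > -0.5),
]

def highest_gate_passed(params):
    """Number of consecutive gates the candidate clears, from the first."""
    for i, (_name, passes) in enumerate(GATES):
        if not passes(params):
            return i
    return len(GATES)

def make_variant():
    """Automatic variation: propose a new candidate controller."""
    return [random.uniform(-1, 1) for _ in range(3)]

# Keep whichever of 200 random variants climbs furthest up the ladder.
best = max((make_variant() for _ in range(200)), key=highest_gate_passed)
```

The point of the gate ordering is that a candidate is never evaluated on "walks" until it can "stand": progress is measured rung by rung rather than by a single pass/fail proof.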


If we spend more time developing these tools, we will accelerate our progress toward designing truly capable robotic assistants.

And at the end of the day, I am convinced that it will be, as Prof Arbib remarked, one big hack.



Friday, March 16, 2012

Robbie/I,Robot

I am fascinated by Asimov's Gedankenexperiment in robot design. With pen and paper, and a degree in biochemistry, he mapped out interesting territory at the intersection of robotics and humans.

Every sci-fi buff who has been to a movie theatre has had the Three Laws of Robotics drilled into them. Whether by watching the charming flick Bicentennial Man or "I, Robot" (or as I think of it, I, Robot, the remix), we know them par coeur (by heart).

The Three Laws, apart from being well founded, a good idea, etc., provide a brilliant dramatic device for Asimov. His robots must follow the three laws, which seem reasonable and good, but when actually applied lead to some tense situations.

Over the last day or so, I have begun to re-read I, Robot. Each time I read it I know more about robotics, and each time I get the chilling feeling that Asimov was some kind of visitor from the future. How could he have figured out so much about robots without ever having built one?



I, Robot, chapter 1: Robbie

The story opens with a little 8-year-old girl playing hide-and-go-seek with her robot companion. The child is "it" first. She counts to 100 (in my day it was 10; I guess kids were more patient in those days!). The child looks about and leaves the magic zone between Robbie and the home tree. Robbie sprints at high speed. Of course, the robot, being of superior speed, could easily outrun the child to "home base," but in the last 10 steps it slows down and lets the child win.

From Principles of Robot Motion, H. Choset et al.
From an algorithmic point of view, this feels like Robbie is using a kind of potential field method to trigger his action to run. The potential field method is a motion planning method where obstacles are represented as repelling objects, and the goal, the home tree in this case, is an attractor. Each object creates a vector field.

One way of thinking about a vector field is to stand outside your house with an arrow pointing from the center of your chest. Point the arrow towards your front door. Now have a friend stand near you, say 20 paces away in some direction. Have her use an arrow in the same way. Now imagine hundreds of friends with such arrows pointing toward home. That is a vector field. The robot need only follow the arrows to get home.

One way of constructing a vector field is to start with a function of two variables, say x and y, representing a grid stretched across a local patch of earth, with its center at the goal location. Then take the gradient of the function and put a minus sign in front of it.

What?

Well, think of a really big bowl centered at the home location. If you were standing inside the bowl, the gradient would be the direction you would want to head to get out of the bowl, while the opposite of that (the negative) would lead you to the bottom of the bowl. If all of your friends are standing in the bowl with arrows pointing to the bottom of the bowl, then... you get the idea.
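The bowl picture translates directly into a couple of lines of math. Here is a minimal sketch (names and the gain k are my own, for illustration): for the quadratic "bowl" potential U = 0.5·k·d², where d is distance to the goal, the negative gradient is an arrow pointing straight at the goal, with length proportional to distance.

```python
import math

def neg_gradient(pos, goal, k=1.0):
    """Negative gradient of the quadratic (bowl) potential at pos.

    U(x, y) = 0.5 * k * ((x - gx)**2 + (y - gy)**2), so
    -grad U = -k * (pos - goal): an arrow pointing at the goal.
    """
    return (-k * (pos[0] - goal[0]), -k * (pos[1] - goal[1]))

goal = (0.0, 0.0)
for pos in [(10.0, 0.0), (3.0, 4.0), (1.0, 1.0)]:
    gx, gy = neg_gradient(pos, goal)
    print(pos, "->", (gx, gy), "magnitude", math.hypot(gx, gy))
```

Evaluating this at every grid cell gives exactly the field of arrows from the hundreds-of-friends picture: every arrow points home, and the arrows get longer the farther out you stand.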


[This is not the only way of creating a vector field, and in fact many really useful vector fields exist which are not this simple.]

Ok, now the problem with such methods is simple: what function should you start with to produce your vector field?

If we start with a cone, the gradient is of constant magnitude (and hence the robot will move at a constant speed). Now, that is not a good thing. If our friend Robbie is running at top speed, he would not slow down, but would just smack into the tree.

Hmmm, what about a parabola (beautiful example!)? Well, when the robot is very far away, it would see a gradient approaching infinity, so that is clearly not workable at long range. But as we got closer, the gradient of the parabola would gradually approach zero.


One idea is to use a cone for distances far away, say more than ten feet, and a parabola for distances closer. The effect would be that the robot would move at top speed toward the goal, then, just before reaching it, would begin to slow down quickly. Just as Asimov described in his book!
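The blended field above yields a simple speed profile: constant speed in the conic region, linear slowdown in the parabolic region. A minimal sketch, with a made-up gain and switch radius (the ten-foot figure from the text):

```python
def speed_toward_goal(distance, gain=1.0, switch_radius=10.0):
    """Commanded speed = gradient magnitude of the cone-plus-parabola potential.

    Beyond switch_radius the cone's constant-magnitude gradient applies;
    inside it, the parabola's gradient shrinks linearly to zero at the goal.
    """
    if distance > switch_radius:
        return gain * switch_radius   # conic region: run flat out
    return gain * distance            # parabolic region: ramp down smoothly

for d in [50.0, 20.0, 10.0, 5.0, 1.0, 0.0]:
    print("distance", d, "-> speed", speed_toward_goal(d))
```

Matching the two pieces at the switch radius (both give gain * switch_radius there) keeps the commanded speed continuous, which is Robbie's graceful last-ten-steps slowdown rather than a sudden brake.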


Of course, Robbie may just have wanted to let the little girl win. Or did he predict this now well-known algorithm for robot path planning?


Dr -T

Thursday, March 15, 2012

Raspberry Pi for Pi Day

The Raspberry Pi is causing quite a stir on the streets. With a built-in graphics processing unit... which I suspect can be used for video processing... this $25.00 board is capable of 24 GFLOPS.

Microchip has a nice product in the PIC32, but its performance pales in comparison. Further, the PIC32 development systems cost far more and offer far less in speed.

I have not looked at the power-per-FLOP usage of the two, but it is clear that for serious robotics... trying to tap into vision, SLAM, and my favorite, legged robots... we need a little more oomph than what the PIC32 currently offers.

More critically, the ARM architecture is what cell phones are made from. It is what students in college want to learn about. However, traditional hands-on microprocessor courses focus on PIC and ATMEL processors with rather anemic processing power.

Student: "Hey Dr. Lewis, are you going to teach us how to build a cell phone?"
Dr. T: "er.. not in this class.. ah... no.."
But now, with this board, we have the guts of a system that could be expanded into a pretty usable, almost phone-like device. Especially if you go on a shopping spree at SparkFun and check out the cool bits and pieces they have.

While I am a huge fan of Microchip, I am frustrated by the fact that it is difficult to run Linux on these processors. The Raspberry has a Linux distro and, very importantly, comes with nice programming languages. Languages like the language of choice for beginning roboticists: Python.

Python is a very elegant scripting language. (I can't believe I am saying this. On my first encounter with Python, I found it unbelievably frustrating and draconian in its indentation policy.)
You type, and you get results instantly. What could be more rewarding! It's like being back in the old days of Lisp. Programming is actually fun and incredibly productive.

With such a huge amount of code available, I feel like Trinity in the Matrix in the famous helicopter scene:

Neo: "Can you fly that thing?"
Trinity: "Not yet. Tank, I need a pilot program for a B-212 helicopter. Hurry!"

Trinity's eyes flutter for a moment, then

Trinity: "Let's go."

That's pretty much my experience with using Python.

Me thinking: "I need an elegant machine learning module that is easy to use."

On my Mac it is as easy as:
port search learn
I get a bunch of results and then select one promising module:
port install py27-scikits-learn
for my version of Python, and in minutes I have a naive Bayesian classifier running.
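To give a flavor of what such a classifier does, here is a tiny from-scratch Gaussian naive Bayes on made-up one-dimensional data (bottle heights, a nod to the beer-vs-soy-sauce problem elsewhere on this blog). All the numbers and names are illustrative; scikit-learn's own version is, of course, far more capable.

```python
import math
from collections import defaultdict

def fit(samples):
    """samples: list of (value, label). Returns per-label (mean, var, prior)."""
    by_label = defaultdict(list)
    for value, label in samples:
        by_label[label].append(value)
    model = {}
    total = len(samples)
    for label, values in by_label.items():
        mean = sum(values) / len(values)
        var = sum((v - mean) ** 2 for v in values) / len(values) or 1e-9
        model[label] = (mean, var, len(values) / total)
    return model

def predict(model, value):
    """Pick the label maximizing log(prior) + log Gaussian likelihood."""
    def log_post(stats):
        mean, var, prior = stats
        return (math.log(prior) - 0.5 * math.log(2 * math.pi * var)
                - (value - mean) ** 2 / (2 * var))
    return max(model, key=lambda label: log_post(model[label]))

# Made-up training data: tall "beer" bottles, short "soy sauce" bottles.
data = [(24, "beer"), (25, "beer"), (23, "beer"),
        (15, "soy sauce"), (16, "soy sauce"), (14, "soy sauce")]
model = fit(data)
print(predict(model, 24.5))  # -> beer
print(predict(model, 15.5))  # -> soy sauce
```

With the installed module the whole thing collapses to a fit call and a predict call, which is exactly the point: the language and its ecosystem get out of your way.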

Need code to do voice reco on my Mac using Google voice? Bam! Within 15 minutes world-class speech reco is up and running.

Need vision? OpenCV has great Python bindings... (Ok, there are a few tricks to building it so that it runs fast... but the standard MacPorts build is pretty darn good.)

So, what does this have to do with the Raspberry?

Here is my vision of the future of Robotic Education:

Raspberry + Mobile Platform + Camera + Python = Fast, Fun, and Effective Learning.

If you want to dig deeper into the inner workings of the platform, feel free to tear it apart. No worries, it's only $25.00!

Yes, yes, there have been PandaBoards with ARMs, BeagleBoards, BeagleBone boards, but when you look at the cost of those platforms, they are out of reach for many state schools and high schools.

In my mind's eye I see a $50.00 robot kit with awe-inspiring processing power and Matrix-like programmability doing real, hardcore robotics tasks.

IMHO, this board will take the world by storm.

-Dr T




Tuesday, March 6, 2012

Robots are cool to watch. They are technology in motion. Unlike a tablet or even an iPhone with Siri, people anthropomorphize robots. In some cases, they project more intelligence into them than they have, and sometimes the intelligence is just not evident.

So robot videos very likely deceive the viewer. Robot guys like to know what is under the hood first before they see the video.

The public likes to see the video and very rarely cares about what is under the hood.

But it matters. At least to me.

What I mean is that people seem to have an appetite for robots and have an unfailing admiration for even the most minuscule advance in robotics. I often see videos of robots which the public finds interesting. I scratch my head in wonder and say, why do people find that cool?

Well, there are some clear rules as to what makes good research.

First, all research should have a goal. These goals can be broadly defined along two dimensions: applied research or basic research.

Since this is a robotics blog, I will contrive an example. Suppose you want to solve one of the two most important applied problems in robotics: "get me a beer" (the other having something to do with "lover robots," which are surely on the horizon).

You can approach this by a process commonly known as "hacking." You write a program, you tweak, you modify. You come up with a basic script which goes something like this:

Listen for a command.

Execute Command.

Under the execute command, you subdivide the problem into:

Command = Get a beer?

if yes, your robot executes this procedure (aka subroutine):

(1) go to the kitchen

(2) Open the fridge

(3) Find the beer

(4) Grab the beer

(5) Close the fridge

(6) Find you

(7) give you the beer.

So, that is a "hacked" solution.
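The hacked dispatch described above fits in a dozen lines. This is a deliberately toy sketch: the robot's actions are stubbed out with prints, and every name in it is invented for illustration.

```python
def get_a_beer():
    """The hard-wired subroutine: seven fixed steps, no understanding."""
    for step in ["go to the kitchen", "open the fridge", "find the beer",
                 "grab the beer", "close the fridge", "find you",
                 "give you the beer"]:
        print("robot:", step)

# One command, one subroutine. That's the whole "intelligence."
COMMANDS = {"get me a beer": get_a_beer}

def execute(command):
    action = COMMANDS.get(command.lower())
    if action is None:
        return "does not compute"
    action()
    return "done"

print(execute("Get me a beer"))               # runs all seven steps
print(execute("bring me a glass of merlot"))  # -> does not compute
```

The merlot request failing at the dictionary lookup is the whole critique in miniature: nothing here generalizes, because nothing here understands.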

It is a one-off, ad hoc (done for a specific purpose). If you sell this program to someone the process is commonly known in the robotics world as fobbing (I credit this colorful term to my friend Mark Tilden).

It is not general. Suppose you said: "Bring me a glass of merlot!" While to you and me these seem like related tasks, the robot would not be able to respond. It has no program to find the merlot, uncork it, get a glass, etc.

Regardless, we can play in this playground of applied research for a long time and probably get a lot of papers, a paycheck, and a girl or guy to have dinner with on Saturday night... after cashing said paycheck.

We can ask such meaningful questions as: "How many times does the robot get the correct beer (and not a bottle of soy sauce)?"

How many times does it leave the door open?

How many types of refrigerators can we adapt to? Etc.

All very practical questions of great concern to employers of roboticists. All of some commercial value. However, in my mind such solutions are, well, like one-line jokes. Fun the first time, but really old the second and third time.

I mean, would you rather be stuck in an elevator with Rodney Dangerfield (bless his soul) or Stephen Colbert? (If you don't know who I am talking about, Google is your friend.) Rodney has some incredible one-liners, but once you have heard them, the second time they are stale (let's give him credit and say the third or fourth time). Colbert, trained as an improvisational comedian, could probably go on for hours without repeating himself.

So, back to robotics. If you want to be a robot scientist do you want to be a Rodney Dangerfield or a Stephen Colbert?

To build a robot like Colbert (asymmetric ears optional) you have to build something that can improvise sensibly in any situation. To do so, you need to have awareness and understanding of your surroundings.

So, we begin to think deeper. We begin to ask questions like: "What does it mean to 'understand' a command?" Why should the robot carry out a command in the first place? Can a robot have free will? (I would argue that any true robot could say, "Nope, get it yourself," or "Haven't you had enough already?" or "Be careful you don't trip over your gut next time you go running." You know. The robot would have free will.)

You might say: "What is a beer?" It is a kind of "object." "What is an object?"

Ah... you know, it is a thingy... that I can pick up... Can you pick up all objects? No, some things are components, like door knobs. Are door knobs objects?

You get the idea. You begin to dig deeper into the meaning of things.

Finally, after about fifteen years, you emerge from your basement cellar, having thought very deep thoughts for a very long time (and not gotten tenure, I may add), and have come up with fundamental answers.

What you realize is that to answer these questions you cannot reference just the physical world; you must reference the world of human perception. Of how the world is constructed by humans. And you must understand human psychology well enough to match your robot's view of the world with your own. You say that your robot must share your "Merkwelt" (a fancy German word for "world view"). Using fancy foreign words is really a good idea if you want to be published.

Wow... profound. Now we are getting somewhere. A robot that understands the world as you do. When you say "get me a beer" (and I am assuming the robot is in a jovial and cooperative mood), it brings you the beer.

Both robots can perform the same task with the same input. Both look equally good on video. However, they are profoundly different in the approach they took to solve the problem.

Which one is better?

I cannot decide that answer for you. I can only say that the "hack" is something one might do to whet one's appetite for robotic science, but it is not robotic science itself.

While Stephen Colbert might come up with a one-liner that floors you, his best one-liners (usually at the end of an interview with a guest) seem completely spontaneous, valid in just that moment, and, therefore, genius.

Rodney Dangerfield may have had a carefully crafted one-liner. For my money, it would be a lot more fun to be trapped in an elevator with Colbert.

Anyway, I got a lot out of reading this book: Pasteur's Quadrant: Basic Science and Technological Innovation




I think it is a must for aspiring scientists of any sort.