
Evolution Simulator Reveals the Secret to Mating Without Social Skills



Without social skills, the only way to meet a mate is by complete chance. Right? Not according to a new model that simulates the way an individual’s genes can interact with the environment. 

Finding a sexual partner is a complex business for humans. At its simplest, it requires two willing participants to be present at the same place at the same time. And unsurprisingly, humans have developed sophisticated social skills to coordinate their movements for just this purpose (as have many organisms).
But what if the participants have no social skills and so are unable to coordinate in this way? How do participants lacking social skills ever mate?  That’s an important question, and not just for humans with poor social skills. Indeed, many simple organisms reproduce sexually but do not seem to have the social skills to coordinate their movements.
This conundrum is called the social coordination problem, and sociologists have long puzzled over how socially challenged species survive.
Today we get an answer thanks to the work of Chris Marriott at the University of Washington in Seattle and Jobran Chebib at the University of Zürich in Switzerland. These guys have created a computer model that simulates the interaction between organisms, their genes and the environment in which they exist.
This model shows how individuals without social skills can still mate successfully and provides a unique insight into the way social skills can eventually evolve in these kinds of populations.
A key part of the new model is its ability to simulate the interaction between the genetic make-up of a population of individuals and their environment.  And it does this in a clever way.
In the new model, the “environment” consists of a network of nodes connected at random. An individual can explore this world by jumping from one node to the next using the links between them.
Individuals top up energy at each node but use it as they move. The net gain or loss of energy each day determines if the creature lives or dies.
At the same time, an individual with enough energy can indulge in sex with another creature that happens to be at the same location, provided that this one also has sufficient energy. This results in the birth of a new creature with characteristics of both parents. Individuals that do not have sex can also reproduce asexually.
The way individuals choose their routes is important. Each creature does this using information encoded in its “genome”: a long sequence of potential routes through the environment from one location to another.
At a specific location, the individual searches its genome for routes associated with that position. It then chooses the route that maximizes its future resources, and this determines where it moves next.
That has important consequences for an emerging population. Marriott and Chebib begin by releasing a single individual into this environment. It obviously cannot have sex and so reproduces asexually, producing another individual with the same genome.
Since both individuals have the same genome, they move through the environment in the same way, producing other individuals with the same genome or having sex to produce individuals with similar genomes.
After many generations, the result is a group of individuals with similar genomes which move through the environment in the same way. In other words, a herd.
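The core loop of such a model is easy to sketch. Here is a minimal toy version in Python; the world size, energy values, and names are illustrative assumptions, not details taken from Marriott and Chebib's actual simulation.

```python
import random

random.seed(0)

N_NODES = 20
# Random environment: each node links to a few others at random.
links = {n: random.sample([m for m in range(N_NODES) if m != n], 3)
         for n in range(N_NODES)}
food = {n: random.uniform(0.5, 1.5) for n in range(N_NODES)}

MOVE_COST = 1.0  # energy spent per move; net gain or loss decides survival


class Agent:
    def __init__(self, genome=None, node=0):
        # Genome: for each location, a preferred outgoing route.
        self.genome = genome or {n: random.choice(links[n])
                                 for n in range(N_NODES)}
        self.node = node
        self.energy = 5.0

    def step(self):
        # Follow the route the genome encodes for the current location.
        self.node = self.genome[self.node]
        self.energy += food[self.node] - MOVE_COST

    def clone(self):
        # Asexual reproduction: the offspring inherits the same genome,
        # so it will trace the same path -- the seed of a "herd".
        return Agent(genome=dict(self.genome), node=self.node)


founder = Agent()
herd = [founder] + [founder.clone() for _ in range(4)]
for _ in range(10):
    for a in herd:
        a.step()

# Identical genomes mean identical routes: after any number of steps,
# the whole group is standing on the same node.
assert len({a.node for a in herd}) == 1
```

Because every clone shares the founder's genome and starting point, the group moves in lockstep, which is exactly the herding behavior the paper describes emerging without any social mechanism.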
This leads to a breeding pattern called assortative mating, where individuals mate with similar others rather than random partners. That’s a simple consequence of being part of a herd with similar behavior patterns.
Individuals also tend to return to their birthplaces, because this information is automatically encoded in their genomes. That’s how natal philopatry emerges.
All this is in stark contrast to populations of individuals with different genomes that are dropped into the environment at random. These individuals tend to die, because they only meet other individuals by complete chance. So sexual reproduction is rare.
And when it does occur, it tends to create individuals with similar genomes that end up producing herds and indulging in assortative mating and natal philopatry in exactly the same way as the less diverse populations.
The extraordinary thing is that all these behaviors emerge from the interaction between the individuals’ genetic make-up and their environment. There are no social skills involved at all.
“We find three kinds of social organization that help solve this social coordination problem (herding, assortative mating, and natal philopatry) emerge in populations of simulated agents with no social mechanisms available to support these organizations,” say Marriott and Chebib.
That’s fascinating work and not just because it shows how mating can occur between individuals with no social skills.  Marriott and Chebib speculate that the emergence of these mating behaviours provides an environment in which social coordination skills can eventually evolve. “We conclude that the non-social origins of these social organizations around sexual reproduction may provide the environment for the development of social solutions to the same and different problems,” they say.
Many creatures learn social skills from other individuals or come under social pressure of one kind or another to behave in a specific way. But nobody has ever been sure how these skills have emerged because of the chicken-and-egg nature of the problem: you can’t learn social skills unless you’re part of a group, and you can’t be part of a group unless you have social skills.
Marriott and Chebib have found a way through this paradox based on the connection between genes and environment. Their next step? To see whether real social coordination skills evolve in the populations they produce. We’ll be watching.

 

 Toolkits for the Mind

 

Programming languages shape the way their users think—which helps explain how tech startups work and why they are able to reinvent themselves.  

When the Japanese computer scientist Yukihiro Matsumoto decided to create Ruby, a programming language that has helped build Twitter, Hulu, and much of the modern Web, he was chasing an idea from a 1966 science fiction novel called Babel-17 by Samuel R. Delany. At the book’s heart is an invented language of the same name that upgrades the minds of all those who speak it. “Babel-17 is such an exact analytical language, it almost assures you technical mastery of any situation you look at,” the protagonist says at one point. With Ruby, Matsumoto wanted the same thing: to reprogram and improve the way programmers think.
It sounds grandiose, but Matsumoto’s isn’t a fringe view. Software developers as a species tend to be convinced that programming languages have a grip on the mind strong enough to change the way you approach problems—even to change which problems you think to solve. It’s how they size up companies, products, their peers: “What language do you use?”
That can help outsiders understand the software companies that have become so powerful and valuable, and the products and services that infuse our lives. A decision that seems like the most inside kind of inside baseball—whether someone builds a new thing using, say, Ruby or PHP or C—can suddenly affect us all. If you want to know why Facebook looks and works the way it does and what kinds of things it can do for and to us next, you need to know something about PHP, the programming language Mark Zuckerberg built it with.
Among programmers, PHP is perhaps the least respected of all programming languages. A now canonical blog post on its flaws described it as “a fractal of bad design,” and those who willingly use it are seen as amateurs. “There’s this myth of the brilliant engineering that went into Facebook,” says Jeff Atwood, co-creator of the popular programming question-and-answer site Stack Overflow. “But they were building PHP code in Windows XP. They were hackers in almost the derogatory sense of the word.” In the space of 10 minutes, Atwood called PHP “a shambling monster,” “a pandemic,” and a haunted house whose residents have come to love the ghosts.
Most successful programming languages have an overall philosophy or set of guiding principles that organize their vocabulary and grammar—the set of possible instructions they make available to the programmer—into a logical whole. PHP doesn’t. Its creator, Rasmus Lerdorf, freely admits he just cobbled it together. “I don’t know how to stop it,” he said in a 2003 interview. “I have absolutely no idea how to write a programming language—I just kept adding the next logical step along the way.”
Programmers’ favorite example is a PHP function called “mysql_escape_string,” which rids a query of malicious input before sending it off to a database. (For an example of a malicious input, think of a form on a website that asks for your e-mail address; a hacker can enter code in that slot to force the site to cough up passwords.) When a bug was discovered in the function, a new version was added, called “mysql_real_escape_string,” but the original was not replaced. The result is a bit like having two similar-looking buttons right next to each other in an airline cockpit: one that puts the landing gear down and one that puts it down safely. It’s not just an affront to common sense—it’s a recipe for disaster.
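The underlying hazard is not unique to PHP. A sketch in Python's built-in sqlite3 module (an analogy, not Facebook's code) shows why hand-escaping user input is so dangerous, and why parameterized queries, which treat input purely as data, leave no escaping step to get wrong:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice@example.com')")

malicious = "x' OR '1'='1"  # classic injection payload

# Unsafe: splicing user input straight into the query string.
# The always-true OR clause makes the query match every row,
# which is exactly the leak a flawed escaping function permits.
unsafe_query = f"SELECT email FROM users WHERE email = '{malicious}'"
leaked = conn.execute(unsafe_query).fetchall()

# Safe: a parameterized query binds the input as a value, so the
# payload is compared as a literal string and matches nothing.
safe = conn.execute("SELECT email FROM users WHERE email = ?",
                    (malicious,)).fetchall()

assert leaked == [("alice@example.com",)]  # injection matched everything
assert safe == []                          # literal string matched nothing
```

With two near-identical escaping functions sitting side by side, a programmer who reaches for the wrong one gets the `leaked` outcome above without any visible error.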
Yet despite the widespread contempt for PHP, much of the Web was built on its back. PHP powers 39 percent of all domains, by one estimate. Facebook, Wikipedia, and the leading publishing platform WordPress are all PHP projects. That’s because PHP, for all its flaws, is perfect for getting started. The name originally stood for “personal home page.” It made it easy to add dynamic content like the date or a user’s name to static HTML pages. PHP allowed the leap from tinkering with a website to writing a Web application to be so small as to be imperceptible. You didn’t need to be a pro.
PHP’s get-going-ness was crucial to the success of Wikipedia, says Ori Livneh, a principal software engineer at the Wikimedia Foundation, which operates the project. “I’ve always loathed PHP,” he tells me. The project suffers from large-scale design flaws as a result of its reliance on the language. (They are partly why the foundation didn’t make Wikipedia pages available in a version adapted for mobile devices until 2008, and why the site didn’t get a user-friendly editing interface until 2013.) But PHP allowed people who weren’t—or were barely—software engineers to contribute new features. It’s how Wikipedia entries came to display hieroglyphics on Egyptology pages, for instance, and handle sheet music.
You wouldn’t have built Google in PHP, because Google, to become Google, needed to do exactly one thing very well—it needed search to be spare and fast and meticulously well engineered. It was made with more refined and powerful languages, such as Java and C++. Facebook, by contrast, is a bazaar of small experiments, a smorgasbord of buttons, feeds, and gizmos trying to capture your attention. PHP is made for making—for cooking up features quickly.
You can almost imagine Zuckerberg in his Harvard dorm room on the fateful day that Facebook was born, doing the least he could to get his site online. The Web moves so fast, and users are so fickle, that the only way you’ll ever be able to capture the moment is by being first. It didn’t matter if he made a big ball of mud, or a plate of spaghetti, or a horrible hose cabinet (to borrow from programmers’ rich lexicon for describing messy code). He got the thing done. People could use it. He wasn’t thinking about beautiful code; he was thinking about his friends logging in to “Thefacebook” to look at pictures of girls they knew.
Today Facebook is worth more than $200 billion and there are signs all over the walls at its offices: “Done is better than perfect”; “Move fast and break things.” These bold messages are supposed to keep employees in tune with the company’s “hacker” culture. But these are precisely PHP’s values. Moving fast and breaking things is in fact so much the essence of PHP that anyone who “speaks” the language indelibly thinks that way. You might say that the language itself created and sustains Facebook’s culture.
The secret weapon
If you wanted to find the exact opposite of PHP, a kind of natural experiment to show you what the other extreme looked like, you couldn’t do much better than the self-serious Lower Manhattan headquarters of the financial trading firm Jane Street Capital. The 400-person company claims to be responsible for roughly 2 percent of daily equity trading volume in the United States.
When I meet Yaron Minsky, Jane Street’s head of technology, he’s sitting at a desk with a working Enigma machine beside him, one of only a few dozen of the World War II code devices left in the world. I would think it the clear winner of the contest for Coolest Secret Weapon in the Room if it weren’t for the way he keeps talking about an obscure programming language called OCaml. Minsky, a computer science PhD, convinced his employer 10 years ago to rewrite the company’s entire trading system in OCaml. Before that, almost nobody used the language for actual work; it was developed at a French research institute by academics trying to improve a computer system that automatically proves mathematical theorems. But Minsky thought OCaml, which he had gotten to know in grad school, could replace the complex Excel spreadsheets that powered Jane Street’s trading systems.
OCaml’s big selling point is its “type system,” which is something like Microsoft Word’s grammar checker, except that instead of just putting a squiggly green line underneath code it thinks is wrong, it won’t let you run it. Programs written with a type system tend to be far more reliable than those written without one—useful when a program might trade $30 billion on a big day.
Minsky says that by catching bugs, OCaml’s type system allows Jane Street’s coders to focus on loftier problems. One wonders if they have internalized the system’s constant nagging over time, so that OCaml has become a kind of Newspeak that makes it impossible to think bad thoughts.
The catch is that to get the full benefits of the type checker, the programmers have to add complex annotations to their code. It’s as if Word’s grammar checker required you to diagram all your sentences. Writing code with type constraints can be a nuisance, even demoralizing. To make it worse, OCaml, more than most other programming languages, traffics in a kind of deep abstract math far beyond most coders. The language’s rigor is like catnip to some people, though, giving Jane Street an unusual advantage in the tight hiring market for programmers. Software developers mostly join Facebook and Wikipedia in spite of PHP. Minsky says that OCaml—along with his book Real World OCaml—helps lure a steady supply of high-quality candidates. The attraction isn’t just the language but the kind of people who use it. Jane Street is a company where they play four-person chess in the break room. The culture of competitive intelligence and the use of a fancy programming language seem to go hand in hand.
Google appears to be trying to pull off a similar trick with Go, a high-performance programming language it developed. Intended to make the workings of the Web more elegant and efficient, it’s good for developing the kind of high-stakes software needed to run the collections of servers behind large Web services. It also acts as something like a dog whistle to coders interested in the new and the difficult.
Growing up
In late 2010, Facebook was having a crisis. PHP was not built for performance, but it was being asked to perform. The site was growing so fast it seemed that if something didn’t change fairly drastically, it would start falling over.
Switching languages altogether wasn’t an option. Facebook had millions of lines of PHP code, thousands of engineers expert in writing it, and more than half a billion users. Instead, a small team of senior engineers was assigned to a special project to invent a way for Facebook to keep functioning without giving up on its hacky mother tongue.
One part of the solution was to create a piece of software—a compiler—that would translate Facebook’s PHP code into much faster C++ code. The other was a feat of computer linguistic engineering that let Facebook’s programmers keep their PHP-ian culture but write more reliable code.
The rescue squad did it by inventing a dialect of PHP called Hack. Hack is PHP with an optional type system; that is, you can write plain old quick and dirty PHP—or, if you so choose, you can tie yourself to the mast, adding annotations to let the type system check the correctness of your code. That this type checker is written entirely in OCaml is no coincidence. Facebook wanted its coders to keep moving fast in the comfort of their native tongue, but it didn’t want them to have to break things as they did it. (Last year Zuckerberg announced a new engineering slogan: “Move fast with stable infra,” using the hacker shorthand for the infrastructure that keeps the site running.)
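Python's optional type hints work on the same gradual-typing principle as Hack, and make a convenient sketch of the idea (an analogy only; Hack's annotations and checker are their own): unannotated code keeps its dynamic freedom, while annotated code can be verified by a static checker such as mypy before it ever runs.

```python
# Gradual typing in the spirit of Hack, sketched with Python's
# optional type hints. Function names and values are invented.

def quick_and_dirty(order):
    # No annotations: a static checker leaves this alone,
    # just as Hack leaves plain PHP alone.
    return order["price"] * order["qty"]

def tied_to_the_mast(price: float, qty: int) -> float:
    # Fully annotated: a checker verifies every call site.
    return price * qty

assert quick_and_dirty({"price": 2.0, "qty": 5}) == 10.0

total = tied_to_the_mast(9.99, 3)
# A call like tied_to_the_mast("9.99", 3) would be flagged by the
# static checker before the program runs, though Python itself,
# unlike Hack's checker, would not refuse to execute it.
assert round(total, 2) == 29.97
```

The appeal for a growing team is the same in both cases: the fast, loose style stays available, but critical code can opt in to machine-checked correctness.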
Around the same time, Twitter underwent a similar transformation. The service was originally built with Ruby on Rails—a popular Web programming framework created using Matsumoto’s Ruby and inspired in large part by PHP. Then came the deluge of users. When someone with hundreds of thousands of followers tweeted, hundreds of thousands of other people’s timelines had to be immediately updated. Big tweets like that would frequently overwhelm the system and force engineers to take the site down to allow it to catch up. They did it so often that the “fail whale” on the company’s maintenance page became famous in its own right. Twitter stopped the bleeding by replacing large pieces of the service’s plumbing with a language called Scala. It should not be surprising that Scala, like OCaml, was developed by academics, has a powerful type system, and prizes correctness and performance even at the expense of the individual programmers’ freedom and delight in their craft.
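The load spike Twitter faced follows directly from how timeline delivery works. A toy sketch of "fan-out on write" (numbers and names invented for illustration) shows why one tweet from a popular account is expensive:

```python
from collections import defaultdict

# Each user's timeline is precomputed so reads are instant;
# the cost is paid at write time instead.
followers = {"celebrity": [f"user{i}" for i in range(100_000)]}
timelines = defaultdict(list)

def tweet(author, text):
    writes = 0
    # Fan-out on write: push the tweet into every follower's timeline.
    for follower in followers.get(author, []):
        timelines[follower].append((author, text))
        writes += 1
    return writes

# A single tweet becomes 100,000 timeline writes -- the kind of
# burst that used to summon the fail whale.
assert tweet("celebrity", "hello") == 100_000
```

A language and runtime built for throughput and correctness under this kind of bursty load, rather than for programmer convenience, is precisely what Scala offered.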


Much as startups “mature” by finally figuring out where their revenue will come from, they can cleverly use the power of programming languages to manipulate their organizational psychology. Programming-language designer Guido van Rossum, who spent seven years at Google and now works at Dropbox, says that once a software company gets to be a certain size, the only way to stave off chaos is to use a language that requires more from the programmer up front. “It feels like it’s slowing you down, because you have to say everything three times,” van Rossum says. That is why many startups wait as long as they can before making the switch. You lose some of the swaggering hackers who got you started, and the possibility that small teams can rush out new features. But a more exacting language helps people across the company understand one another’s code and gives your product the stability needed to be part of the furniture of daily life.

That software startups can perform such maneuvers might even help explain why they can be so powerful. The expanding reach of computers is part of it. But these companies also have a unique ability to remake themselves. As they change and grow, they can do more than just redraw the org chart. Because they are built in code, they can do something far more drastic. They can rewire themselves, their culture, the very way they think.

Robots That Learn Through Repetition, Not Programming

A startup says getting a robot to do things should be less about writing code and more like animal training.  

Eugene Izhikevich thinks you shouldn’t have to write code in order to teach robots new tricks. “It should be more like training a dog,” he says.  “Instead of programming, you show it consistent examples of desired behavior.”
Izhikevich’s startup, Brain Corporation, based in San Diego, has developed an operating system for robots called BrainOS to make that possible. To teach a robot running the software to pick up trash, for example, you would use a remote control to repeatedly guide its gripper to perform that task. After just minutes of repetition, the robot would take the initiative and start doing the task for itself. “Once you train it, it’s fully autonomous,” says Izhikevich, who is cofounder and CEO of the company.
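Training by demonstration can be sketched in a few lines: record (sensor reading, operator command) pairs while a human drives the robot, then imitate by recalling the command from the most similar recorded situation. This nearest-neighbor toy is an illustrative assumption; BrainOS's real approach, built on neural networks, is far more sophisticated.

```python
demonstrations = []  # recorded (sensor_reading, command) pairs

def record(sensor_reading, command):
    # Called while the human operator drives the robot.
    demonstrations.append((sensor_reading, command))

def act(sensor_reading):
    # Nearest-neighbor imitation: copy the demonstrated command
    # from the most similar situation seen during training.
    nearest = min(demonstrations,
                  key=lambda d: abs(d[0] - sensor_reading))
    return nearest[1]

# Operator demonstrates: turn away when the left side is blocked.
record(sensor_reading=0.2, command="turn_right")  # obstacle close
record(sensor_reading=0.9, command="forward")     # path clear

# After a few repetitions, the robot acts on its own.
assert act(0.25) == "turn_right"
assert act(0.8) == "forward"
```

The key property is that no task-specific code was written: the behavior lives entirely in the recorded examples, so retraining means re-demonstrating, not reprogramming.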
Izhikevich says the approach will make it easier to produce low-cost service robots capable of simple tasks. Programming robots to behave intelligently normally requires significant expertise, he says, pointing out that the most successful home robot today is the Roomba, released in 2002. The Roomba is preprogrammed to perform one main task: driving around at random to cover as much of an area of floor as possible.
Brain Corporation hopes to make money by providing its software to entrepreneurs and companies that want to bring intelligent, low-cost robots to market. Later this year, Brain Corporation will start offering a ready-made circuit board with a smartphone processor and BrainOS installed to certain partners. Building a trainable robot would involve connecting that “brain” to a physical robot body.
The chip on that board is made by mobile processor company Qualcomm, which is an investor in Brain Corporation. At the Mobile Developers Conference in San Francisco last week, a wheeled robot with twin cameras powered by one of Brain Corporation’s circuit boards was trained live on stage.
In one demo, the robot, called EyeRover, was steered along a specific route around a chair, sofa, and other obstacles a few times. It then repeated the route by itself. In a second demo, the robot was taught to come when a person beckoned to it. One person held one hand close to the robot’s twin cameras, so that EyeRover could lock onto it. A second person then maneuvered the robot forward and back in synchronization with the trainer’s hand. After being led through a rehearsal of the movements just twice, the robot correctly came when summoned.
Those quick examples are hardly sophisticated. But Izhikevich says more extensive training conducted over days or weeks could teach a robot to perform a more complicated task such as pulling weeds out of the ground. A company would need to train only one robot, and could then copy its software to new robots with the same design before they headed to store shelves.
Brain Corporation’s software is based on a combination of several different artificial intelligence techniques. Much of the power comes from using artificial neural networks, which are inspired by the way brain cells communicate, says Izhikevich. Brain Corporation was previously collaborating with Qualcomm on new forms of chip that write artificial neural networks into silicon (see “Qualcomm to Build Neuro-Inspired Chips”). Those “neuromorphic” chips, as they are known, are purely research projects for the moment. But they might eventually offer a more powerful and efficient way to run software like BrainOS.

Brain Corporation previously experimented with reinforcement learning, where a robot starts out randomly trying different behaviors, and a trainer rewards it with a virtual treat when it does the right thing. The approach worked, but had its downsides. “Robots tend to harm themselves when they do that,” says Izhikevich.

Training robots through demonstration is a common technique in research labs, says Manuela Veloso, a robotics professor at Carnegie Mellon University. But the technique has been slower to catch on in the world of commercial robotics, she says. The only example on the market is the two-armed Baxter robot, aimed at light manufacturing. It can be trained in a new production line task by someone manually moving its arms to direct it through the motions it needs to perform (see “This Robot Could Transform Manufacturing”).
Sonia Chernova, an assistant professor in robotics at Worcester Polytechnic Institute, says that most other industrial robot companies are now working to add that type of learning to their own robots. But she adds that training could be tricky for mobile robots, which typically have to deal with more complex environments.
Izhikevich acknowledges that training a robot via demonstration, while faster than programming it, produces less predictable behavior. You wouldn’t want to use the technique to ensure that an autonomous car could detect jaywalkers, for example, he says. But for many simple tasks, it could be acceptable. “Missing 2 percent of the weeds or strawberries you were supposed to pick is okay,” he says. “You can get them tomorrow.”

 Amazon Robot Contest May Accelerate Warehouse Automation

 

Robots will use the latest computer-vision and machine-learning algorithms to try to perform the work done by humans in vast fulfillment centers. 

Packets of Oreos, boxes of crayons, and squeaky dog toys will test the limits of robot vision and manipulation in a competition this May. Amazon is organizing the event to spur the development of more nimble-fingered product-packing machines.

Participating robots will earn points by locating products sitting somewhere on a stack of shelves, retrieving them safely, and then packing them into cardboard shipping boxes. Robots that accidentally crush a cookie or drop a toy will have points deducted. The people whose robots earn the most points will win $25,000.
Amazon has already automated some of the work done in its vast fulfillment centers. Robots in a few locations send shelves laden with products over to human workers who then grab and package them. These mobile robots, made by Kiva Systems, a company that Amazon bought in 2012 for $775 million, reduce the distance human workers have to walk in order to find products. However, no robot can yet pick and pack products with the speed and reliability of a human. Industrial robots that are already widespread in several industries are limited to extremely precise, repetitive work in highly controlled environments.
Pete Wurman, chief technology officer of Kiva Systems, says that about 30 teams from academic departments around the world will take part in the challenge, which will be held at the International Conference on Robotics and Automation in Seattle (ICRA 2015). In each round, robots will be told to pick and pack one of 25 different items from a stack of shelves resembling those found in Amazon’s warehouses. Some teams are developing their own robots, while others are adapting commercially available systems with their own grippers and software.
The 25 items that participating robots will need to retrieve from shelves.
The challenge facing the robots in Amazon’s contest will be considerable. Humans have a remarkable ability to identify objects, figure out how to manipulate them, and then grasp them with just the right amount of force. This is especially hard for machines to do if an object is unfamiliar, awkwardly shaped, or sitting on a dark shelf with a bunch of other items. In the Amazon contest, the robots will have to work without any remote guidance from their creators.
“We tried to pick out a variety of different products that were representative of our catalogue and that pose different kinds of grasping challenges,” Wurman said. “Like plastic wrap; difficult-to-grab little dog toys; things you don’t want to crush, like the Oreos.”
The video below shows the approach taken by a team at the University of Colorado. The team is using off-the-shelf software and building a robot arm specialized for the task, says Dave Coleman, a PhD student involved in the project.
The contest could offer a way to judge the progress that has been made in the past few years, when some cheaper, safer, and more adaptable robots have emerged (see “How Technology Is Destroying Jobs”) thanks to advances in the technologies underlying machine dexterity. New types of robot manipulators are making machines less ham-handed at picking up fiddly or awkward objects, for example. Several startups are developing robot hands that seek to copy the flexibility and sense of touch found in human digits. Progress in machine learning could help robots perform far more sophisticated object manipulation in coming years.
A key breakthrough in this area came in 2006, when a group of researchers led by Andrew Ng, then at Stanford and now at Baidu, devised a way for robots to work out how to manipulate unfamiliar objects. Instead of writing rules for how to grasp a specific object or shape, the researchers enabled their robot to study thousands of 3-D images and learn to recognize which types of grip would work for different shapes. This allowed it to figure out suitable grips for new objects.
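The shift was from hand-written rules to learned generalization. A toy sketch of the idea (the geometric features, labels, and examples here are invented; the Stanford work learned visual features from thousands of 3-D images) shows how labeled examples let a robot pick a grip for an object it has never seen:

```python
# Each training example: (width_cm, height_cm, is_rigid) -> grip type.
training = [
    ((3.0, 10.0, 1), "wrap_fingers"),   # e.g. a bottle
    ((2.0, 12.0, 1), "wrap_fingers"),
    ((15.0, 2.0, 1), "pinch_edge"),     # e.g. a plate
    ((20.0, 1.5, 1), "pinch_edge"),
    ((8.0, 8.0, 0), "gentle_scoop"),    # e.g. a plush toy
]

def predict_grip(features):
    # 1-nearest-neighbor over simple geometric features: reuse the
    # grip that worked for the most similar training object.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training, key=lambda ex: dist(ex[0], features))[1]

# A new, unseen object: tall and thin, so it inherits the grip
# learned from bottle-like shapes.
assert predict_grip((2.5, 11.0, 1)) == "wrap_fingers"
```

No rule for "bottle" was ever written; the grip falls out of similarity to past examples, which is what lets the approach scale to Amazon's sprawling catalogue.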
In recent years, robotics researchers have increasingly used a powerful machine-learning approach known as deep learning to improve these capabilities (see “10 Breakthrough Technologies 2013: Deep Learning”). Ashutosh Saxena, a member of Ng’s team at Stanford and now an assistant professor at Cornell University, is using deep learning to train a robot that will take part in the Amazon challenge. He is working with one of his students, Ian Lenz.
While the Amazon challenge might seem simple, Saxena believes it could quickly make an impact in the real world. “If robots are able to handle even the light types of grasping tasks the contest proposes,” he says, “we could actually start to see a lot of robots helping people with different tasks.”
