CHICAGO AXIS 2014 INTERNATIONAL ART FESTIVAL

Overlords, an installation by Tom Estes at last year's AXIS International Art Festival in Chicago, an event which stands to anchor important art initiatives for years to come.

CHICAGO AXIS 2014 INTERNATIONAL ART FESTIVAL will take place this year from September 19th – 21st. The exhibition will be held at VENUE ONE in Chicago's West Loop (address: 1044 W Randolph St, Chicago, IL 60607; venueonechicago.com).

AXIS is focused on becoming a leading international contemporary art institution across an array of multimedia art disciplines, striving to respect traditional art practices while examining new media with educational overtones.

The festival aims to bridge the gap between cultures, generations and educational themes, and intends to become an important entity in the international contemporary art scene.

http://youtu.be/r1pOZ6GiQ-Q

Final Application due: June 14th

Applicants will be notified of acceptance within 15 to 30 days of submission.

Booth installation view available: after 2pm, September 17th
Artwork display: September 18th
Exhibitor's social: TBA
First View: September 19th, throughout the day
Art fair open: September 20th and 21st
Times are listed in the exhibitor information file.

 

CLICK BELOW FOR EXHIBITOR’S APPLICATION INFORMATION

https://www.dropbox.com/s/o2vhqnqx6yvesi2/CHICAGO-AXIS2014ExhibitorINFO.pdf

FAIR SCHEDULE

FIRST VIEW
September 19th Friday 11am – 6pm
Proceeds benefit Prak-Sis Contemporary Art Association

SHOW HOURS
September 19th Fri 6pm- 9pm
September 20th Sat 9:30am- 7pm
September 21st Sun 11 am- 6pm

ADMISSION
Free for VVIP cardholders
First View Friday $100

GENERAL
Day Pass $10
Fun Pass $85 (10 tickets)

Exhibitor application materials: submit a PDF file to submit.axis@gmail.com.

For further information go to: http://chicago-axis.com/FOUNDATION

Image credit: Overlords by Tom Estes

 


Marfa Digital Residency Announced

Image: The Temptation of Christ, a digital installation by artist Tom Estes on show in Marfa, Texas. An interdisciplinary work that conjures up a kind of ghostly field of material and imagined objects.

The Biennial Project’s Marfa Artist Residency provides an international artist with the opportunity to participate in a career-making event. It took over a year of intense fundraising from local and international private patrons for BRMAC to realize the Marfa Digital Residency.

Now, you may ask, why Marfa, Texas? And for those of you who might not know – prepare to get all fired up, because the tiny town of Marfa, perched on the high plains of the Chihuahuan Desert, is nothing less than an art-world station of the cross, like Art Basel or Documenta in Germany. It is a blue-chip arts destination for the sort of glamorous scenesters who visit Amsterdam for the Rijksmuseum and the drugs.

BRMAC scoured the world to identify the most talented artist whose body of work demonstrated the vision, maturity and collaborative ability to contribute to the Overall Biennial Project Oeuvre. And they have finally announced the selected artist for the Marfa Digital Residency: London-based artist Tom Estes.

As Artist-in-Residence, the artist selected for this prestigious residency organised by the Biennial Roadshow Marfa Advisory Council (BRMAC) will have full access to the support of the sophisticated creative apparatus of The Biennial Project to develop his/her own work and to contribute to the greater good of The Biennial Project. An opportunity for profound dialogues, an echo chamber for the inspirational and the social with political dimensions. Better than a poke in the eye with a sharp stick. Much better.

Marfa was founded in the early 1880s as a railroad stop; the population increased during World War II, but the growth stalled and reversed somewhat during the late 20th century.  It all started to kick off when the acclaimed minimalist artist Donald Judd left New York City in the 1970s for this dusty dot of a town. He wanted to escape the art scene he claimed to disdain. With the help of the DIA Foundation, Judd acquired an entire Army base, and before he died in 1994, he filled it with art, including light installations by Dan Flavin and Judd’s own signature boxes.  Attractions include Building 98, the Chinati Foundation, artisan shops, historical architecture, a classic Texas town square, modern art installments, art galleries, and the Marfa lights. Today, it’s a whole creative community. An extremely fashionable and well-connected creative community.

This year's Artist-in-Residence Tom Estes will create a new state-of-the-art work to be shown within the main site for Biennial Roadshow Marfa, the El Cosmico Centre for Artistic Development. Through performance and sound, Estes hopes to reawaken a sense of wonder for what is nearest.

For this new commission Estes plans to unveil a new work that will incorporate Live Art encompassing telecommunications, technologies, wireless communications, computer science, telematics and live video streaming. Informed by social and cultural research, Estes hopes to dissolve the space between the creative artistic hubs of London in the U.K. and Marfa, Texas. The actual movement of the material becomes the agent of its final composition – reinforcing the idea of the digital image as a temporal entity that can denote both a process and an outcome.

As we move through a place we leave behind what is called in forensic science 'Impression Evidence'. This includes any markings produced when one object comes into contact with another, leaving behind some kind of indentation or print. Commonly encountered evidence includes footwear impressions, tire marks and markings created by tools and similar instruments. But we also collect impressions through our visual experience – the patina of a building, the shape of the sky – which are then recognized, stored, and aid in forming our understanding of a place. For this new commission in Marfa, Estes wants to draw on these forms as a way of understanding them within our new cyber-reality.

 

The work will be on display on Friday, April 4th, 2014, when all selected artwork will be seen by the contemporary artists and artisans who inhabit or visit this Western hamlet, and by The Biennial Project's massive entourage, who will be in Marfa that entire week. The Marfa Digital Residency will take place from March 30th thru April 6th, 2014.

 

Marfa Digital Residency

El Cosmico
802 S Highland Ave Marfa, TX 79843 United States
 
March 30th thru April 6th, 2014
 
 

Directions: http://www.yelp.co.uk/map/el-cosmico-marfa
Program Schedule: http://biennialroadshow.com/
http://the-biennial-project.com/blogengine.net/post/2014/01/11/.aspx


Facebook buys Oculus VR for $2 billion

Virtual reality has an unexpected new champion. Facebook has announced its surprise purchase of Oculus VR, the maker of the Oculus Rift virtual reality headset, for $2 billion.

Imagine sharing not just moments with your friends online but entire experiences. Say hello to the Oculus Rift, the virtual reality headset that's got the tech and gaming community abuzz. And now Mark Zuckerberg has put together a deal comprising $400 million in cash and 23.1 million shares of Facebook stock in order to buy Oculus VR. Not bad for a company that first shot to fame with a Kickstarter campaign for the Oculus Rift headset.

“Oculus’s mission is to enable you to experience the impossible. Their technology opens up the possibility of completely new kinds of experiences,” said Zuckerberg.

The obvious question, though, is what does Facebook want with what was a start-up company aimed primarily at hardcore PC gamers? The response from hardcore gamers and developers has been predictably reactionary, with Minecraft creator Markus 'Notch' Persson immediately cancelling an Oculus Rift version of Minecraft. In fact, he got so angry about the whole deal that he went on to write a blog post about how virtual reality is going to change the world but Facebook is 'creepy'. And, as you might imagine, many ordinary Kickstarter backers have also been demanding their money back.

$2 billion does seem an awful lot when virtual reality has been a commonplace idea for many years now and Oculus VR doesn't own any important patents on the technology. This is evident from Sony's recently unveiled Project Morpheus, which many have already described as superior to the Oculus Rift.

Back in October 2010, Stanford University graduates Kevin Systrom and Mike Krieger launched a new iPhone application, Instagram, yet another oddly named tech start-up in a crowded field of hopefuls. Two years later, the two twenty-somethings sold their photo-sharing service—which had about a dozen employees and no revenue—to Facebook Inc. for $1 billion in cash and stock.

So, as you might imagine, gaming isn't really what Facebook is interested in. The Oculus Rift was created by Palmer Luckey, a young guy from California, smart as a whip and obsessed with virtual reality. After amassing a serious collection of the day's top virtual reality tech, he realized nothing came close to the Matrix-like experience he wanted. So, he decided to build it himself. Perhaps what Facebook creator and CEO Mark Zuckerberg is doing is buying up not the technology but any potential future competition. Smile, guys, you're now multimillionaires.

As Zuckerberg explained Facebook's reasons for the deal:

'Mobile is the platform of today, and now we're also getting ready for the platforms of tomorrow,' he wrote. 'Oculus has the chance to create the most social platform ever, and change the way we work, play and communicate.'

‘This is just the start’, wrote Zuckerberg. ‘After games, we’re going to make Oculus a platform for many other experiences. Imagine enjoying a court side seat at a game, studying in a classroom of students and teachers all over the world or consulting with a doctor face-to-face – just by putting on goggles in your home.’

‘Virtual reality was once the dream of science fiction. But the internet was also once a dream, and so were computers and smartphones. The future is coming and we have a chance to build it together. I can’t wait to start working with the whole team at Oculus to bring this future to the world, and to unlock new worlds for all of us’.

But how is the Rift different from prior attempts at head-mounted VR? Instead of a 30- or 40-degree field of view (a screen with edges), the Rift offers a 110-degree field of view. There's no discernible edge to the Rift's curved 7", 1280 x 800 display. The screen is split into 640 x 800 halves, lowering the resolution per eye but allowing the world to be rendered stereoscopically – that is, pairing two side-by-side images viewed from slightly different angles (parallax) to make things appear 3D. You're fully immersed in the game-world.
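A few lines of arithmetic make those display figures concrete. The pixels-per-degree estimate is a simplification: it assumes the 110-degree field of view maps linearly onto each 640-pixel half, ignoring lens distortion.

```python
# Per-eye resolution and rough angular pixel density for the Rift dev
# kit, using the figures quoted above (1280 x 800 panel, 110-deg FOV).

PANEL_W, PANEL_H = 1280, 800   # full panel resolution
FOV_DEG = 110                  # quoted horizontal field of view

eye_w = PANEL_W // 2           # panel is split into side-by-side halves
per_eye = (eye_w, PANEL_H)
pixels_per_degree = eye_w / FOV_DEG   # linear-mapping assumption

print(per_eye)                        # (640, 800)
print(round(pixels_per_degree, 1))    # 5.8
```

That ~5.8 pixels per degree is why resolution, noted below, is the Rift's most consistently cited drawback: a desktop monitor filling 40 degrees at 1280 pixels wide delivers several times that density.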

The headset uses a gyroscope, accelerometer, and magnetometer to translate head movements into changes in perspective in the virtual world. In the past, high latency – the time it takes the system to translate physical movements into virtual ones – has been a limiting factor. John Carmack believes the "magic number for immersive VR" is 20-millisecond latency. Get it faster than 20 milliseconds, and humans can't detect a delay. (For more detail, check out this dense meditation on VR latency on Carmack's blog.)

The Rift’s latency from head movement to game engine is down to 2 milliseconds, thanks to a proprietary chip (as opposed to the off-the-shelf sensor used in the early prototype). But total latency is still above that 20 millisecond threshold, in large part due to how long it takes the LCD screen to update. (Improving screen technology, made cheaper and in smaller form factors, will likely improve this lag in coming years.)
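To see why the 20-millisecond threshold is hard to hit, here is a sketch of a head-to-photon latency budget. Only the 2 ms tracker figure comes from the text above; the render and LCD numbers are illustrative assumptions chosen to show how the total can still exceed the threshold.

```python
# Illustrative head-to-photon latency budget for a VR headset.
# Only the 2 ms sensor figure is quoted in the article; the other
# two entries are assumptions for the sake of the arithmetic.

THRESHOLD_MS = 20  # Carmack's "magic number" for imperceptible delay

budget = {
    "head tracking (sensor to engine)": 2,   # quoted in the article
    "game engine / render frame": 13,        # assumption: ~one 75 Hz frame
    "LCD pixel switching": 15,               # assumption: slow LCD response
}

total = sum(budget.values())
print(total, "ms")                                            # 30 ms
print("imperceptible" if total < THRESHOLD_MS else "perceptible lag")
```

The point of the exercise: even a near-instant tracker leaves the budget dominated by rendering and the display itself, which is why the article singles out screen technology as the main obstacle.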

Resolution is a consistently noted drawback to the Rift. And it still only allows for head tracking, as opposed to full body position tracking. You can’t move your hand in the real world and see it in the virtual world.

But neither is doomed to remain a major stumbling block. Compact displays are moving to higher resolution at lower price points. And as for body tracking, although Carmack says the old Kinect added too much latency, maybe the newly released Kinect 2 would work better. Or maybe a custom version of Leap Motion’s super high-resolution (1/100 mm) gesture tracking technology would pair nicely with a set of VR goggles. (Any developers happen to get both?)

It’s hard not to want an Oculus Rift in your home right now. But there’s no word yet on the consumer version—only “we’re working tirelessly to make it available as soon as possible.” The firm has shipped 6,000 developer kits and aims to send the other 1,500 out by the end of May. The all-important developer phase will likely go through the end of the year. And well it should. Anticipation and expectations will be sky-high, and delivering that first, jaw dropping moment at home will be critical.

 

Sources:

http://singularityhub.com/2013/05/31/oculus-rift-is-breathing-new-life-into-the-dream-of-virtual-reality/

http://metro.co.uk/2014/03/26/facebook-buys-oculus-rift-vr-company-for-2-billion-4678798/


Controlled Mid-Air Collision With A Planet

What a beautiful day for sticking a cucumber through someone's letterbox and shouting, "Help, help, the Martians have landed!" Because NASA's Morpheus Project has developed and tested a prototype planetary lander capable of vertical takeoff and landing.

They say a “great” landing is one that lets you use the ship another time. NASA’s strategic goal of extending human presence across the solar system requires an integrated architecture. Such architecture would include advanced, robust space vehicles for a variety of lunar, asteroid, and planetary missions; automated hazard detection and avoidance technologies to reduce risks to crews, landers, and precursor robotic payloads; and in situ resource utilization to support crews during extended stays on extraterrestrial surfaces and to provide for their safe return to Earth. NASA’s Advanced Exploration Systems (AES) portfolio includes several fast-paced projects that are developing these necessary capabilities.

So ladies and gentlemen, let us introduce NASA’s rather groovy Project Morpheus – a prototype planetary lander capable of vertical takeoff and landing. For Range Safety purposes the Morpheus#1 prototype falls into the category of guided suborbital reusable rocket. Morpheus uses a liquid oxygen (LOX)/liquid methane propulsion system with up to 321 second burn time. Specifically, the Morpheus project and the Autonomous Landing and Hazard Avoidance Technology (ALHAT) project provide technological foundations for key components of the greater exploration architecture necessary to move humans beyond low Earth orbit (LEO).

Project Morpheus is a NASA project to develop a vertical takeoff and landing (VTOL) test vehicle, the Morpheus Lander, in order to demonstrate a new nontoxic spacecraft propellant system (methane and oxygen) and autonomous landing and hazard detection technology. The vehicles are NASA-designed robotic landers that will be able to land and take off with 1,100 pounds (500 kg) of cargo on the Moon. The prospect is an engine that runs reliably on propellants that are not only cheaper and safer here on Earth, but could also potentially be manufactured on the Moon or even Mars. (See: in-situ resource utilization.)

The Alpha prototype lander was manufactured and assembled at NASA's Johnson Space Center (JSC) and Armadillo Aerospace's facility near Dallas. The prototype lander is a "spacecraft" about 12 ft (3.7 m) in diameter that weighs approximately 2,300 lb (1,000 kg) and consists of four silver spherical propellant tanks topped by avionics boxes and a web of wires.

The project is trying out cost- and time-saving "lean development" engineering practices. Other project activities include appropriate ground operations, flight operations, range safety and the establishment of software development procedures. Landing pads and control centers were also constructed. From the project start in July 2010, about $10 million was spent on materials over the following 3+ years, so the Morpheus project is considered lean and low-cost for NASA. In 2012 the project employed 25 full-time team members and 60 students.

Project Morpheus started in July 2010 and was named after Morpheus, the Greek god of dreams. The Morpheus spacecraft was derived from the experimental lander produced by Project M with the assistance of Armadillo Aerospace. Project M was a NASA initiative to design, develop and land a humanoid robot on the lunar surface in 1,000 days. Work on some of the lander's systems began in 2006, when NASA's Constellation program planned a human return to the Moon.

In the same year, 2006, Armadillo Aerospace entered the first Pixel rocket lander into the Lunar Lander Challenge, part of NASA's Centennial Challenges.

The Morpheus #1 Unit A test vehicle was first hot-fired on 15 April 2011.

Morpheus's new 4,200 pounds-force (19,000 N) engine permitted NASA to design a larger vehicle than its parent, a copy of Armadillo Aerospace's Pixel rocket lander. The engine was upgraded again to 5,000 pounds-force (22,000 N) in 2013. A new design of landing gear was part of the mechanical changes. NASA also replaced the avionics, including power distribution and storage, instrumentation, the flight computer, communications and software. The enhanced landing system permits Morpheus, unlike the Pixels, to land without help from a pilot.
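Using the thrust and mass figures quoted in this post, a back-of-the-envelope thrust-to-weight calculation shows why a lander sized for Earth testing has generous margin elsewhere. Propellant load is ignored here, an optimistic assumption, so these are illustrative numbers only.

```python
# Rough thrust-to-weight for the upgraded Morpheus engine (22,000 N)
# against the ~1,000 kg vehicle mass quoted above. Propellant mass is
# deliberately ignored (an assumption), so real margins are smaller.

THRUST_N = 22_000          # upgraded engine, 5,000 lbf
VEHICLE_MASS_KG = 1_000    # approximate mass from the article

GRAVITY = {"Earth": 9.81, "Mars": 3.71, "Moon": 1.62}  # m/s^2

for body, g in GRAVITY.items():
    twr = THRUST_N / (VEHICLE_MASS_KG * g)
    print(f"{body}: T/W = {twr:.2f}")
```

A thrust-to-weight ratio above 1 is the bare minimum for vertical takeoff; the Moon's six-fold advantage is part of why a vehicle tested under Earth gravity is a credible precursor for lunar cargo landings.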


Solar Collector Captures The Power Of 2,000 Suns

A recent study by Greenpeace International revealed that “it would only take 0.04% of the solar energy from the Sahara desert to cover the electricity demand of Europe”. 

The global increase in the level of awareness regarding CO2 emissions has been a strong push for “clean” or “green” sources of energy. Consequently, there has been a surge in implementing concentrator solar photovoltaic systems in order to generate free power from the sun by converting sunlight into electricity with zero emissions and no moving parts.

A team at IBM recently developed what they call a High Concentration Photo Voltaic Thermal (HCPVT) system that is capable of concentrating the power of the sun 2,000 times. How many suns is that? Well if you have perfect vision you can see in dark skies about six thousand stars, give or take a few. That is the total number of stars above 6th magnitude visible to someone with 20/20 vision in a dark location.

The team even claims to be able to concentrate energy safely up to 5,000 times, which is perhaps a milestone in the efficiency of CSP systems. So imagine that many suns all shining down on the earth with the same intensity as our own sun. The trick is that each solar PV cell is cooled using technology developed for supercomputers: microchannels inspired by blood vessels, but only a few tens of micrometers in width, pipe liquid coolant in and extract heat "10 times more effective than with passive air cooling."

Concentrated photovoltaic (CPV) systems allow the conversion of sunlight into electricity with higher efficiencies than conventional flat plate collectors. This is mainly due to the use of highly efficient multi-junction PV cells and to the increasing conversion yield of chips as a function of irradiation. In addition, CPV systems offer cost advantages over flat plate collectors because the semiconductor area is reduced by the concentration factor of the lens, which is typically 500. However, the packaging of current commercial CPV systems is not yet mature. For example, current systems only collect electrical power and dissipate thermal power to the ambient surroundings, which causes a general problem of reduced efficiency due to high chip temperatures.

Nevertheless, the use of optical concentrators to obtain high optical intensities means that the solar cells located at the focal point must be cooled. Thus, the initial objective of the project is to improve the overall packaging of CPV chips such that the overheating problems are minimized, while the thermal waste energy is collected as a useful resource for a multi-effect boiling (MEB) desalination process. The first milestone of this project is to design a high-performance cooler and to optimize the thermal contact of the photovoltaic chip with this cooler. For that project, our group's expertise in processor chip cooling is leveraged to achieve high-performance liquid cooling of the CPV chip. Intermediate targets in this area are to demonstrate the removal of a 100–200 W/cm² heat load with ΔT < 20 °C on a solar cell package and to verify the thermal cycling reliability of the PV cell–cooler assembly.
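Those cooling targets imply a hard bound on the cooler itself: dividing the allowed temperature rise by the heat flux gives the maximum area-specific thermal resistance the design can tolerate. This is simple arithmetic on the published targets, not IBM's actual design data.

```python
# Maximum area-specific thermal resistance implied by the stated
# targets: remove 100-200 W/cm^2 while keeping the temperature rise
# below 20 K. R_max = delta_T / q.

DELTA_T_K = 20.0   # allowed chip-to-coolant temperature rise, K

for q_w_per_cm2 in (100.0, 200.0):
    r_max = DELTA_T_K / q_w_per_cm2   # K*cm^2/W
    print(f"q = {q_w_per_cm2:.0f} W/cm^2 -> R_max = {r_max:.2f} K*cm^2/W")
```

A budget of 0.1 K·cm²/W at the high end is an aggressive figure for a packaged assembly, which is why the project leans on microchannel liquid cooling rather than passive air cooling.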

Another objective of this project is to optimize the design of a (non-imaging) concentrating optical system, which would then be integrated with the photovoltaic chip package in order to obtain higher optical concentration factors. Consequently, it is essential to optimize the conversion yield of the multi-junction photovoltaic chip and yet collect the remaining thermal energy at a high temperature level while keeping the chip’s temperature low. Combining the electrical and thermal output will not only allow the overall efficiency of the system to be pushed beyond 50% (which outperforms the capabilities of the vast majority of competing solar technologies), but will also make the technology more profitable than current solar technologies.

Image: IBM/ DOE

Source: https://www.zurich.ibm.com/st/energy/photovoltaic.html


Birth of a Snowflake

The common adage that no two snowflakes are alike is almost certainly true. Each snowflake contains around a quintillion molecules, resulting in a nearly endless number of possible combinations.

Snowflakes form in a wide variety of intricate shapes, leading to the popular expression that “no two are alike”. Although possible, it is very unlikely for any two randomly selected snowflakes to appear exactly alike due to the many changes in temperature and humidity the crystal experiences during its fall to earth. A non-aggregated snowflake often exhibits six-fold radial symmetry. The initial symmetry can occur because the crystalline structure of ice is six-fold. The six “arms” of the snowflake, or dendrites, then grow independently, and each side of each arm grows independently.

“The number of possible arrangements of the 10^18 water molecules [in a snowflake] is such a large number that it dwarfs the number of atoms in the universe many, many times over,” Joe Hanson of It’s Okay To Be Smart told HuffPost Science. “Somewhere, a supercomputer is weeping just thinking about having to calculate a number that large.”
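To get a feel for the scale Hanson describes, Stirling's approximation (via the log-gamma function) estimates the order of magnitude of n! without computing it. The ~10^80 atoms-in-the-observable-universe figure used for comparison is a commonly cited estimate, not a number from the article.

```python
# Order-of-magnitude estimate of n! for n = 10^18 molecules, using
# log10(n!) = lgamma(n + 1) / ln(10). The factorial itself is far too
# large to compute directly, but its logarithm is cheap.

import math

n = 1e18                                         # molecules in one snowflake
log10_nfact = math.lgamma(n + 1) / math.log(10)  # ~1.76e19

ATOMS_LOG10 = 80   # common estimate of atoms in the observable universe

print(f"n! is roughly 10^(10^{math.log10(log10_nfact):.1f})")
print(f"its exponent exceeds the universe's atom count "
      f"{log10_nfact / ATOMS_LOG10:.2e}-fold")
```

Even the *exponent* of the arrangement count is about 10^19, so the number itself dwarfs 10^80 beyond any everyday comparison, which is Hanson's point.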

Most snowflakes are not completely symmetric. The micro-environment in which the snowflake grows changes dynamically as the snowflake falls through the cloud, and tiny changes in temperature and humidity affect the way in which water molecules attach to the snowflake. Since the micro-environment (and its changes) is very nearly identical around the snowflake, each arm can grow in nearly the same way. However, being in the same micro-environment does not guarantee that each arm grows the same; indeed, for some crystal forms it does not, because the underlying crystal growth mechanism also affects how fast each surface region of a crystal grows. Empirical studies suggest that less than 0.1% of snowflakes exhibit the ideal six-fold symmetric shape.

Snowtime is a 2-minute “microscopic time-lapse” by Vyacheslav Ivanov and it shows how mesmerizing a bloom of budding ice crystals can be.

Snowflakes are formed “when an extremely cold water droplet freezes onto a pollen or dust particle in the sky. This creates an ice crystal. As the ice crystal falls to the ground, water vapor freezes onto the primary crystal, building new crystals – the six arms of the snowflake.”

A snowflake is either a single ice crystal or an aggregation of ice crystals which falls through the Earth’s atmosphere. They begin as snow crystals which develop when microscopic supercooled cloud droplets freeze. Snowflakes come in a variety of sizes and shapes. Complex shapes emerge as the flake moves through differing temperature and humidity regimes, such that individual snowflakes are nearly unique in structure. Snowflakes encapsulated in rime form balls known as graupel. Snowflakes appear white in color despite being made of clear ice. This is due to diffuse reflection of the whole spectrum of light by the small crystal facets.

Snow crystals form when tiny supercooled cloud droplets (about 10 μm in diameter) freeze. These droplets are able to remain liquid at temperatures lower than −18 °C (0 °F) because, to freeze, a few molecules in the droplet need to come together by chance in an arrangement similar to that of an ice lattice; the droplet then freezes around this "nucleus." Experiments show that this "homogeneous" nucleation of cloud droplets occurs only at temperatures lower than −35 °C (−31 °F). In warmer clouds an aerosol particle or "ice nucleus" must be present in (or in contact with) the droplet to act as a nucleus. The particles that make ice nuclei are very rare compared to the nuclei on which liquid cloud droplets form, and it is not understood what makes them efficient. Clays, desert dust and biological particles may be effective, although to what extent is unclear. Artificial nuclei include particles of silver iodide and dry ice, and these are used to stimulate precipitation in cloud seeding.

Once a droplet has frozen, it grows in the supersaturated environment – one where the air is saturated with respect to ice when the temperature is below the freezing point. The droplet then grows by deposition of water molecules from the air (vapor) onto the ice crystal surface. Because water droplets are far more numerous than ice crystals, the crystals are able to grow to hundreds of micrometers or millimeters in size at the expense of the water droplets. This process is known as the Wegener–Bergeron–Findeisen process. The corresponding depletion of water vapor causes the droplets to evaporate, meaning that the ice crystals grow at the droplets' expense.

These large crystals are an efficient source of precipitation, since they fall through the atmosphere due to their mass and may collide and stick together in clusters, or aggregates. These aggregates are usually the type of ice particle that falls to the ground. The exact details of the sticking mechanism remain controversial; possibilities include mechanical interlocking, sintering, electrostatic attraction, and the existence of a "sticky" liquid-like layer on the crystal surface.

The individual ice crystals often have hexagonal symmetry. Although the ice is clear, scattering of light by the crystal facets and hollows/imperfections means that the crystals often appear white due to diffuse reflection of the whole spectrum of light by the small ice particles. The shape of the snowflake is determined broadly by the temperature and humidity at which it is formed. Rarely, at a temperature of around −2 °C (28 °F), snowflakes can form with threefold symmetry – triangular snowflakes. The most common snow particles are visibly irregular, although near-perfect snowflakes may be more common in pictures because they are more visually appealing.
It is unlikely that any two snowflakes are alike, due to the estimated 10^19 (10 quintillion) water molecules that make up a typical snowflake, which grow at different rates and in different patterns depending on the changing temperature and humidity of the atmosphere the snowflake falls through on its way to the ground.

Source: http://en.wikipedia.org/wiki/Snowflake


Swarm of Robots Construct Collaboratively Like Termites

Cambridge, Mass. — On the plains of Namibia, millions of tiny termites are building a mound of soil—an 8-foot-tall “lung” for their underground nest. During a year of construction, many termites will live and die, wind and rain will erode the structure, and yet the colony’s life-sustaining project will continue.

Inspired by the termites’ resilience and collective intelligence, a team of computer scientists and engineers at the Harvard School of Engineering and Applied Sciences (SEAS) and the Wyss Institute for Biologically Inspired Engineering at Harvard University has created an autonomous robotic construction crew. The system needs no supervisor, no eye in the sky, and no communication: just simple robots—any number of robots—that cooperate by modifying their environment.

Harvard’s TERMES system demonstrates that collective systems of robots can build complex, three-dimensional structures without the need for any central command or prescribed roles. The results of the four-year project were presented this week at the AAAS 2014 Annual Meeting and published in the February 14 issue of Science.

The TERMES robots can build towers, castles, and pyramids out of foam bricks, autonomously building themselves staircases to reach the higher levels and adding bricks wherever they are needed. In the future, similar robots could lay sandbags in advance of a flood, or perform simple construction tasks on Mars.

“The key inspiration we took from termites is the idea that you can do something really complicated as a group, without a supervisor, and secondly that you can do it without everybody discussing explicitly what’s going on, but just by modifying the environment,” says principal investigator Radhika Nagpal, Fred Kavli Professor of Computer Science at Harvard SEAS. She is also a core faculty member at the Wyss Institute, where she co-leads the Bioinspired Robotics platform.

The TERMES robots can carry bricks, build staircases, and climb them to add bricks to a structure, following low-level rules to independently complete a construction project. Credit: Eliza Grinnell, Harvard SEAS

“We try to draw inspiration from the elegant ways in which Nature self organizes and self regulates,” said Wyss Institute Founding Director Don Ingber, Ph.D., M.D., “and this latest feat by our robotics team is clear evidence of the tremendous potential of bioinspired engineering, and its ability to spawn truly game-changing technologies.”

The Macrotermes michealseni termite mounds of southern Africa.

Most human construction projects today are performed by trained workers in a hierarchical organization, explains lead author Justin Werfel, a staff scientist in bioinspired robotics at the Wyss Institute and a former SEAS postdoctoral fellow.

“Normally, at the beginning, you have a blueprint and a detailed plan of how to execute it, and the foreman goes out and directs his crew, supervising them as they do it,” he says. “In insect colonies, it’s not as if the queen is giving them all individual instructions. Each termite doesn’t know what the others are doing or what the current overall state of the mound is.”

Instead, termites rely on a concept known as stigmergy, a kind of implicit communication: they observe each other's changes to the environment and act accordingly. That is what Nagpal's team has designed the robots to do, with impressive results. Supplementary videos published with the Science paper show the robots cooperating to build several kinds of structures and even recovering from unexpected changes to the structures during construction.

Each robot executes its building process in parallel with others, but without knowing who else is working at the same time. If one robot breaks, or has to leave, it does not affect the others. This also means that the same instructions can be executed by five robots or five hundred. The TERMES system is an important proof of concept for scalable, distributed artificial intelligence.

Nagpal’s Self-Organizing Systems Research Group specializes in distributed algorithms that allow very large groups of robots to act as a colony. Close connections between Harvard’s computer scientists, electrical engineers, and biologists are key to her team’s success. They created a swarm of friendly Kilobots a few years ago and are contributing artificial intelligence expertise to the ongoing RoboBees project, in collaboration with Harvard faculty members Robert J. Wood and Gu-Yeon Wei.

“When many agents get together—whether they’re termites, bees, or robots—often some interesting, higher-level behavior emerges that you wouldn’t predict from looking at the components by themselves,” says Werfel. “Broadly speaking, we’re interested in connecting what happens at the low level, with individual agent rules, to these emergent outcomes.”

Coauthor Kirstin Petersen, a graduate student at Harvard SEAS with a fellowship from the Wyss Institute, spearheaded the design and construction of the TERMES robots and bricks. These robots can perform all the necessary tasks—carrying blocks, climbing the structure, attaching the blocks, and so on—with only four simple types of sensors and three actuators.

“We co-designed robots and bricks in an effort to make the system as minimalist and reliable as possible,” Petersen says. “Not only does this help to make the system more robust; it also greatly simplifies the amount of computing required of the onboard processor. The idea is not just to reduce the number of small-scale errors, but more so to detect and correct them before they propagate into errors that can be fatal to the entire system.”

In contrast to the TERMES system, it is currently more common for robotic systems to depend on a central controller. These systems typically rely on an “eye in the sky” that can see the whole process or on all of the robots being able to talk to each other frequently. These approaches can improve group efficiency and help the system recover from problems quickly, but as the numbers of robots and the size of their territory increase, these systems become harder to operate. In dangerous or remote environments, a central controller presents a single failure point that could bring down the whole system.

“It may be that in the end you want something in between the centralized and the decentralized system, but we’ve proven the extreme end of the scale: that it could be just like the termites,” says Nagpal. “And from the termites’ point of view, it’s working out great.”

This research was supported by the Wyss Institute for Biologically Inspired Engineering at Harvard University.


The TERMES robots, developed at Harvard, act independently but collectively. Credit: Eliza Grinnell, Harvard SEAS.

What can a TERMES robot do?

- Move forward, backward, and turn in place
- Climb up or down a step the height of one brick
- Pick up a brick, carry it, and deposit it directly in front of itself
- Detect other bricks and robots in its immediate vicinity
- Keep track of its own location with respect to a “seed” brick

What instructions do the TERMES robots follow?

- Obey predetermined traffic rules
- Circle the growing structure to find the first, “seed” brick (for orientation)
- Climb onto the structure
- Obtain a brick
- Attach the brick at any vacant point that satisfies local geometric requirements
- Climb off the structure
- Repeat
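Taken together, the rule list above amounts to a small decentralized algorithm. A simplified, hypothetical rendering in Python (the names, the one-dimensional grid, and the exact geometry rule are illustrative, not the robots’ firmware):

```python
TARGET = [1, 2, 3, 2, 1]  # desired brick height per column (a toy "blueprint")

def legal_sites(heights):
    """Vacant sites satisfying the local geometric rule: the column is
    below its target height, and a new brick would never sit more than
    one brick above a neighbor, so the structure stays climbable."""
    sites = []
    for col, h in enumerate(heights):
        if h >= TARGET[col]:
            continue  # column already finished
        neighbors = [heights[n] for n in (col - 1, col + 1)
                     if 0 <= n < len(heights)]
        if all(h + 1 - n <= 1 for n in neighbors):
            sites.append(col)
    return sites

def build(robots=3):
    """Each pass, every robot independently obtains a brick, orients from
    the seed, finds a legal vacant site, attaches, and climbs off."""
    heights = [0] * len(TARGET)
    while heights != TARGET:
        for _robot in range(robots):
            sites = legal_sites(heights)   # purely local geometry check
            if sites:
                heights[sites[0]] += 1     # attach the brick, climb off
    return heights

print(build())  # [1, 2, 3, 2, 1]
```

Note that no robot is ever told where to place its brick: any vacant site passing the local check is acceptable, which is why the same rules work unchanged for any number of robots.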

Watch a video of the robots in action.

Source: http://www.da6nci.com/tag/harvard/
