Friday, June 16, 2017

Scientists Design Molecular System for Artificial Photosynthesis

System is designed to mimic key functions of the photosynthetic center in green plants to convert solar energy into chemical energy stored by hydrogen fuel

UPTON, NY—Photosynthesis in green plants converts solar energy to stored chemical energy by transforming atmospheric carbon dioxide and water into sugar molecules that fuel plant growth. Scientists have been trying to artificially replicate this energy conversion process, with the objective of producing environmentally friendly and sustainable fuels, such as hydrogen and methanol. But mimicking key functions of the photosynthetic center, where specialized biomolecules carry out photosynthesis, has proven challenging. Artificial photosynthesis requires designing a molecular system that can absorb light, transport and separate electrical charge, and catalyze fuel-producing reactions—all complicated processes that must operate synchronously to achieve high energy-conversion efficiency.

Etsuko Fujita and Gerald Manbeck of Brookhaven Lab's Chemistry Division carried out a series of experiments to understand why their molecular system with six light-absorbing centers (made of ruthenium metal ions bound to organic molecules) produced more hydrogen than the system with three such centers. This understanding is key to designing more efficient molecular complexes for converting solar energy into chemical energy—a conversion that green plants do naturally during photosynthesis.

Finding inspiration from nature
The leaves of green plants contain hundreds of pigment molecules (chlorophyll and others) that absorb light at particular wavelengths. When light of the proper wavelength strikes one of these molecules, the molecule enters an excited state. Energy from this excited state is shuttled along a chain of pigment molecules until it reaches a specific type of chlorophyll in the photosynthetic reaction center. Here, the energy is used to drive the charge-separation process required for photosynthesis to proceed. The electron “hole” left behind in the chlorophyll molecule is used for water-to-oxygen conversion. Hydrogen ions formed during the water-splitting process are eventually used for the reduction of carbon dioxide to glucose in the second stage of photosynthesis, known as the light-independent reaction.

Photosystems (PS) I and II are large protein complexes that contain light-absorbing pigment molecules needed for photosynthesis. PS II captures energy from sunlight to extract electrons from water molecules, splitting water into oxygen and hydrogen ions (H+) and producing chemical energy in the form of ATP. PS I uses those electrons and H+ to reduce NADP+ (an electron-carrier molecule) to NADPH. The chemical energy contained in ATP and NADPH is then used in the light-independent reaction of photosynthesis to convert carbon dioxide to sugars.

Now, chemists from the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory and Virginia Tech have designed two photocatalysts (materials that accelerate chemical reactions upon absorbing light) that incorporate individual components specialized for light absorption, charge separation, or catalysis into a single “supramolecule.” In both molecular systems, multiple light-harvesting centers made of ruthenium (Ru) metal ions are connected to a single catalytic center made of rhodium (Rh) metal ions through a bridging molecule that promotes electron transfer from the Ru centers to the Rh catalyst, where hydrogen is produced.

They compared the hydrogen-production performance and analyzed the physical properties of the supramolecules, as described in a paper published in the June 1 online edition of the Journal of the American Chemical Society, to understand why the photocatalyst with six as opposed to three Ru light absorbers produces more hydrogen and remains stable for a longer period of time.

“Developing efficient molecular systems for hydrogen production is difficult because processes are occurring at different rates,” said lead author Gerald Manbeck, a chemist in the artificial photosynthesis group at Brookhaven Lab. “Completing the catalytic turnover of hydrogen before the separated charges—the negatively charged light-excited electron and the positive ‘hole’ left behind after the excited molecule absorbs light energy—have a chance to recombine and wastefully produce heat is one of the major challenges.”

Another complication is that two electrons are needed to produce each hydrogen molecule. For catalysis to happen, the system must be able to hold the first electron long enough for the second to show up. “By building supramolecules with multiple light absorbers that may work independently, we are increasing the probability of using each electron productively and improving the molecules’ ability to function under low light conditions,” said Manbeck.
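
The advantage of extra light absorbers can be made concrete with a toy probability model. The sketch below (Python) is not from the paper; the arrival rate and the time the catalyst can hold its first electron are invented, illustrative numbers. It simply asks: if each Ru absorber independently delivers an electron after a random wait, how often does a second electron arrive before the first is lost?

```python
import random

def turnover_probability(n_absorbers, arrival_rate=1.0, hold_time=0.5,
                         trials=100_000):
    """Toy Monte Carlo: each absorber delivers one electron after an
    exponentially distributed wait. A hydrogen turnover succeeds only
    if a second electron arrives while the catalyst still holds the
    first (within hold_time). All rates are illustrative assumptions,
    not values from the study."""
    successes = 0
    for _ in range(trials):
        arrivals = sorted(random.expovariate(arrival_rate)
                          for _ in range(n_absorbers))
        if arrivals[1] - arrivals[0] <= hold_time:
            successes += 1
    return successes / trials

for n in (3, 6):
    print(f"{n} absorbers -> turnover probability {turnover_probability(n):.2f}")
```

Even this crude model shows the trend the researchers describe: doubling the number of absorbers sharply raises the odds that both electrons show up in time.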

Manbeck began making the supramolecules at Virginia Tech in 2012 with the late Karen Brewer, coauthor and his postdoctoral advisor. He discovered that the four-metal (tetrametallic) system with three Ru light-absorbing centers and one Rh catalytic center yielded only 40 molecules of hydrogen for every catalyst molecule and ceased functioning after about four hours. In comparison, the seven-metal (heptametallic) system with six Ru centers and one Rh center was more than seven times more efficient, cycling 300 times to produce hydrogen for 10 hours. This great disparity in efficiency and stability was puzzling because the supramolecules contain very similar components.
This depiction of the heptametallic system upon exposure to light shows light harvesting by the six Ru centers (red) and electron transfer to the Rh catalyst (black), where hydrogen is produced. Efficient electron transfer to Rh is essential for realizing high catalytic performance.

Manbeck joined Brookhaven in 2013 and has since carried out a series of experiments with coauthor Etsuko Fujita, leader of the artificial photosynthesis group, to understand the fundamental causes for the difference in performance.

“The ability to form the charge-separated state is a partial indicator of whether a supramolecule will be a good photocatalyst, but realizing efficient charge separation requires fine-tuning the energetics of each component,” said Fujita. “To promote catalysis, the Rh catalyst must be low enough in energy to accept the electrons from the Ru light absorbers when the absorbers are exposed to light.”

Through cyclic voltammetry, an electrochemical technique that shows the energy levels within a molecule, the scientists found that the Rh catalyst of the heptametallic system is slightly more electron-poor and thus more receptive to receiving electrons than its counterpart in the tetrametallic system. This result suggested that the charge transfer was favorable in the heptametallic but not the tetrametallic system.

They verified their hypothesis with a time-resolved technique called nanosecond transient absorption spectroscopy, in which a molecule is promoted to an excited state by an intense laser pulse and the decay of the excited state is measured over time. The resulting spectra revealed the presence of a Ru-to-Rh charge transfer in the heptametallic system only.
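
For readers unfamiliar with the technique, transient-absorption data are typically reduced to an excited-state lifetime by fitting an exponential decay to the signal recorded after the laser pulse. The sketch below is a generic illustration with synthetic data and an invented 100-nanosecond lifetime; it is not the analysis performed in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic transient-absorption trace: an excited-state signal that
# decays with a 100 ns lifetime, plus noise (illustrative numbers only).
t = np.linspace(0, 500, 200)  # time after the laser pulse, in ns
signal = 0.8 * np.exp(-t / 100.0) + np.random.normal(0, 0.01, t.size)

def decay(t, amplitude, tau):
    """Single-exponential model of excited-state decay."""
    return amplitude * np.exp(-t / tau)

params, _ = curve_fit(decay, t, signal, p0=(1.0, 50.0))
print(f"fitted excited-state lifetime: {params[1]:.1f} ns")
```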

“The data not only confirmed our hypothesis but also revealed that the excited-state charge separation occurs much more rapidly than we had imagined,” said Manbeck. “In fact, the charge migration happens faster than the time resolution of our instrument, and probably involves short-lived, high-energy excited states.” The researchers plan to seek a collaborator with faster instrumentation who can measure the exact rate of charge separation to help clarify the mechanism.

In a follow-up experiment, the scientists performed the transient absorption measurement under photocatalytic operating conditions, with a reagent used as the ultimate source of electrons to produce hydrogen (a scalable artificial photosynthesis of hydrogen fuel from water would require replacing the reagent with electrons released during water oxidation). The excited state generated by the laser pulse rapidly accepted an electron from the reagent. They discovered that the added electron resides on Rh in the heptametallic system only, further supporting the charge migration to Rh predicted by cyclic voltammetry.

“The high photocatalytic turnover of the heptametallic system and the principles governing charge separation that were uncovered in this work encourage further studies using multiple light-harvesting units linked to single catalytic sites,” said Manbeck.

This research is supported by DOE’s Office of Science.
Brookhaven National Laboratory is supported by the Office of Science of the U.S. Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

Related Links

ORIGINAL: Brookhaven National Lab
June 2, 2017
Contact: Ariana Tantillo, (631) 344-2347, or Peter Genzer, (631) 344-3174

Thursday, June 15, 2017

Scientists Hack a Human Cell and Reprogram It Like a Computer

GETTY IMAGES
CELLS ARE BASICALLY tiny computers: They send and receive inputs and output accordingly. If you chug a Frappuccino, your blood sugar spikes, and your pancreatic cells get the message. Output: more insulin.

But cellular computing is more than just a convenient metaphor. In the last couple of decades, biologists have been working to hack the cells’ algorithm in an effort to control their processes. They’ve upended nature’s role as life’s software engineer, incrementally editing a cell’s algorithm—its DNA—over generations. In a paper published today in Nature Biotechnology, researchers programmed human cells to obey 109 different sets of logical instructions. With further development, this could lead to cells capable of responding to specific directions or environmental cues in order to fight disease or manufacture important chemicals.
Large-scale design of robust genetic circuits with multiple inputs and outputs for mammalian cells
Benjamin H Weinberg, N T Hang Pham, Leidy D Caraballo, Thomas Lozanoski, Adrien Engel, Swapnil Bhatia & Wilson W Wong

Nature Biotechnology 35, 453–462 (2017) 
doi:10.1038/nbt.3805
Received 20 June 2016
Accepted 27 January 2017
Published online 27 March 2017

Engineered genetic circuits for mammalian cells often require extensive fine-tuning to perform as intended. We present a robust, general, scalable system, called 'Boolean logic and arithmetic through DNA excision' (BLADE), to engineer genetic circuits with multiple inputs and outputs in mammalian cells with minimal optimization. The reliability of BLADE arises from its reliance on recombinases under the control of a single promoter, which integrates circuit signals on a single transcriptional layer. We used BLADE to build 113 circuits in human embryonic kidney and Jurkat T cells and devised a quantitative, vector-proximity metric to evaluate their performance. Of 113 circuits analyzed, 109 functioned (96.5%) as intended without optimization. The circuits, which are available through Addgene, include a 3-input, two-output full adder; a 6-input, one-output Boolean logic look-up table; circuits with small-molecule-inducible control; and circuits that incorporate CRISPR–Cas9 to regulate endogenous genes. BLADE enables execution of sophisticated cellular computation in mammalian cells, with applications in cell and tissue engineering.
Their cells execute these instructions by using proteins called DNA recombinases, which cut, reshuffle, or fuse segments of DNA. These proteins recognize and target specific positions on a DNA strand—and the researchers figured out how to trigger their activity. Depending on whether the recombinase gets triggered, the cell may or may not produce the protein encoded in the DNA segment.

A cell could be programmed, for example, with a so-called NOT logic gate. This is one of the simplest logic instructions: Do NOT do something whenever you receive the trigger. This study’s authors used this function to create cells that light up on command. Biologist Wilson Wong of Boston University, who led the research, refers to these engineered cells as “genetic circuits.”

Here’s how it worked: Whenever the cell did contain a specific DNA recombinase protein, it would NOT produce a blue fluorescent protein that made it light up. But when the cell did not contain the enzyme, its instruction was DO light up. The cell could also follow much more complicated instructions, like lighting up under longer sets of conditions.
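
In software terms, the recombinase-controlled NOT gate boils down to a one-line truth table. The Python sketch below models only the logic described above—recombinase present means the light-encoding DNA is excised—and is not the BLADE implementation itself.

```python
def not_gate(recombinase_present: bool) -> bool:
    """Recombinase NOT gate: the cell lights up only when the
    triggering recombinase is absent. In the real circuit the
    recombinase cuts out the DNA segment encoding the fluorescent
    protein; this boolean model captures just the logic."""
    dna_segment_intact = not recombinase_present  # excised if present
    return dna_segment_intact                     # intact DNA -> light

for trigger in (False, True):
    print(f"recombinase present: {trigger} -> light: {not_gate(trigger)}")
```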

Wong says that you could use these lit up cells to diagnose diseases, by triggering them with proteins associated with a particular disease. If the cells light up after you mix them with a patient’s blood sample, that means the patient has the disease. This would be much cheaper than current methods that require expensive machinery to analyze the blood sample.

Now, don’t get distracted by the shiny lights quite yet. The real point here is that the cells understand and execute directions correctly. “It’s like prototyping electronics,” says biologist Kate Adamala of the University of Minnesota, who wasn’t involved in the research. As every Maker knows, the first step to building complex Arduino circuits is teaching an LED to blink on command.

Pharmaceutical companies are teaching immune cells to be better cancer scouts using similar technology. Cancer cells have biological fingerprints, such as a specific type of protein. Juno Therapeutics, a Seattle-based company, engineers immune cells that can detect these proteins and target cancer cells specifically. If you put logic gates in those immune cells, you could program the immune cells to destroy the cancer cells in a more sophisticated and controlled way.

Programmable cells have other potential applications. Many companies use genetically modified yeast cells to produce useful chemicals. Ginkgo Bioworks, a Boston-based company, uses these yeast cells to produce fragrances, which it has sold to perfume companies. This yeast eats sugar just like brewer’s yeast, but instead of producing alcohol, it burps aromatic molecules. The yeast isn’t perfect yet: Cells tend to mutate as they divide, and after many divisions, they stop working well. Narendra Maheshri, a scientist at Ginkgo, says that you could program the yeast to self-destruct when it stops functioning properly, before it spoils a batch of high-grade cologne.

Wong’s group wasn’t the first to make biological logic gates, but they’re the first to build so many with consistent success. Of the 113 circuits they built, 109 worked. “In my personal experience building genetic circuits, you’d be lucky if they worked 25 percent of the time,” Wong says. Now that they’ve gotten these basic genetic circuits to work, the next step is to make the logic gates work in different types of cells.

But it won’t be easy. Cells are incredibly complicated—and DNA doesn’t have straightforward “on” and “off” switches like an electronic circuit. In Wong’s engineered cells, you “turn off” the production of a certain protein by altering the segment of DNA that encodes its instructions. It doesn’t always work, because nature might have encoded some instructions in duplicate. In other words: It’s hard to debug 3 billion years of evolution.

ORIGINAL: Wired
By SOPHIA CHEN.
03.27.17

Monday, June 12, 2017

Researchers take major step forward in Artificial Intelligence

The long-standing dream of using Artificial Intelligence (AI) to build an artificial brain has taken a significant step forward, as a team led by Professor Newton Howard from the University of Oxford has successfully prototyped a nanoscale, AI-powered artificial brain in the form factor of a high-bandwidth neural implant.

Professor Newton Howard (pictured above and below) holding parts of the implant device
In collaboration with INTENT LTD, Qualcomm Corporation, Intel Corporation, Georgetown University and the Brain Sciences Foundation, Professor Howard’s Oxford Computational Neuroscience Lab in the Nuffield Department of Surgical Sciences has developed the proprietary algorithms and the optoelectronics required for the device. Testing in rodents is on track to begin soon.

This achievement caps over a decade of research by Professor Howard at MIT’s Synthetic Intelligence Lab and the University of Oxford, work that resulted in several issued US patents on the technologies and algorithms that power the device:
  • the Fundamental Code Unit of the Brain (FCU)
  • the Brain Code (BC)
  • the Biological Co-Processor (BCP)
Together, these form the latest foundations for any eventual merger between biological and machine intelligence. Ni2o (pronounced “Nitoo”) is the entity that Professor Howard licensed to further develop, market and promote these technologies.
The Biological Co-Processor is unique in that it uses advanced nanotechnology, optogenetics and deep machine learning to intelligently map internal events, such as neural spiking activity, to external physiological, linguistic and behavioral expression. The implant contains over a million carbon nanotubes, each of which is 10,000 times smaller than the width of a human hair. Carbon nanotubes provide a natural, high-bandwidth interface, as they conduct heat, light and electricity, instantaneously updating the neural lace. They adhere to neuronal constructs and even promote neural growth. Qualcomm team leader Rudy Beraha commented, 'Although the prototype unit shown today is tethered to external power, a commercial Brain Co-Processor unit will be wireless and inductively powered, enabling it to be administered with a minimally invasive procedure.'


The device uses a combination of methods to write to the brain, including 
  • pulsed electricity
  • light and 
  • various molecules that stimulate or inhibit the activation of specific neuronal groups
These can be targeted to stimulate a desired response, such as releasing chemicals in patients suffering from a neurological disorder or imbalance. The BCP is designed as a fully integrated system to use the brain’s own internal systems and chemistries to pattern and mimic healthy brain behavior, an approach that stands in stark contrast to the current state of the art, which is to simply apply mild electrocution to problematic regions of the brain. 

Therapeutic uses
The Biological Co-Processor promises to provide relief for millions of patients suffering from neurological, psychiatric and psychological disorders as well as degenerative diseases. Initial therapeutic uses will likely be for patients with traumatic brain injuries and neurodegenerative disorders, such as Alzheimer’s, as the BCP will strengthen the weak, shortened connections responsible for lost memories and skills. Once implanted, the device provides a closed-loop, self-learning platform able to both determine and administer the perfect balance of pharmaceutical, electroceutical, genomeceutical and optoceutical therapies.

Dr Richard Wirt, a Senior Fellow at Intel Corporation and Co-Founder of INTENT, the company partnering with Ni2o to bring the BCP to market, commented on the device, saying, 'In the immediate timeframe, this device will have many benefits for researchers, as it could be used to replicate an entire brain image, synchronously mapping internal and external expressions of human response. Over the long term, the potential therapeutic benefits are unlimited.'
'The brain controls all organs and systems in the body, so the cure to nearly every disease resides there.' - Professor Newton Howard
Rather than simply disrupting neural circuits, the machine learning systems within the BCP are designed to interpret these signals and intelligently read and write to the surrounding neurons. These capabilities could be used to reestablish any degenerative or trauma-induced damage and perhaps write these memories and skills to other, healthier areas of the brain. 

One day, these capabilities could also be used in healthy patients to radically augment human ability and proactively improve health. As Professor Howard points out: 'The brain controls all organs and systems in the body, so the cure to nearly every disease resides there.' Speaking more broadly, Professor Howard sees the merging of man with machine as our inevitable destiny, claiming it to be 'the next step on the blueprint that the author of it all built into our natural architecture.'

With the resurgence of neuroscience and AI enhancing machine learning, there has been renewed interest in brain implants. This past March, Elon Musk and Bryan Johnson independently announced that they are focusing on, and investing in, the brain/computer interface domain.

When asked about these new competitors, Professor Howard said he is happy to see all these new startups and established names getting into the field - he only wonders what took them so long, stating: 'I would like to see us all working together, as we have already established a mathematical foundation and software framework to solve so many of the challenges they will be facing. We could all get there faster if we could work together - after all, the patient is the priority.'


ORIGINAL: NDS Oxford
2 June 2017 

Sunday, June 11, 2017

The Big, Hot, Expensive Problem Facing Cities Now

Cities will lose billions, and the planet will suffer, but designers could help.
[Photo: Max Ostrozhinskiy/Unsplash]
Certain climate change scenarios lend themselves to the imagination. Our brains can easily understand the risks; they’re almost filmic. Storms intensify. Cities heat up. Drought and disease explode. Coastlines are abandoned. Comparatively, financial losses can seem like an afterthought. But as economists piece together a more complex understanding of how climate change will impact the world, they’re raising the alarm.

The latest warning comes from economists from Mexico, the U.K., and the Netherlands, who show that most estimates of the cost of climate change are missing something important: the fact that global warming will be much worse in cities thanks to the urban heat island effect. Not only will cities be much hotter, they’ll pay for it, losing as much as 11% of their GDP in the most extreme cases. And overall, this “local” warming will make global warming worse. Cities need to act now to increase cool roofs, cool asphalt, and other design changes that can dampen the effect, they argue.

In the 1800s, a British scientist named Luke Howard observed that the temperature in London was consistently higher than in nearby areas. Today that phenomenon is called the urban heat island effect: Asphalt, dense architecture, energy usage, and a lack of green space all conspire to make cities much warmer than the areas around them, a difference that cascades to dramatically alter the weather patterns around cities in general. The effect also compounds climate change in cities, which will see hotter temperatures than the rest of the world experiences.

[Photo: Vladimir Kudinov/Unsplash]
In the journal Nature Climate Change, the economists Francisco Estrada, W.J. Wouter Botzen, and Richard S.J. Tol write that this “local” form of climate change will deeply depress the urban economy–and dramatically “amplify” global climate change overall. “Any hard-won victories over climate change on a global scale could be wiped out by the effects of uncontrolled urban heat islands,” Tol said in a University of Sussex statement. The impact is so dramatic that the economic losses from climate change are almost three times worse when the urban heat island effect is included in the model than under conventional models that don’t consider the effect.

The trio ran an analysis of the 1,692 largest cities in the world under several different future greenhouse gas concentration models, ultimately finding that the hardest-hit cities could lose almost 11% of their GDP by 2100 under the most extreme scenario, with average losses at about 5.6%. For a city like New York, which had a GDP of $1.33 trillion in 2012, an 11% loss could mean roughly $146 billion. For comparison’s sake, that’s almost double the city budget Mayor de Blasio proposed this year, or roughly what China spends on defense every year. The urban heat island effect would make any attempts to mitigate climate change on a global scale (say, through international treaties or large-scale efforts) way less effective. In short, if cities don’t start mitigating the urban heat island effect, they’ll be in big trouble economically very soon, and the rest of the world will suffer, too.
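
The New York figure is straightforward arithmetic on the numbers cited above, as the short sketch shows (the GDP value and loss percentages come from the article; everything else is just multiplication):

```python
nyc_gdp_2012 = 1.33e12   # New York City GDP in 2012, in US dollars
worst_case_loss = 0.11   # ~11% GDP loss under the most extreme scenario
average_loss = 0.056     # ~5.6% average loss across the 1,692 cities

print(f"worst case: ${nyc_gdp_2012 * worst_case_loss / 1e9:.0f} billion")
print(f"average:    ${nyc_gdp_2012 * average_loss / 1e9:.0f} billion")
```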

While that’s bad news for just about everyone involved, the economists point out a silver lining: Cities are more nimble and flexible to enact policy than hulking national or international governments. They modeled four different levels of policy that cities could make, and found that mitigating the urban heat island effect on a local level could have major benefits on a global scale. “And even when global efforts fail, we show that local policies can still have a positive impact, making them at least a useful insurance for bad climate outcomes on the international stage,” Tol added.

[Photo: Maxvis/iStock]
That includes green roofs and cool roofs, which reflect solar radiation with reflective paint or material, as well as cool pavements, which are made with reflective aggregate to bounce back the sun’s rays. (Expanding green spaces and increasing tree plantings are important, too, they add.)

Some cities are already enacting policy in line with their recommendations: Los Angeles made cool roofs a requirement in 2013, and just last month New York City released guidelines for resilient architecture that include cool roofs and cool pavement, as well as other heat-mitigation designs like bioswales (landscape elements that remove silt and pollution from surface runoff water, consisting of a gently sloped drainage course filled with vegetation, compost, and/or riprap). Meanwhile, many other cities are replacing parking lots with green space and parks. Architects in Phoenix are incorporating heat island-busting canopies into their designs.
Photo: Co.Design
It’s further proof that the battle for the planet will be fought in cities–and that architecture, infrastructure, and urban design will be important weapons in that fight.

ABOUT THE AUTHOR
Kelsey Campbell-Dollaghan is Co.Design's deputy editor

ORIGINAL: FastCoDesign
05.31.17 

Saturday, June 3, 2017

We Could Build an Artificial Brain Right Now

Large-scale brainlike systems are possible with existing technology—if we’re willing to spend the money

Photo: Dan Saelinger



Brain-inspired computing is having a moment. Artificial neural network algorithms like deep learning, which are very loosely based on the way the human brain operates, now allow digital computers to perform such extraordinary feats as translating language, hunting for subtle patterns in huge amounts of data, and beating the best human players at Go.

But even as engineers continue to push this mighty computing strategy, the energy efficiency of digital computing is fast approaching its limits. Our data centers and supercomputers already draw megawatts—some 2 percent of the electricity consumed in the United States goes to data centers alone. The human brain, by contrast, runs quite well on about 20 watts, which represents the power produced by just a fraction of the food a person eats each day. If we want to keep improving computing, we will need our computers to become more like our brains.

Hence the recent focus on neuromorphic technology, which promises to move computing beyond simple neural networks and toward circuits that operate more like the brain’s neurons and synapses do. The development of such physical brainlike circuitry is actually pretty far along. Work at my lab and others around the world over the past 35 years has led to artificial neural components like synapses and dendrites that respond to and produce electrical signals much like the real thing.

So, what would it take to integrate these building blocks into a brain-scale computer? 
In 2013, Bo Marr, a former graduate student of mine at Georgia Tech, and I looked at the best engineering and neuroscience knowledge of the time and concluded that it should be possible to build a silicon version of the human cerebral cortex with the transistor technology then in production. What’s more, the resulting machine would take up less than a cubic meter of space and consume less than 100 watts, not too far from the human brain.

That is not to say creating such a computer would be easy. The system we envisioned would still require a few billion dollars to design and build, including some significant packaging innovations to make it compact. There is also the question of how we would program and train the computer. Neuromorphic researchers are still struggling to understand how to make thousands of artificial neurons work together and how to translate brainlike activity into useful engineering applications.

Still, the fact that we can envision such a system means that we may not be far off from smaller-scale chips that could be used in portable and wearable electronics. These gadgets demand low power consumption, and so a highly energy-efficient neuromorphic chip—even if it takes on only a subset of computational tasks, such as signal processing—could be revolutionary. Existing capabilities, like speech recognition, could be extended to handle noisy environments. We could even imagine future smartphones conducting real-time language translation between you and the person you’re talking to. Think of it this way: In the 40 years since the first signal-processing integrated circuits, Moore’s Law has improved energy efficiency by roughly a factor of 1,000. The most brainlike neuromorphic chips could dwarf such improvements, potentially driving down power consumption by another factor of 100 million. That would bring computations that would otherwise need a data center to the palm of your hand.

The ultimate brainlike machine will be one in which we build analogues for all the essential functional components of the brain:
  • the synapses, which connect neurons and allow them to receive and respond to signals; 
  • the dendrites, which combine and perform local computations on those incoming signals; and 
  • the core, or soma, region of each neuron, which integrates inputs from the dendrites and transmits its output on the axon.
Simple versions of all these basic components have already been implemented in silicon. The starting point for such work is the same metal-oxide-semiconductor field-effect transistor, or MOSFET, that is used by the billions to build the logic circuitry in modern digital processors.

These devices have a lot in common with neurons. Neurons operate using voltage-controlled barriers, and their electrical and chemical activity depends primarily on channels in which ions move between the interior and exterior of the cell—a smooth, analog process that involves a steady buildup or decline instead of a simple on-off operation.

MOSFETs are also voltage controlled and operate by the movement of individual units of charge. And when MOSFETs are operated in the “subthreshold” mode, below the voltage threshold used to digitally switch between on and off, the amount of current flowing through the device is very small—less than a thousandth of what is seen in the typical switching of digital logic gates.
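
The textbook weak-inversion model makes the analogy concrete: below threshold, drain current grows exponentially with gate voltage, much as ion-channel currents do. The sketch below uses generic, illustrative device parameters (the thermal voltage kT/q is about 25.9 mV at room temperature); real devices differ.

```python
import numpy as np

def subthreshold_current(v_gs, i_0=1e-12, n=1.5, v_t=0.0259):
    """Textbook weak-inversion MOSFET model: drain current rises
    exponentially with gate-source voltage. i_0 (leakage scale) and
    n (slope factor) are illustrative, not measured; v_t is the
    thermal voltage kT/q at room temperature."""
    return i_0 * np.exp(v_gs / (n * v_t))

for v in (0.0, 0.1, 0.2, 0.3):
    print(f"Vgs = {v:.1f} V -> Id = {subthreshold_current(v):.3e} A")
```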

The notion that subthreshold transistor physics could be used to build brainlike circuitry originated with Carver Mead of Caltech, who helped revolutionize the field of very-large-scale circuit design in the 1970s. Mead pointed out that chip designers fail to take advantage of a lot of interesting behavior—and thus information—when they use transistors only for digital logic. The process, he wrote in 1990, essentially involves “taking all the beautiful physics that is built into...transistors, mashing it down to a 1 or 0, and then painfully building it back up with AND and OR gates to reinvent the multiply.” A more “physical” or “physics-based” computer could execute more computations per unit energy than its digital counterpart. Mead predicted such a computer would take up significantly less space as well.

In the intervening years, neuromorphic engineers have made all the basic building blocks of the brain out of silicon with a great deal of biological fidelity. The neuron’s dendrite, axon, and soma components can all be fabricated from standard transistors and other circuit elements. In 2005, for example, Ethan Farquhar, then a Ph.D. candidate, and I created a neuron circuit using a set of six MOSFETs and a handful of capacitors. Our model generated electrical pulses that very closely matched those in the soma part of a squid neuron, a long-standing experimental subject. What’s more, our circuit accomplished this feat with similar current levels and energy consumption to those in the squid’s brain. If we had instead used analog circuits to model the equations neuroscientists have developed to describe that behavior, we’d need on the order of 10 times as many transistors. Performing those calculations with a digital computer would require even more space.
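
To see what such a silicon soma is emulating, consider the simplest common caricature of a spiking neuron, the leaky integrate-and-fire model. The Python sketch below is a minimal illustration with invented parameters; it is not the six-MOSFET circuit or the squid-neuron equations.

```python
# Minimal leaky integrate-and-fire soma (all parameters invented):
# the membrane voltage leaks toward rest, integrates input current,
# and emits a spike whenever it crosses threshold.
dt, tau = 0.1, 10.0                               # time step and membrane time constant, ms
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0   # millivolts
input_current = 20.0                              # constant drive, arbitrary units

v, spike_times = v_rest, []
for step in range(1000):                          # simulate 100 ms
    v += (dt / tau) * (v_rest - v + input_current)  # leaky integration
    if v >= v_thresh:                             # threshold crossing -> spike
        spike_times.append(step * dt)
        v = v_reset                               # reset the membrane after the spike

print(f"{len(spike_times)} spikes in 100 ms")
```
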
Illustration: James Provost. Synapses and Soma: The floating-gate transistor [top left], which can store differing amounts of charge, can be used to build a “crossbar” array of artificial synapses [bottom left]. Electronic versions of other neuron components, such as the soma region [right], can be made from standard transistors and other circuit components.
Emulating synapses is a little trickier. A device that behaves like a synapse must have the ability to remember what state it is in, respond in a particular way to an incoming signal, and adapt its response over time.

There are a number of potential approaches to building synapses. The most mature one by far is the single-transistor learning synapse (STLS), a device that my colleagues and I at Caltech worked on in the 1990s while I was a graduate student studying under Mead.

We first presented the STLS in 1994, and it became an important tool for engineers who were building modern analog circuitry, such as physical neural networks. In neural networks, each node in the network has a weight associated with it, and those weights determine how data from different nodes are combined. The STLS was the first device that could hold a variety of different weights and be reprogrammed on the fly. The device is also nonvolatile, which means that it remembers its state even when not in use—a capability that significantly reduces how much energy it needs.

The STLS is a type of floating-gate transistor, a device that is used to build memory cells in flash memory. In an ordinary MOSFET, a gate controls the flow of electricity through a current-carrying channel. A floating-gate transistor has a second gate that sits between this electrical gate and the channel. This floating gate is not directly connected to ground or any other component. Thanks to that electrical isolation, which is enhanced by high-quality silicon-insulator interfaces, charges remain in the floating gate for a long time. The floating gate can take on many different amounts of charge and so have many different levels of electrical response, an essential requisite for creating an artificial synapse capable of varying its response to stimuli.

My colleagues and I used the STLS to demonstrate the first crossbar network, a computational model currently popular with nanodevice researchers. In this two-dimensional array, devices sit at the intersection of input lines running north-south and output lines running east-west. This configuration is useful because it lets you program the connection strength of each “synapse” individually, without disturbing the other elements in the array.
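
Functionally, a crossbar computes a matrix-vector product: each synapse multiplies the signal on its input line by its stored weight, and each output line sums those contributions. Here is a minimal numpy sketch of that abstraction, with invented weights standing in for programmed floating-gate charges:

```python
import numpy as np

# weights[i, j] is the programmable "synaptic" strength where input
# line j crosses output line i; invented values for illustration.
weights = np.array([[0.2, 0.8, 0.1],
                    [0.5, 0.3, 0.9]])
inputs = np.array([1.0, 0.0, 0.5])   # signals on the input lines

outputs = weights @ inputs           # each output line sums its column taps
print(outputs)

# Reprogramming one synapse leaves every other element untouched,
# which is the property that makes the crossbar layout so useful:
weights[0, 1] = 0.05
```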

Thanks in part to a recent Defense Advanced Research Projects Agency program called SyNAPSE, the neuromorphic engineering field has seen a surge of research into artificial synapses built from nanodevices such as
  • memristors
  • resistive RAM, and
  • phase-change memories (as well as floating-gate devices).
But it will be hard for these new artificial synapses to improve on our two-decade-old floating-gate arrays. Memristors and other novel memories come with programming challenges; some have device architectures that make it difficult to target a single specific device in a crossbar array. Others need a dedicated transistor in order to be programmed, adding significantly to their footprint. Because floating-gate memory is programmable over a wide range of values, it can be more easily fine-tuned to compensate for manufacturing variation from device to device than can many nanodevices. A number of neuromorphic research groups that tried integrating nanodevices into their designs have recently come around to using floating-gate devices.

So how will we put all these brainlike components together? 
In the human brain, of course, neurons and synapses are intermingled. Neuromorphic chip designers must take a more integrated approach as well, with all neural components on the same chip, tightly mixed together. This is not the case in many neuromorphic labs today: To make research projects more manageable, different building blocks may be placed in different areas. Synapses, for example, may be relegated to an off-chip array. Connections may be routed through another chip called a field-programmable gate array, or FPGA.

But as we scale up neuromorphic systems, we’ll need to take care that we don’t replicate the arrangement in today’s computers, which lose a significant amount of energy driving bits back and forth between logic, memory, and storage. Today, a computer can easily consume 10 times the energy to move the data needed for a multiple-accumulate operation—a common signal-processing computation—as it does to perform the calculation.

The brain, by contrast, minimizes the energy cost of communication by keeping operations highly local. The memory elements of the brain, such as synaptic strengths, are mixed in with the neural components that integrate signals. And the brain’s “wires”—the dendrites and axons that extend from neurons to transmit, respectively, incoming signals and outgoing pulses—are generally fairly short relative to the size of the brain, so they don’t require large amounts of energy to sustain a signal. From anatomical data, we know that more than 90 percent of neurons connect with only their nearest 1,000 or so neighbors.

Another big question for the builders of brainlike chips and computers is the algorithms we will run on them. Even a loosely brain-inspired system can have a big advantage over digital systems. In 2004, for example, my group used floating-gate devices to perform multiplications for signal processing with just 1/1,000 the energy and 1/100 the area of a comparable digital system. In the years since, other researchers and my group have successfully demonstrated neuromorphic approaches to many other kinds of signal-processing calculations.

But the brain is still 100,000 times as efficient as the systems in these demonstrations. That’s because while our current neuromorphic technology takes advantage of the neuronlike physics of transistors, it doesn’t consider the algorithms the brain uses to perform its operations.

Today, we are just beginning to discover these physical algorithms—that is, the processes that will allow brainlike chips to operate with more brainlike efficiency. Four years ago, my research group used silicon somas, synapses, and dendrites to perform a word-spotting algorithm that identifies words in a speech waveform. This physical algorithm exhibited a thousandfold improvement in energy efficiency over predicted analog signal processing. Eventually, by lowering the amount of voltage supplied to the chips and using smaller transistors, researchers should be able to build chips that parallel the brain in efficiency for a range of computations.

When I started in neuromorphic research 30 years ago, everyone believed tremendous opportunities would arise from designing systems that are more like the brain. And indeed, entire industries are now being built around brain-inspired AI and deep learning, with applications that promise to transform—among other things—our mobile devices, our financial institutions, and how we interact in public spaces.

And yet, these applications rely only slightly on what we know about how the brain actually works. The next 30 years will undoubtedly see the incorporation of more such knowledge. We already have much of the basic hardware we need to accomplish this neuroscience-to-computing translation. But we must develop a better understanding of how that hardware should behave—and what computational schemes will yield the greatest real-world benefits.

Consider this a call to action. We have come pretty far with a very loose model of how the brain works. But neuroscience could lead to far more sophisticated brainlike computers. And what greater feat could there be than using our own brains to learn how to build new ones?

This article appears in the June 2017 print issue as “A Road Map for the Artificial Brain.”

About the Author

Jennifer Hasler is a professor of electrical and computer engineering at the Georgia Institute of Technology.

ORIGINAL: IEEE Spectrum
By JENNIFER HASLER 
Posted 1 Jun 2017 | 19:00 GMT

Tuesday, May 16, 2017

New Bioprinter Makes It Easier to Fabricate 3D Flesh and Bone

Photo: Danny Cabrera
The ideal 3D bioprinter, says tissue engineering expert Y. Shrike Zhang, would resemble a breadmaker: “You’d have a few buttons on top, and you’d press a button to choose heart tissue or liver tissue.” Then Zhang would walk away from the machine while it laid down complex layers of cells and other materials. 

The technology isn’t quite there yet. But the new BioBot 2 printer seems a step in that direction. The tabletop device includes a suite of new features designed to give users easy control over a powerful device, including 
  • automated calibration; 
  • six print heads to extrude six different bioinks; 
  • placement of materials with 1-micrometer precision on the x, y, and z axes; and 
  • a user-friendly software interface that manages the printing process from beginning to end.
BioBots cofounder and CEO Danny Cabrera says the BioBot 2’s features are a result of collaboration with researchers who work in tissue engineering.

“We’ve been working closely with scientists over the past year and a half to understand what they need to push this work forward,” he says. “What we found is that they needed more than just a bioprinter—and we had to do more than just develop a new robot.”

The company’s cloud-based software makes it easy for users to upload their printing parameters, which the system translates into protocols for the machine. After the tissue is printed, the system can use embedded cameras and computer-vision software to run basic analyses. For example, it can count the number of living versus dead cells in a printed tissue, or measure the length of axons in printed neurons. “This platform lets them measure how different printing parameters, like pressure or cellular resolution, affect the biology of the tissue,” Cabrera says.
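
As a rough idea of what such an analysis involves, live/dead counting can be as simple as thresholding a fluorescence channel and counting connected blobs. The sketch below is a generic stand-in, not BioBots’ actual software; the image is random data in place of a camera frame.

```python
import numpy as np
from scipy import ndimage

def count_cells(fluorescence_image, threshold):
    """Threshold the stained channel, then count connected
    components; each component is treated as one cell."""
    mask = fluorescence_image > threshold
    _, n_cells = ndimage.label(mask)
    return n_cells

rng = np.random.default_rng(0)
frame = rng.random((64, 64))   # placeholder for a real camera frame
print(count_cells(frame, threshold=0.99))
```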

The BioBot 1 hit the market in 2015 and sells for US $10,000. The company is now taking orders for the $40,000 BioBot 2, and plans to ship later this year. 
Photo: Danny Cabrera. BioBots will soon begin selling a kit with all the materials necessary to print soft tissue, such as cartilage.
Each of the BioBot 2’s print heads can cool its bioink to 4 degrees Celsius or heat it to 200 degrees Celsius. The printbed is also temperature-controlled, and it’s equipped with visible and ultraviolet lights that trigger cross-linking in materials to make printed forms more solid.

Cabrera says the temperature controls make it easier to print collagen, a principal component of connective tissue and bone, because it cross-links at colder temperatures. “A lot of people were hacking their bioprinters to get collagen to print,” Cabrera says. “Some were printing in the refrigerator.”

While some researchers won’t be interested in using the six print heads to make tissue composed of six different materials, Cabrera says the design also allows researchers to multiplex experiments. For example, if researchers are experimenting with the concentration of cells in a bioink, this setup allows them to simultaneously test six different versions. “That can save weeks if you have to wait for your cells to grow after each experiment,” Cabrera says.

And the machine can deposit materials not only on a petri dish, but also into a cell-culture plate with many small wells. With a 96-well plate, “you could have 96 little experiments,” says Cabrera.
Photo: Danny Cabrera. Another kit will include the materials needed to print bone and other hard tissue.
One long-term goal of bioprinting is to give doctors the ability to press a button and print out a sheet of skin for a burn patient, or a precisely shaped bone graft for someone who’s had a disfiguring accident. Such things have been achieved in the lab, but they’re far from gaining regulatory approval. An even longer-term goal is to give doctors the power to print out entire replacement organs, thus ending the shortage of organs available for transplant, but that’s still in the realm of sci-fi. 

While we wait for those applications, however, 3D bioprinters are already finding plenty of uses in biomedical research. 

Zhang experimented with an early beta version of the BioBot 1 while working in the Harvard Medical School lab of Ali Khademhosseini. He used bioprinters to create organ-on-a-chip structures, which mimic the essential nature of organs like hearts, livers, and blood vessels with layers of the appropriate cell types laid down in careful patterns. These small chips can be used for drug screening and basic medical research. With the BioBot beta, Zhang made a “thrombosis-on-a-chip” where blood clots formed inside miniature blood vessels. 

Now an instructor of medicine and an associate bioengineer at Brigham and Women’s Hospital in Boston, Zhang says he’s intrigued by the BioBot 2. Its ability to print with multiple materials is enticing, he says, because he wants to reproduce complex tissues composed of different cell types. But he hasn’t decided yet whether he’ll order one. Like so much in science, “it depends on funding,” he says. 
Photo: EnvisionTec

The BioBot 2 is on the cheaper end of the bioprinter market.
The top-notch machines used by researchers who want nanometer-scale precision typically cost around $200,000—like the large 3D-Bioplotter from EnvisionTec. This machine was used in research announced just today, in which Northwestern University scientists 3D-printed a structure that resembled a mouse ovary. When they seeded it with immature egg cells and implanted it into a mouse, the animal gave birth to live pups. 
Photo: Cellink

But there are a few other bioprinters that compete with the BioBot machines on price. Most notably, a Swedish company called Cellink sells three desktop-sized bioprinters that range in price from $10,000 to $40,000.

And a San Francisco startup called Aether just recently began sending beta units to researchers for testing and feedback; the company has promised to begin selling its Aether 1 this year for only $9000.
Photo: Aether
The biggest source of competition may not be other companies, but bioengineers’ innate propensity for tinkering. “We’ll often get some basic sort of printer and make our own print heads and bioinks,” Zhang says.

But for biology researchers who don’t have an engineering background, Zhang says, the BioBot 2 would provide a powerful boost in abilities. It would be almost like giving a kitchen-phobic individual the sudden capacity to bake a perfect loaf of whole wheat bread. 


ORIGINAL: IEEE Spectrum
Posted 16 May 2017

Friday, April 21, 2017

Neil deGrasse Tyson Says His New Video May Contain His "Most Important Words" Yet

Neil deGrasse Tyson/Facebook

If you haven't already, please take 4 minutes out of your day to watch this.

Astrophysicist Neil deGrasse Tyson has released a video urging Americans to change how they relate to science.

Tyson posted the four-and-a-half minute video on his page, alongside this written note:

Dear fb Universe,

I offer this four-minute video on "Science in America" containing what may be the most important words I have ever spoken.

As always, but especially these days, keep looking up.

Neil deGrasse Tyson

Tyson's message in the video centres on what he sees as a worrisome decline in scientific literacy in the US.

"Science is a fundamental part of the country that we are," he says in the video. "But in this, the 21st century, when it comes time to make decisions about science, it seems that people have lost the ability to judge what is true and what is not."

That shift, he says, is a "recipe for the complete dismantling of our informed democracy".

Tyson's speech is interspersed with clips of political debates and news. The video cuts to a clip of Vice President Mike Pence, then a congressman, speaking on the floor of the House of Representatives.

"Let us demand that educators around America teach evolution not as fact, but as theory," Pence says in the clip.

The role of science, Tyson says, is to provide the factual grounding for politics. The role of politics is to decide what to do about those facts.

You can watch the video in full below.


ORIGINAL: ScienceAlert
RAFI LETZTER, BUSINESS INSIDER
21 APR 2017

Sunday, April 16, 2017

With this new system, scientists never have to write a grant application again

Johan Bollen (left) and Marten Scheffer (right) say scientists should give each other money instead of writing and reviewing grants. Ingrid van de Leemput
AMSTERDAM—Almost every scientist agrees: Applying for research funding is a drag. Writing a good proposal can take months, and the chances of getting funded are often slim. Funding agencies, meanwhile, spend more and more time and money reviewing growing stacks of applications.

That’s why two researchers are proposing a radically different system that would do away with applications and reviews; instead, scientists would just give each other money. “Self-organized fund allocation” (SOFA), as it’s called, was developed by computer scientist Johan Bollen at Indiana University in Bloomington. When he first published about the idea in 2014, many people were skeptical. But interest appears to be growing, and thanks to the work of an enthusiastic advocate, ecologist Marten Scheffer of Wageningen University in the Netherlands, the Dutch parliament adopted a motion last year asking the country’s main funding agency, the Netherlands Organization for Scientific Research (NWO), to set up a SOFA pilot project.

Competition for funding has become too intense, especially for young scientists, Scheffer and Bollen say, and the current peer-review system is inefficient. It’s also unfair, they argue, because a few scientists get lots of grants—Scheffer is one of them—whereas many others get little or nothing. But when Scheffer explained his idea at an NWO workshop about “application pressure” here last week, the agency didn’t appear sold yet.

The duo says the numbers speak for themselves. At the U.S. National Institutes of Health, the overall success rate for grant applications has dropped from 30% in 2003 to 19.1% in 2016. In the latest round of European Research Council Starting Grants, the rate was a paltry 11.3%. At NWO, the success rate for grants for young scientists has dropped to 14%. A 2013 study estimated that writing and reviewing applications for €40 million worth of these grants costs €9.5 million annually.

In Bollen’s system, scientists no longer have to apply; instead, they all receive an equal share of the funding budget annually—some €30,000 in the Netherlands, and $100,000 in the United States—but they have to donate a fixed percentage to other scientists whose work they respect and find important. “Our system is not based on committees’ judgments, but on the wisdom of the crowd,” Scheffer told the meeting.

Bollen and his colleagues have tested their idea in computer simulations. If scientists allocated 50% of their money to colleagues they cite in their papers, research funds would roughly be distributed the way funding agencies currently do, they showed in a paper last year—but at much lower overhead costs.
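
The redistribution rule is easy to prototype. The toy simulation below follows the description above—everyone receives the same base grant and passes on a fixed fraction to colleagues—with a random donation network standing in for the citation-derived preferences Bollen used; it is an illustration, not his published model.

```python
import numpy as np

def sofa_round(funds, donation_matrix, base_grant, donate_frac=0.5):
    """One SOFA round: every scientist receives an equal base grant,
    then donates a fixed fraction of their funds. donation_matrix[i, j]
    is the share of scientist i's donation that goes to scientist j
    (rows sum to 1). Parameters are illustrative."""
    funds = funds + base_grant
    donations = donate_frac * funds
    return funds - donations + donation_matrix.T @ donations

rng = np.random.default_rng(1)
n_scientists = 5
prefs = rng.random((n_scientists, n_scientists))
np.fill_diagonal(prefs, 0.0)                  # no donations to yourself
prefs /= prefs.sum(axis=1, keepdims=True)     # normalize each row

funds = np.zeros(n_scientists)
for _ in range(10):                           # ten annual rounds
    funds = sofa_round(funds, prefs, base_grant=100.0)
print(np.round(funds, 1))                     # funding concentrates per the network
```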

Not everybody is convinced. At the meeting, some worried that scientists might give money mostly to their friends. Scheffer said an algorithm would prevent that, for instance by banning donations to people you have published with, but he acknowledged it would be a challenge in small research communities. SOFA might also result in a mismatch between what scientists need and what their colleagues donate, and a competition for donations could lead to a time-consuming and costly circus, comparable to an election campaign.

The way to find out, Scheffer and Bollen say, is a real-world test, and they say the Netherlands, a small country with short lines of communication between scientists, politicians, and funding agencies, is a good place for one. Last year, Scheffer convinced Eppo Bruins, a member of the Dutch House of Representatives, to submit a motion calling for a pilot program at NWO, which the parliament approved in June 2016. The money could be taken from a €150 million NWO pot currently distributed among consortia of innovative Dutch scientists, Bruins suggested.

But NWO is not obliged to carry out the proposal, and so far has shown little enthusiasm. “NWO is willing to explore together with scientists and other stakeholders how to improve allocation rates, but is still considering practicality and support” for SOFA, a spokesperson tells ScienceInsider. At last week’s meeting, NWO President Stan Gielen said the funds Bruins has in mind are distributed by NWO but are earmarked by the Ministry of Education, Culture and Science, which would have to give permission. Gielen added that any experiment should not come at the expense of existing funding.

Scheffer says he’s not giving up. It’s not a risky experiment, he says: “The money would not be wasted, after all, but just be given to other scientists.” But he says he understands why NWO is not thrilled: If applied universally, the novel system would make the agency redundant. Perhaps it’s telling, Scheffer says, that he has not been invited to an international conference on applications and peer review that NWO is organizing in June.
DOI: 10.1126/science.aal1055

ORIGINAL: Science Magazine
Apr. 13, 2017 , 3:00 PM