MIT researchers have built a low-power chip specialized for automatic speech recognition. A cellphone running speech-recognition software might require about 1 watt of power; the new chip requires 100 times less power (between 0.2 and 10 milliwatts, depending on the number of words it has to recognize).
That could translate to power savings of 90 to 99 percent, making voice control practical for wearables (especially watches, earbuds, and glasses, where speech recognition is essential) and for other simple "internet of things" (IoT) devices, including ones that have to harvest energy from their environments or go months between battery charges, says Anantha Chandrakasan, the Vannevar Bush Professor of Electrical Engineering and Computer Science at MIT, whose group developed the new chip.
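To see why milliwatt-scale power matters for wearables, here is a back-of-envelope battery-life comparison. The 300 mWh battery capacity is a hypothetical smartwatch-class figure chosen for illustration, not a number from the article:

```python
# Hypothetical battery capacity for illustration (not from the article).
BATTERY_MWH = 300  # smartwatch-class battery, milliwatt-hours

def runtime_hours(power_mw: float) -> float:
    """Hours of continuous operation at a given power draw (mW)."""
    return BATTERY_MWH / power_mw

print(runtime_hours(1000))  # software ASR at ~1 W: ~0.3 h
print(runtime_hours(10))    # new chip, large vocabulary: ~30 h
print(runtime_hours(0.2))   # new chip, small vocabulary: ~1500 h
```

Under these assumptions, the chip's 0.2 to 10 mW range is the difference between a battery lasting hours and lasting weeks of continuous listening.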
A voice-recognition network is too big to fit in a chip’s onboard memory, which is a problem because going off-chip for data is much more energy intensive than retrieving it from local stores. So the MIT researchers’ design concentrates on minimizing the amount of data that the chip has to retrieve from off-chip memory.
The new chip was presented last week at the International Solid-State Circuits Conference.
The research was funded through the Qmulus Project, a joint venture between MIT and Quanta Computer, and the chip was prototyped through the Taiwan Semiconductor Manufacturing Company’s University Shuttle Program.
Abstract of A Scalable Speech Recognizer with Deep-Neural-Network Acoustic Models and Voice-Activated Power Gating
The applications of speech interfaces, commonly used for search and personal assistants, are diversifying to include wearables, appliances, and robots. Hardware-accelerated automatic speech recognition (ASR) is needed for scenarios that are constrained by power, system complexity, or latency. Furthermore, a wakeup mechanism, such as voice activity detection (VAD), is needed to power gate the ASR and downstream system. This paper describes IC designs for ASR and VAD that improve on the accuracy, programmability, and scalability of previous work.
SpaceX has applied to the FCC to launch 11,943 satellites into low-Earth orbit, providing “ubiquitous high-bandwidth (up to 1Gbps per user, once fully deployed) broadband services for consumers and businesses in the U.S. and globally,” according to FCC applications.
Recent meetings with the FCC suggest that the plan now looks like “an increasingly feasible reality — particularly with 5G technologies just a few years away, promising new devices and new demand for data,” Verge reports.
Such a service will be particularly useful to rural areas, which have limited access to internet bandwidth.
Low-Earth orbit (at up to 2,000 kilometers, or 1,200 mi) ensures lower latency (communication delay between Earth and satellite) — making the service usable for voice communications via Skype, for example — compared to geosynchronous orbit (at 35,786 kilometers, or 22,000 miles), offered by Dish Network and other satellite ISP services.* The downside: it takes a lot more satellites to provide the coverage.
Boeing, Softbank-backed OneWeb (which hopes to “connect every school to the Internet by 2022”), Telesat, and others** have proposed similar services, possibly bringing the total number of satellites to about 20,000 in low and mid Earth orbits in the 2020s, estimates Next Big Future.
* “SpaceX expects its latencies to be between 25 and 35 ms, similar to the latencies measured for wired Internet services. Current satellite ISPs have latencies of 600 ms or more, according to FCC measurements,” notes Ars Technica.
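The latency gap follows directly from orbital altitude. A quick sketch of the minimum round-trip propagation delay, assuming a satellite directly overhead and a signal at the speed of light (real latencies add routing and processing overhead, which is why SpaceX quotes 25 to 35 ms rather than the physical floor):

```python
C_KM_S = 299_792.458  # speed of light in vacuum, km/s

def round_trip_ms(altitude_km: float) -> float:
    """Physical minimum round-trip delay (ms) to a satellite overhead."""
    return 2 * altitude_km / C_KM_S * 1000

print(f"LEO (1,200 km):  {round_trip_ms(1_200):.0f} ms")   # ~8 ms
print(f"GEO (35,786 km): {round_trip_ms(35_786):.0f} ms")  # ~239 ms
```

Even before any network overhead, geosynchronous orbit imposes roughly a quarter-second of round-trip delay, which is why interactive uses like voice calls favor low-Earth orbit.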
** Audacy, Karousel, Kepler Communications, LeoSat, O3b, Space Norway, Theia Holdings, and ViaSat, according to Space News. The ITU [the international counterpart of the FCC] has set rules preventing new constellations from interfering with established ground and satellite systems operating in the same frequencies. OneWeb, for example, has said it will basically switch off power as its satellites cross the equator so as not to disturb transmissions from geostationary-orbit satellites directly above that use Ku-band frequencies.
If you’re overweight and find it challenging to exercise regularly, now there’s good news: A less strenuous form of exercise known as whole-body vibration (WBV) can mimic the muscle and bone health benefits of regular exercise — at least in mice — according to a new study published in the Endocrine Society’s journal Endocrinology.
Lack of exercise is contributing to the obesity and diabetes epidemics, according to the researchers. These disorders can also increase the risk of bone fractures. Physical activity can help to decrease this risk and reduce the negative metabolic effects of these conditions.
But the alternative, WBV, can be experienced while sitting, standing, or even lying down on a machine with a vibrating platform. When the machine vibrates, it transmits energy to your body, and your muscles contract and relax multiple times during each second.
“Our study is the first to show that whole-body vibration may be just as effective as exercise at combating some of the negative consequences of obesity and diabetes,” said the study’s first author, Meghan E. McGee-Lawrence, Ph.D., of Augusta University in Georgia. “While WBV did not fully address the defects in bone mass of the obese mice in our study, it did increase global bone formation, suggesting longer-term treatments could hold promise for preventing bone loss as well.”
Just as effective as a treadmill
Glucose and insulin tolerance testing revealed that the genetically obese and diabetic mice showed similar metabolic benefits from both WBV and exercising on a treadmill. Obese mice gained less weight after exercise or WBV than obese mice in the sedentary group, although they remained heavier than normal mice. Exercise and WBV also enhanced muscle mass and insulin sensitivity in the genetically obese mice.
The findings suggest that WBV may be a useful supplemental therapy to combat metabolic dysfunction in individuals with morbid obesity. “These results are encouraging,” McGee-Lawrence said. “However, because our study was conducted in mice, this idea needs to be rigorously tested in humans to see if the results would be applicable to people.”
The authors included researchers at the National Institutes of Health’s National Institute on Aging (NIA). Funding was provided by the American Diabetes Association, the National Institutes of Health’s National Institute of Diabetes and Digestive and Kidney Diseases, and the National Institute on Aging.
* To conduct the study, researchers examined two groups of 5-week-old male mice. One group consisted of normal mice, while the other group was genetically unresponsive to the hormone leptin, which promotes feelings of fullness after eating. Mice from each group were assigned to sedentary, WBV or treadmill exercise conditions.
After a week-long period to grow used to the exercise equipment, the groups of mice began a 12-week exercise program. The mice in the WBV group underwent 20 minutes of WBV at a frequency of 32 Hz with 0.5 g acceleration each day. Mice in the treadmill group walked for 45 minutes daily at a slight incline. For comparison, the third group did not exercise. Mice were weighed weekly during the study.

Abstract of Whole-body vibration mimics the metabolic effects of exercise in male leptin receptor deficient mice
Whole-body vibration has gained attention as a potential exercise mimetic, but direct comparisons with the metabolic effects of exercise are scarce. To determine whether whole-body vibration recapitulates the metabolic and osteogenic effects of physical activity, we exposed male wildtype (Wt) and leptin receptor deficient (db/db) mice to daily treadmill exercise or whole-body vibration for three months. Body weights were analyzed and compared with Wt and db/db mice that remained sedentary. Glucose and insulin tolerance testing revealed comparable attenuation of hyperglycemia and insulin resistance in db/db mice following treadmill exercise or whole-body vibration. Both interventions reduced body weight in db/db mice and normalized muscle fiber diameter. Treadmill exercise and whole-body vibration also attenuated adipocyte hypertrophy in visceral adipose tissue and reduced hepatic lipid content in db/db mice. Although the effects of leptin receptor deficiency on cortical bone structure were not eliminated by either intervention, exercise and whole-body vibration increased circulating levels of osteocalcin in db/db mice. In the context of increased serum osteocalcin, the modest effects of TE and WBV on bone geometry, mineralization, and biomechanics may reflect subtle increases in osteoblast activity in multiple areas of the skeleton. Taken together, these observations indicate that whole-body vibration recapitulates the effects of exercise on metabolism in type 2 diabetes.
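As a sense of scale for how gentle the 32 Hz, 0.5 g vibration protocol above is, the peak platform displacement can be derived from the standard relation for sinusoidal motion, a = A(2πf)², a textbook formula rather than anything stated in the study:

```python
import math

G = 9.81  # m/s^2 per g

def peak_displacement_mm(freq_hz: float, accel_g: float) -> float:
    """Peak displacement of sinusoidal vibration: A = a / (2*pi*f)^2."""
    return accel_g * G / (2 * math.pi * freq_hz) ** 2 * 1000

print(f"{peak_displacement_mm(32, 0.5):.2f} mm")  # ~0.12 mm
```

At 32 Hz and half a g, the platform moves only about a tenth of a millimeter, which underlines why WBV is far less strenuous than treadmill exercise.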
In a study published Tuesday Mar. 14 in the open-access journal eLife, researchers at Imperial College London found that applying transcranial alternating current stimulation (TACS) through the scalp helped to synchronize brain waves in different areas of the brain, enabling subjects to perform better on tasks involving short-term working memory.
The hope is that the approach could one day be used to bypass damaged areas of the brain and relay signals in people with traumatic brain injury, stroke, or epilepsy.
“What we observed is that people performed better when the two waves had the same rhythm and at the same time,” said Ines Ribeiro Violante, PhD, a neuroscientist in the Department of Medicine at Imperial, who led the research. The current gave a performance boost to the memory processes used when people try to remember names at a party, telephone numbers, or even a short grocery list.

Keeping the beat
Violante and team targeted two brain regions — the middle frontal gyrus and the inferior parietal lobule — which are known to be involved in working memory.
Ten volunteers were asked to carry out a set of memory tasks of increasing difficulty while receiving theta frequency stimulation to the two brain regions at slightly different times (unsynchronised), at the same time (synchronous), or only a quick burst (sham) to give the impression of receiving full treatment.
In the working memory experiments, participants watched numbers flash up on a screen and had to report whether each number matched the previous one or, in the harder trials, the one shown two numbers earlier.
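The task described is the classic n-back paradigm. A minimal sketch of its matching rule, where a trial is a "match" when the current number equals the one shown n positions earlier (n = 1 for the easier trials, n = 2 for the harder ones):

```python
def n_back_targets(stimuli: list[int], n: int) -> list[bool]:
    """True at position i when stimuli[i] equals stimuli[i - n]."""
    return [i >= n and stimuli[i] == stimuli[i - n]
            for i in range(len(stimuli))]

seq = [3, 7, 3, 3, 7, 3]
print(n_back_targets(seq, 1))  # [False, False, False, True, False, False]
print(n_back_targets(seq, 2))  # [False, False, True, False, False, True]
```

The 2-back condition is harder precisely because the participant must keep two intervening items in mind at once, which is where the synchronized stimulation showed its effect.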
Results showed that when the brain regions were stimulated in sync, reaction times on the memory tasks improved, especially on the harder of the tasks requiring volunteers to hold two strings of numbers in their minds.
“The classic behavior is to be slower on the harder cognitive task, but people performed faster with synchronized stimulation, and as fast as on the simpler task,” said Violante.
Previous studies have shown that brain stimulation with electromagnetic waves or electrical current can affect brain activity, but the field has remained controversial due to a lack of reproducibility. Using functional MRI to image the brain, however, enabled the team to show changes in activity occurring during stimulation.
“The results show that when the stimulation was in sync, there was an increase in activity in those regions involved in the task. When it was out of sync the opposite effect was seen,” Violante explained.
“The next step is to see if the brain stimulation works in patients with brain injury, in combination with brain imaging, where patients have lesions which impair long-range communication in their brains,” said Violante. “The hope is that it could eventually be used for these patients, or even those who have suffered a stroke or who have epilepsy.”
The researchers also plan to combine brain stimulation with cognitive training to see if it restores lost skills.
The research was funded by the Wellcome Trust.

Abstract of Externally induced frontoparietal synchronization modulates network dynamics and enhances working memory performance
Cognitive functions such as working memory (WM) are emergent properties of large-scale network interactions. Synchronisation of oscillatory activity might contribute to WM by enabling the coordination of long-range processes. However, causal evidence for the way oscillatory activity shapes network dynamics and behavior in humans is limited. Here we applied transcranial alternating current stimulation (tACS) to exogenously modulate oscillatory activity in a right frontoparietal network that supports WM. Externally induced synchronization improved performance when cognitive demands were high. Simultaneously collected fMRI data reveals tACS effects dependent on the relative phase of the stimulation and the internal cognitive processing state. Specifically, synchronous tACS during the verbal WM task increased parietal activity, which correlated with behavioral performance. Furthermore, functional connectivity results indicate that the relative phase of frontoparietal stimulation influences information flow within the WM network. Overall, our findings demonstrate a link between behavioral performance in a demanding WM task and large-scale brain synchronization.
A team of engineers at the University of California San Diego and La Jolla-based startup Nanovision Biosciences Inc. have developed the first nanoengineered retinal prosthesis — a step closer to restoring the ability of neurons in the retina to respond to light.
The technology could help tens of millions of people worldwide suffering from neurodegenerative diseases that affect eyesight, including macular degeneration, retinitis pigmentosa, and loss of vision due to diabetes.
Despite advances in the development of retinal prostheses over the past two decades, the performance of devices currently on the market to help the blind regain functional vision is still severely limited — well under the acuity threshold of 20/200 that defines legal blindness.
The new prosthesis relies on two new technologies: implanted arrays of photosensitive nanowires and a wireless power/data system.
Implanted arrays of silicon nanowires
The new prosthesis uses arrays of nanowires that simultaneously sense light and electrically stimulate the retina. The nanowires provide higher resolution than anything achieved by other devices — closer to the dense spacing of photoreceptors in the human retina, according to the researchers.*
Existing retinal prostheses require a vision sensor (such as a camera) outside of the eye to capture a visual scene and then transform it into signals to sequentially stimulate retinal neurons (in a matrix). Instead, the silicon nanowires mimic the retina’s light-sensing cones and rods to directly stimulate retinal cells. The nanowires are bundled into a grid of electrodes, directly activated by light.
This direct, local translation of incident light into electrical stimulation makes for a much simpler — and scalable — architecture for a prosthesis, according to the researchers.
Wireless power and telemetry system
For the new device, power is delivered wirelessly, from outside the body to the implant, through an inductive powering telemetry system. Data to the nanowires is sent over the same wireless link at record speed and energy efficiency. The telemetry system is capable of transmitting both power and data over a single pair of inductive coils, one emitting from outside the body, and another on the receiving side in the eye.**
Three of the researchers have co-founded La Jolla-based Nanovision Biosciences, a partner in this study, to further develop and translate the technology into clinical use, with the goal of restoring functional vision in patients with severe retinal degeneration. Animal tests with the device are in progress, and clinical trials will follow.***
The research was described in a recent issue of the Journal of Neural Engineering. It was funded by Nanovision Biosciences, Qualcomm Inc., and the Institute of Engineering in Medicine and the Clinical and Translational Research Institute at UC San Diego.
* For visual acuity of 20/20, an electrode pixel size of 5 μm (micrometers) is required; 20/200 visual acuity requires 50 μm. The minimum number of electrodes required for pattern recognition or reading text is estimated to be about 600. The new nanoengineered silicon nanowire electrodes are 1 μm in diameter, and for the experiment, 2500 silicon nanowires were used.
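The footnote's figures suggest that the required electrode pixel size scales linearly with the Snellen denominator: 5 μm for 20/20 and 50 μm for 20/200. The helper below simply interpolates between those two stated data points; intermediate values are an extrapolation, not figures from the paper:

```python
def required_pixel_um(snellen_denominator: float) -> float:
    """Electrode pixel size (um) implied by linear scaling from the
    footnote's two anchor points: 20/20 -> 5 um, 20/200 -> 50 um."""
    return 5.0 * snellen_denominator / 20.0

print(required_pixel_um(20))   # 5.0 um for 20/20 vision
print(required_pixel_um(200))  # 50.0 um for 20/200 (legal blindness)
print(required_pixel_um(80))   # 20.0 um for an intermediate 20/80
```

On this scaling, the 1 μm nanowire electrodes sit well below even the 20/20 requirement, and the 2500-wire array comfortably exceeds the estimated 600-electrode minimum for reading text.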
** The device is highly energy efficient because it minimizes energy losses in wireless power and data transmission and in the stimulation process, recycling electrostatic energy circulating within the inductive resonant tank, and between capacitance on the electrodes and the resonant tank. Up to 90 percent of the energy transmitted is actually delivered and used for stimulation, which means less RF wireless power emitting radiation in the transmission, and less heating of the surrounding tissue from dissipated power.
*** For proof-of-concept, the researchers inserted the wirelessly powered nanowire array beneath a transgenic rat retina with rhodopsin P23H knock-in retinal degeneration. The degenerated retina interfaced in vitro with a microelectrode array for recording extracellular neural action potentials (electrical “spikes” from neural activity).
Abstract of Towards high-resolution retinal prostheses with direct optical addressing and inductive telemetry
Objective. Despite considerable advances in retinal prostheses over the last two decades, the resolution of restored vision has remained severely limited, well below the 20/200 acuity threshold of blindness. Towards drastic improvements in spatial resolution, we present a scalable architecture for retinal prostheses in which each stimulation electrode is directly activated by incident light and powered by a common voltage pulse transferred over a single wireless inductive link. Approach. The hybrid optical addressability and electronic powering scheme provides separate spatial and temporal control over stimulation, and further provides optoelectronic gain for substantially lower light intensity thresholds than other optically addressed retinal prostheses using passive microphotodiode arrays. The architecture permits the use of high-density electrode arrays with ultra-high photosensitive silicon nanowires, obviating the need for excessive wiring and high-throughput data telemetry. Instead, the single inductive link drives the entire array of electrodes through two wires and provides external control over waveform parameters for common voltage stimulation. Main results. A complete system comprising inductive telemetry link, stimulation pulse demodulator, charge-balancing series capacitor, and nanowire-based electrode device is integrated and validated ex vivo on rat retina tissue. Significance. Measurements demonstrate control over retinal neural activity both by light and electrical bias, validating the feasibility of the proposed architecture and its system components as an important first step towards a high-resolution optically addressed retinal prosthesis.
Warner Bros. is in the early stages of developing a relaunch of The Matrix, The Hollywood Reporter revealed today (March 14, Pi day, appropriately).
The Matrix, the iconic 1999 sci-fi movie, “is considered one of the most original films in cinematic history,” says THR.
The film “depicts a dystopian future in which reality as perceived by most humans is actually a simulated reality called ‘the Matrix,’ created by sentient machines to subdue the human population, while their bodies’ heat and electrical activity are used as an energy source,” Wikipedia notes. “Computer programmer ‘Neo’ learns this truth and is drawn into a rebellion against the machines, which involves other people who have been freed from the ‘dream world.’”
Keanu Reeves said he would be open to returning for another installment of the franchise if the Wachowskis were involved, according to THR (they are not currently involved).
A team of researchers at MIT and in Japan and Germany has found a way to greatly reduce the effects of metal fatigue by incorporating a laminated nanostructure into the steel. The layered structuring gives the steel a kind of bone-like resilience, allowing it to deform without allowing the spread of microcracks that can lead to fatigue failure.
Metal fatigue can lead to abrupt and sometimes catastrophic failures in parts that undergo repeated loading, or stress. It’s a major cause of failure in structural components of everything from aircraft and spacecraft to bridges and power plants. As a result, such structures are typically built with wide safety margins that add to costs.
The findings are described in a paper in the journal Science by C. Cem Tasan, the Thomas B. King Career Development Professor of Metallurgy at MIT; Meimei Wang, a postdoc in his group; and six others at Kyushu University in Japan and the Max Planck Institute in Germany.
“Loads on structural components tend to be cyclic,” Tasan says. For example, an airplane goes through repeated pressurization changes during every flight, and components of many devices repeatedly expand and contract due to heating and cooling cycles. While such effects typically are far below the kinds of loads that would cause metals to change shape permanently or fail immediately, they can cause the formation of microcracks, which over repeated cycles of stress spread a bit further and wider, ultimately creating enough of a weak area that the whole piece can fracture suddenly.
Tasan and his team were inspired by the way nature addresses the same kind of problem, making bones lightweight but very resistant to crack propagation. A major factor in bone’s fracture resistance is its hierarchical mechanical structure, with different patterns of voids and connections at many different length scales, and a lattice-like internal structure that combines strength with light weight.
So the team investigated microstructures that would mimic this in a metal alloy, developing a kind of steel that has three key characteristics, which combine to limit the spread of cracks that do form:
- A layered structure that tends to keep cracks from spreading beyond the layers where they start.
- Microstructural phases with different degrees of hardness, which complement each other, so when a crack starts to form, “every time it wants to propagate further, it needs to follow an energy-intensive path,” and the result is a great reduction in such spreading.
- A metastable composition — tiny areas within it are poised between different stable states, some more flexible than others, and their phase transitions can help absorb the energy of spreading cracks and even lead the cracks to close back up.
Next step, Tasan says, is to scale up the material to quantities that could be commercialized, and define which applications would benefit most.
The research was supported by the European Research Council and MIT’s Department of Materials Science and Engineering.

Abstract of Bone-like crack resistance in hierarchical metastable nanolaminate steels
Fatigue failures create enormous risks for all engineered structures, as well as for human lives, motivating large safety factors in design and, thus, inefficient use of resources. Inspired by the excellent fracture toughness of bone, we explored the fatigue resistance in metastability-assisted multiphase steels. We show here that when steel microstructures are hierarchical and laminated, similar to the substructure of bone, superior crack resistance can be realized. Our results reveal that tuning the interface structure, distribution, and phase stability to simultaneously activate multiple micromechanisms that resist crack propagation is key for the observed leap in mechanical response. The exceptional properties enabled by this strategy provide guidance for all fatigue-resistant alloy design efforts.
Using extremely short pulses of terahertz (THz) radiation instead of electrical currents could lead to future computers that run 10 to 100,000 times faster than today’s state-of-the-art electronics, according to an international team of researchers writing in the journal Nature Photonics.
In a conventional computer, electrons moving through a semiconductor occasionally run into other electrons, releasing energy in the form of heat and slowing them down. With the proposed “lightwave electronics” approach, electrons could be guided by ultrafast THz pulses (the part of the electromagnetic spectrum between microwaves and infrared light). That means the travel time can be so short that the electrons would be statistically unlikely to hit anything, according to senior author Rupert Huber, a professor of physics at the University of Regensburg who led the experiment.
In the experiment, the researchers shined THz pulses into a crystal of the semiconductor gallium selenide.* These pulses were ultra-short (less than 100 femtoseconds, or 100 quadrillionths of a second). Each pulse popped electrons in the semiconductor into a higher energy level — which meant that they were free to move around.
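A quick sanity check on these time scales: at the 33 THz centre frequency quoted in the researchers' footnote below, a sub-100-femtosecond pulse contains only a handful of field oscillations, which is what makes the driving so abrupt:

```python
def period_fs(freq_thz: float) -> float:
    """One oscillation period in femtoseconds: 1/(f * 1e12 Hz) = 1000/f fs."""
    return 1e3 / freq_thz

p = period_fs(33)
print(f"one cycle at 33 THz: {p:.1f} fs")   # ~30.3 fs
print(f"cycles in a 100 fs pulse: {100 / p:.1f}")  # ~3.3
```

With only about three field cycles per pulse, each half cycle of the driving field acts almost like an individual kick to the electrons, consistent with the few-femtosecond emission described next.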
When the electrons emitted light as they came down from the higher energy level, they emitted much shorter pulses than the electromagnetic radiation going in — just a few femtoseconds long — quick enough to read and write information to electrons at ultra-high speed.
But first, researchers need to be able to control electrons in a semiconductor. This work takes a step toward this by mobilizing groups of electrons inside a semiconductor crystal.
Because femtosecond pulses are fast enough to trap an electron between being put into an excited state and coming down from that state, they can potentially also be used for quantum computations, using electrons in excited states as qubits. The researchers managed to launch one electron simultaneously via two excitation pathways, which is not classically possible.
An electron is small enough that it behaves like a wave as well as a particle, and when it is in an excited state, its wavelength changes. Because the electron was in two excited states at once, those two waves interfered with one another and left a fingerprint in the femtosecond pulse that the electron emitted.
The research is funded by the European Research Council and the German Research Foundation.

* “We generated high harmonics by irradiating a 40-μm-thick crystal of gallium selenide with intense, multi-THz pulses. These pulses were obtained by difference frequency mixing of two phase-correlated near-infrared pulse trains from a dual optical parametric amplifier pumped by a titanium sapphire amplifier. … The centre frequency was tunable and set to 33 THz in the experiments.” — F. Langer et al./Nature Photonics
Abstract of Symmetry-controlled temporal structure of high-harmonic carrier fields from a bulk crystal
High-harmonic (HH) generation in crystalline solids marks an exciting development, with potential applications in high-efficiency attosecond sources, all-optical bandstructure reconstruction and quasiparticle collisions. Although the spectral and temporal shape of the HH intensity has been described microscopically, the properties of the underlying HH carrier wave have remained elusive. Here, we analyse the train of HH waveforms generated in a crystalline solid by consecutive half cycles of the same driving pulse. Extending the concept of frequency combs to optical clock rates, we show how the polarization and carrier-envelope phase (CEP) of HH pulses can be controlled by the crystal symmetry. For certain crystal directions, we can separate two orthogonally polarized HH combs mutually offset by the driving frequency to form a comb of even and odd harmonic orders. The corresponding CEP of successive pulses is constant or offset by π, depending on the polarization. In the context of a quantum description of solids, we identify novel capabilities for polarization- and phase-shaping of HH waveforms that cannot be accessed with gaseous sources.
University of Texas at Dallas researchers have created an atomic force microscope (AFM) on a chip, dramatically shrinking the size — and, hopefully, the price — of a microscope used to characterize material properties down to molecular dimensions.
“A standard atomic force microscope is a large, bulky instrument, with multiple control loops, electronics and amplifiers,” said Dr. Reza Moheimani, professor of mechanical engineering at UT Dallas. “We have managed to miniaturize all of the electromechanical components down onto a single small chip.”
Moheimani and his colleagues describe their prototype device in this month’s issue of the IEEE Journal of Microelectromechanical Systems.
An atomic force microscope (AFM) is a scientific tool that is used to create detailed three-dimensional images of the surfaces of materials, down to the nanometer scale — roughly on the scale of individual molecules.
“An AFM is a microscope that ‘sees’ a surface kind of the way a visually impaired person might, by touching. You can get a resolution that is well beyond what an optical microscope can achieve,” explained Moheimani, who holds the James Von Ehr Distinguished Chair in Science and Technology in the Erik Jonsson School of Engineering and Computer Science.
The MEMS version
The UT Dallas team created its prototype on-chip AFM using a microelectromechanical systems (MEMS) approach.
“A classic example of MEMS technology are the accelerometers and gyroscopes found in smartphones,” said Anthony Fowler, PhD, a research scientist in Moheimani’s Laboratory for Dynamics and Control of Nanosystems and one of the article’s co-authors. “These used to be big, expensive, mechanical devices, but using MEMS technology, accelerometers have shrunk down onto a single chip, which can be manufactured for just a few dollars apiece.”
The MEMS-based AFM is about 1 square centimeter in size, or a little smaller than a dime. It is attached to a small printed circuit board that contains circuitry, sensors, and other miniaturized components that control the movement and other aspects of the device.
Because conventional AFMs require lasers and other large components to operate, their use can be limited. They’re also expensive. “An educational version can cost about $30,000 or $40,000, and a laboratory-level AFM can run $500,000 or more,” Moheimani said. “Our MEMS approach to AFM design has the potential to significantly reduce the complexity and cost of the instrument.
“One of the attractive aspects about MEMS is that you can mass-produce them, building hundreds or thousands of them in one shot, so the price of each chip would only be a few dollars. As a result, you might be able to offer the whole miniature AFM system for a few thousand dollars.”
A reduced size and price tag also could expand the AFMs’ utility beyond current scientific applications.
“For example, the semiconductor industry might benefit from these small devices, in particular companies that manufacture the silicon wafers from which computer chips are made,” Moheimani said. “With our technology, you might have an array of AFMs to characterize the wafer’s surface to find micro-faults before the product is shipped out.”
The lab prototype is a first-generation device, Moheimani said, and the group is already working on ways to improve and streamline the fabrication of the device.
Moheimani’s research has been funded by UT Dallas startup funds, the Von Ehr Distinguished Chair, and the Defense Advanced Research Projects Agency.
Abstract of On-Chip Dynamic Mode Atomic Force Microscopy: A Silicon-on-Insulator MEMS Approach
The atomic force microscope (AFM) is an invaluable scientific tool; however, its conventional implementation as a relatively costly macroscale system is a barrier to its more widespread use. A microelectromechanical systems (MEMS) approach to AFM design has the potential to significantly reduce the cost and complexity of the AFM, expanding its utility beyond current applications. This paper presents an on-chip AFM based on a silicon-on-insulator MEMS fabrication process. The device features integrated xy electrostatic actuators and electrothermal sensors as well as an AlN piezoelectric layer for out-of-plane actuation and integrated deflection sensing of a microcantilever. The three-degree-of-freedom design allows the probe scanner to obtain topographic tapping-mode AFM images with an imaging range of up to 8 μm × 8 μm in closed loop.
Stanford chemical engineers have developed a soft, flexible plastic electrode that stretches like rubber but carries electricity like wires — ideal for brain interfaces and other implantable electronics, they report in an open-access March 10 paper in Science Advances.
Developed by Zhenan Bao, a professor of chemical engineering, and his team, the material is still a laboratory prototype, but the team hopes to develop it as part of their long-term focus on creating flexible materials that interface with the human body.
“One thing about the human brain that a lot of people don’t know is that it changes volume throughout the day,” says postdoctoral research fellow Yue Wang, the first author on the paper. “It swells and de-swells.” The current generation of electronic implants can’t stretch and contract with the brain, making it complicated to maintain a good connection.
To create this flexible electrode, the researchers began with PEDOT:PSS, a plastic that has high electrical conductivity and biocompatibility (it can safely be brought into contact with the human body) but is brittle. So they added a “STEC” (stretchability and electrical conductivity) enhancer molecule, similar to the kind of additives used to thicken soups in industrial kitchens.
This additive transformed the plastic’s chunky and brittle molecular structure into a fishnet pattern with holes in the strands to allow the material to stretch and deform. The resulting plastic remained very conductive even when stretched to 800 percent of its original length.
Scientists at SLAC National Accelerator Laboratory, UCLA, the Materials Science Institute of Barcelona, and Samsung Advanced Institute of Technology were also involved in the research, which was funded by Samsung Electronics and the Air Force Office of Scientific Research.
Stanford University School of Engineering | Stretchable electrodes pave way for flexible electronics
Abstract of A highly stretchable, transparent, and conductive polymer
Previous breakthroughs in stretchable electronics stem from strain engineering and nanocomposite approaches. Routes toward intrinsically stretchable molecular materials remain scarce but, if successful, will enable simpler fabrication processes, such as direct printing and coating, mechanically robust devices, and more intimate contact with objects. We report a highly stretchable conducting polymer, realized with a range of enhancers that serve dual functions to change morphology and as conductivity-enhancing dopants in poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) (PEDOT:PSS). The polymer films exhibit conductivities comparable to the best reported values for PEDOT:PSS, with higher than 3100 S/cm under 0% strain and higher than 4100 S/cm under 100% strain—among the highest for reported stretchable conductors. It is highly durable under cyclic loading, with the conductivity maintained at 3600 S/cm even after 1000 cycles to 100% strain. The conductivity remained above 100 S/cm under 600% strain, with a fracture strain as high as 800%, which is superior to even the best silver nanowire– or carbon nanotube–based stretchable conductor films. The combination of excellent electrical and mechanical properties allowed it to serve as interconnects for field-effect transistor arrays with a device density that is five times higher than typical lithographically patterned wavy interconnects.
Brain has more than 100 times higher computational capacity than previously thought, say UCLA scientists
The brain has more than 100 times higher computational capacity than was previously thought, a UCLA team has discovered.
This finding, which could rewrite neuroscience textbooks, suggests that our brains are both analog and digital computers, and it could lead to new approaches for treating neurological disorders and developing brain-like computers, according to the researchers.
Dendrites have been considered simple passive conduits of signals. But by working with animals that were moving around freely, the UCLA team showed that dendrites are in fact electrically active — generating nearly 10 times more spikes than the soma (neuron cell body).
Fundamentally changes our understanding of brain computation
The finding, reported in the March 9 issue of the journal Science, challenges the long-held belief that spikes in the soma are the primary way in which perception, learning and memory formation occur.
“Dendrites make up more than 90 percent of neural tissue,” said UCLA neurophysicist Mayank Mehta, the study’s senior author. “Knowing they are much more active than the soma fundamentally changes the nature of our understanding of how the brain computes information.”
“This is a major departure from what neuroscientists have believed for about 60 years,” said Mehta, a UCLA professor of physics and astronomy, of neurology and of neurobiology.
Because the dendrites are nearly 100 times larger in volume than the neuronal centers, Mehta said, the large number of dendritic spikes taking place could mean that the brain has more than 100 times the computational capacity previously thought.
Study with moving rats made discovery possible
Previous studies have been limited to stationary rats, because scientists have found that placing electrodes in the dendrites themselves while the animals were moving actually killed those cells. But the UCLA team developed a new technique that involves placing the electrodes near, rather than in, the dendrites.
Using that approach, the scientists measured dendrites’ activity for up to four days in rats that were allowed to move freely within a large maze. Taking measurements from the posterior parietal cortex, the part of the brain that plays a key role in movement planning, the researchers found far more activity in the dendrites than in the somas — approximately five times as many spikes while the rats were sleeping, and up to 10 times as many when they were exploring.
Looking at the soma to understand how the brain works has provided a framework for numerous medical and scientific questions — from diagnosing and treating diseases to how to build computers. But, Mehta said, that framework was based on the understanding that the cell body makes the decisions, and that the process is digital.
“What we found indicates that such decisions are made in the dendrites far more often than in the cell body, and that such computations are not just digital, but also analog,” Mehta said. “Due to technological difficulties, research in brain function has largely focused on the cell body. But we have discovered the secret lives of neurons, especially in the extensive neuronal branches. Our results substantially change our understanding of how neurons compute.”
Funding was provided by the University of California.
Abstract of Dynamics of cortical dendritic membrane potential and spikes in freely behaving rats
Neural activity in vivo is primarily measured using extracellular somatic spikes, which provide limited information about neural computation. Hence, it is necessary to record from neuronal dendrites, which generate dendritic action potentials (DAP) and profoundly influence neural computation and plasticity. We measured neocortical sub- and suprathreshold dendritic membrane potential (DMP) from putative distal-most dendrites using tetrodes in freely behaving rats over multiple days with a high degree of stability and sub-millisecond temporal resolution. DAP firing rates were several fold larger than somatic rates. DAP rates were modulated by subthreshold DMP fluctuations, which were far larger than DAP amplitude, indicating hybrid, analog-digital coding in the dendrites. Parietal DAP and DMP exhibited egocentric spatial maps comparable to pyramidal neurons. These results have important implications for neural coding and plasticity.
An international team led by IBM has created the world’s smallest magnet, using a single atom of rare-earth element holmium, and stored one bit of data on it over several hours.
The achievement represents the ultimate limit of the classical approach to high-density magnetic storage media, according to a paper published March 8 in the journal Nature.
Currently, hard disk drives use about 100,000 atoms to store a single bit. The ability to read and write one bit on one atom may lead to significantly smaller and denser storage devices in the future. (The researchers are currently working in an ultrahigh vacuum at 1.2 kelvin, a temperature near absolute zero.)
Using a scanning tunneling microscope* (STM), the researchers also showed that a device using two magnetic atoms could be written and read independently, even when they were separated by just one nanometer.
The researchers believe this tight spacing could eventually yield magnetic storage that is 1,000 times denser than today’s hard disk drives and solid-state memory chips, storing 1,000 times more information in the same space. Data centers, computers, and personal devices could then be radically smaller and more powerful.
Researchers at EPFL in Switzerland, University of Chinese Academy of Sciences in Hong Kong, University of Göttingen in Germany, Universität Zürich in Switzerland, Institute of Basic Science, Center for Quantum Nanoscience in South Korea, and Ewha Womans University in South Korea were also on the research team.
* The STM was developed in 1981, earning its inventors, Gerd Binnig and Heinrich Rohrer (at IBM Zürich), the Nobel Prize in Physics in 1986. IBM is planning future scanning tunneling microscope studies to investigate the potential of performing quantum information processing using individual magnetic atoms. Earlier this week, IBM announced it will be building the world’s first commercial quantum computers for business and science.
IBM Research | IBM Research Created the World’s Smallest Magnet — an Atom
Two research teams are developing new ways to communicate with robots and one day shape them into the kind of productive workers featured in the AMC TV show HUMANS (now in its second season).
Programming robots to function in a real-world environment is normally a complex process. But now a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University is creating a system that lets people correct robot mistakes instantly by simply thinking.
In the initial experiment, the system uses data from an electroencephalography (EEG) helmet to correct robot performance on an object-sorting task. Novel machine-learning algorithms enable the system to classify brain waves within 10 to 30 milliseconds.
While the system currently handles relatively simple binary-choice activities, we may be able one day to control robots in much more intuitive ways. “Imagine being able to instantaneously tell a robot to do a certain action, without needing to type a command, push a button, or even say a word,” says CSAIL Director Daniela Rus. “A streamlined approach like that would improve our abilities to supervise factory robots, driverless cars, and other technologies we haven’t even invented yet.”
The team used a humanoid robot named “Baxter” from Rethink Robotics, the company led by former CSAIL director and iRobot co-founder Rodney Brooks.
MITCSAIL | Brain-controlled Robots
Intuitive human-robot interaction
The system detects brain signals called “error-related potentials” (generated whenever our brains notice a mistake) to determine if the human agrees with a robot’s decision.
“As you watch the robot, all you have to do is mentally agree or disagree with what it is doing,” says Rus. “You don’t have to train yourself to think in a certain way — the machine adapts to you, and not the other way around.” Or if the robot’s not sure about its decision, it can trigger a human response to get a more accurate answer.
The team believes that future systems could extend to more complex multiple-choice tasks. The system could even be useful for people who can’t communicate verbally: the robot could be controlled via a series of several discrete binary choices, similar to how paralyzed locked-in patients spell out words with their minds.
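At its core, the closed-loop decision is binary: does the post-decision EEG window look like an error-related potential or not? The sketch below illustrates that gating logic with a hypothetical template-correlation score and threshold; the team’s actual system uses trained machine-learning classifiers, and the template, threshold, and signal shapes here are illustrative assumptions only.

```python
import numpy as np

def errp_score(eeg_window, template):
    """Normalized correlation between a post-decision EEG window and an
    idealized ErrP template; higher scores suggest the brain flagged an error."""
    a = eeg_window - eeg_window.mean()
    b = template - template.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def human_disagrees(eeg_window, template, threshold=0.5):
    """Binary gate: treat a strong template match as 'the human saw a mistake'."""
    return errp_score(eeg_window, template) > threshold

# Toy demo: one window containing the template plus noise, one pure noise.
rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, np.pi, 64))       # idealized ErrP deflection
match = template + 0.1 * rng.standard_normal(64)   # error perceived
nomatch = 0.1 * rng.standard_normal(64)            # no error perceived
print(human_disagrees(match, template))    # robot should reverse its choice
print(human_disagrees(nomatch, template))  # robot proceeds
```

In the real system this decision must complete within tens of milliseconds, which is why the paper emphasizes fast classification rather than the simple correlation used in this sketch.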
The project was funded in part by Boeing and the National Science Foundation. An open-access paper will be presented at the IEEE International Conference on Robotics and Automation (ICRA) conference in Singapore this May.
Here, robot, Fetch!
But what if the robot is still confused? Researchers in Brown University’s Humans to Robots Lab have an app for that.
“Fetching objects is an important task that we want collaborative robots to be able to do,” said computer science professor Stefanie Tellex. “But it’s easy for the robot to make errors, either by misunderstanding what we want, or by being in situations where commands are ambiguous. So what we wanted to do here was come up with a way for the robot to ask a question when it’s not sure.”
Tellex’s lab previously developed an algorithm that enables robots to receive speech commands as well as information from human gestures. But it ran into problems when there were lots of very similar objects in close proximity to each other. For example, on the table above, simply asking for “a marker” isn’t specific enough, and it might not be clear which one a person is pointing to if a number of markers are clustered close together.
“What we want in these situations is for the robot to be able to signal that it’s confused and ask a question rather than just fetching the wrong object,” Tellex explained.
The new algorithm does just that, enabling the robot to quantify how certain it is that it knows what a user wants. When its certainty is high, the robot will simply hand over the object as requested. When it’s not so certain, the robot makes its best guess about what the person wants, then asks for confirmation by hovering its gripper over the object and asking, “this one?”
David Whitney | Reducing Errors in Object-Fetching Interactions through Social Feedback
One of the important features of the system is that the robot doesn’t ask questions with every interaction; it asks intelligently.
And even though the system asks only a very simple question, it’s able to make important inferences based on the answer. For example, say a user asks for a marker and there are two markers on a table. If the user tells the robot that its first guess was wrong, the algorithm deduces that the other marker must be the one that the user wants, and will hand that one over without asking another question. Those kinds of inferences, known as “implicatures,” make the algorithm more efficient.
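The ask-only-when-uncertain behavior and the implicature update can be illustrated with a simple threshold policy over the robot’s belief about which item is wanted. This is a hypothetical sketch, not the authors’ POMDP model: the threshold value and item names are made up, and the real system reasons about uncertainty in a richer probabilistic framework.

```python
def fetch_policy(belief, ask_threshold=0.75):
    """Hand over the item if confident enough; otherwise ask about the best guess.
    belief: dict mapping item -> probability that it is the requested one."""
    best = max(belief, key=belief.get)
    if belief[best] >= ask_threshold:
        return ("handover", best)
    return ("ask", best)  # hover the gripper and ask "This one?"

def update_on_no(belief, rejected):
    """Implicature: a 'no' rules out the rejected item; renormalize the rest."""
    remaining = {k: v for k, v in belief.items() if k != rejected}
    total = sum(remaining.values())
    return {k: v / total for k, v in remaining.items()}

# Two similar markers on the table: the robot is unsure, so it asks first.
belief = {"red marker": 0.55, "blue marker": 0.45}
action, item = fetch_policy(belief)   # ("ask", "red marker")
belief = update_on_no(belief, item)   # user says "no" -> blue marker is certain
action, item = fetch_policy(belief)   # ("handover", "blue marker")
```

After the single “no,” the remaining belief collapses onto the other marker, so the robot hands it over without a second question, which is exactly the efficiency gain the implicature provides.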
In future work, Tellex and her team would like to combine the algorithm with more robust speech recognition systems, which might further increase the system’s accuracy and speed. “Currently we do not consider the parse of the human’s speech. We would like the model to understand prepositional phrases (‘on the left,’ ‘nearest to me’). This would allow the robot to understand how items are spatially related to other items through language.”
Ultimately, Tellex hopes, systems like this will help robots become useful collaborators both at home and at work.
An open-access paper on the DARPA-funded research will also be presented at the International Conference on Robotics and Automation.
Abstract of Correcting Robot Mistakes in Real Time Using EEG Signals
Communication with a robot using brain activity from a human collaborator could provide a direct and fast feedback loop that is easy and natural for the human, thereby enabling a wide variety of intuitive interaction tasks. This paper explores the application of EEG-measured error-related potentials (ErrPs) to closed-loop robotic control. ErrP signals are particularly useful for robotics tasks because they are naturally occurring within the brain in response to an unexpected error. We decode ErrP signals from a human operator in real time to control a Rethink Robotics Baxter robot during a binary object selection task. We also show that utilizing a secondary interactive error-related potential signal generated during this closed-loop robot task can greatly improve classification performance, suggesting new ways in which robots can acquire human feedback. The design and implementation of the complete system is described, and results are presented for real-time closed-loop and open-loop experiments as well as offline analysis of both primary and secondary ErrP signals. These experiments are performed using general population subjects that have not been trained or screened. This work thereby demonstrates the potential for EEG-based feedback methods to facilitate seamless robotic control, and moves closer towards the goal of real-time intuitive interaction.
Abstract of Reducing Errors in Object-Fetching Interactions through Social Feedback
Fetching items is an important problem for a social robot. It requires a robot to interpret a person’s language and gesture and use these noisy observations to infer what item to deliver. If the robot could ask questions, it would help the robot be faster and more accurate in its task. Existing approaches either do not ask questions, or rely on fixed question-asking policies. To address this problem, we propose a model that makes assumptions about cooperation between agents to perform richer signal extraction from observations. This work defines a mathematical framework for an item-fetching domain that allows a robot to increase the speed and accuracy of its ability to interpret a person’s requests by reasoning about its own uncertainty as well as processing implicit information (implicatures). We formalize the item-delivery domain as a Partially Observable Markov Decision Process (POMDP), and approximately solve this POMDP in real time. Our model improves speed and accuracy of fetching tasks by asking relevant clarifying questions only when necessary. To measure our model’s improvements, we conducted a real world user study with 16 participants. Our method achieved greater accuracy and a faster interaction time compared to state-of-the-art baselines. Our model is 2.17 seconds faster (25% faster) than a state-of-the-art baseline, while being 2.1% more accurate.
Here’s a radical new idea for creating new GMO (genetically modified organism) plants that may appeal to staunch organic-food consumers/farmers and even #NonGMOProjectVerified advocates: don’t insert a foreign gene into today’s domestic plants — instead, delete existing genes in semi-domesticated or even wild plants to make those plants more domestic, reducing pesticide use in the process.
“All of the plants we eat today are mutants, but the crops we have now were selected for over thousands of years, and their mutations … such as reduced bitterness and those that facilitate easy harvest … arose by chance,” says Michael Palmgren, a botanist who heads an interdisciplinary think tank* called “Plants for a Changing World” at the University of Copenhagen. “With gene editing, we can create ‘biologically inspired organisms’ in that we don’t want to improve nature, we want to benefit from what nature has already created.”
Palmgren is senior author of an open-access review published March 2 in the journal Trends in Plant Science.
How to turn nitrogen in the atmosphere into fertilizer, reducing environmental damage
This strategy could also address problems from pesticide use and the damaging impact of large-scale agriculture on the environment. For example, runoff from excess nitrogen in fertilizers is a common pollutant; however, wild legumes, through symbiosis with bacteria, can turn nitrogen available in the atmosphere into their own fertilizer, he suggests.
Out of the more than 300,000 plant species in existence, fewer than 200 are commercially important, and only three species — rice, wheat, and maize — account for most of the plant matter that humans consume, partly because in the history of agriculture, mutations arose that made these crops the easiest to harvest, the researchers note.
But with CRISPR technology, we don’t have to wait for nature to help us domesticate plants, argue the researchers. Instead, gene editing could make, for example, wild legumes, quinoa, or amaranth, which are already sustainable and nutritious, more farmable.
The approach has already been successful in accelerating domestication of undervalued crops using less precise gene-editing methods. For example, researchers used chemical mutagenesis to induce random mutations in weeping rice grass, an Australian wild relative of domestic rice, to make it more likely to hold onto its seeds after ripening. And in wild field cress, a weedy plant, scientists used RNA interference to silence genes involved in fatty acid synthesis, resulting in improved seed oil quality.
Palmgren’s group published a related open-access paper two years ago on using gene editing to make domesticated plants more “wild” and thus hardier for organic farmers.
While we’re at it, what about pharming (creating pharmaceuticals from plants) — using genetically modified wild plants?
* Supported by the University of Copenhagen Excellence Programme for Interdisciplinary Research.
Abstract of Accelerating the Domestication of New Crops: Feasibility and Approaches
The domestication of new crops would promote agricultural diversity and could provide a solution to many of the problems associated with intensive agriculture. We suggest here that genome editing can be used as a new tool by breeders to accelerate the domestication of semi-domesticated or even wild plants, building a more varied foundation for the sustainable provision of food and fodder in the future. We examine the feasibility of such plants from biological, social, ethical, economic, and legal perspectives.
Japanese researchers have developed an amoeba-like shape-changing molecular robot — assembled from biomolecules such as DNA, proteins, and lipids — that could act as a programmable and controllable robot for treating live culturing cells or monitoring environmental pollution, for example.
This is the first time a molecular robotic system has been able to recognize signals and control its shape-changing function, and such molecular robots could in the near future function in a way similar to living organisms, according to the researchers.
Developed by a research group at Tohoku University and Japan Advanced Institute of Science and Technology, the molecular robot integrates molecular machines within an artificial cell membrane and is about one micrometer in diameter — similar in size to human cells. It can start and stop its shape-changing function in response to a specific DNA signal.
The movement force is generated by molecular actuators (microtubules) controlled by a molecular clutch (composed of DNA and kinesin — a “walker” that carries molecules along microtubules in the body). The shape of the robot’s body (artificial cell membrane, or liposome — a vesicle made from a lipid bilayer) is changed (from static to active) by the actuator, triggered by specific DNA signals activated by UV irradiation.
The realization of a molecular robot whose components are designed at a molecular level and that can function in a small and complicated environment, such as the human body, is expected to significantly expand the possibilities of robotics engineering, according to the researchers.*
“With more than 20 chemicals at varying concentrations, it took us a year and a half to establish good conditions for working our molecular robots,” says Associate Professor Shin-ichiro Nomura at Tohoku University’s Graduate School of Engineering, who led the study. “It was exciting to see the robot shape-changing motion through the microscope. It meant our designed DNA clutch worked perfectly, despite the complex conditions inside the robot.”
Programmable by DNA computing devices
The research results were published in an open-access paper in Science Robotics on March 1, 2017.
The authors say that “combining other molecular devices would lead to the realization of a molecular robot with advanced functions. For example, artificial nanopores, such as an artificial channel composed of DNA, could be used to sense signal molecules in the surrounding environments through the channel.
“In addition, the behavior of a molecular robot could be programmed by DNA computing devices, such as judging the condition of environments. These implementations could allow for the development of molecular robots capable of chemotaxis [movement in a direction corresponding to a gradient of increasing or decreasing concentration of a particular substance], [similar to] white blood cells, and beyond.”
The research was supported by the JSPS KAKENHI, AMED-CREST and Tohoku University-DIARE.
* In the current design, “there are still limitations in the functions of the robot. For example, the switching of robot behavior is not reversible. The shape change is not directional and as yet not possible for complex tasks, for example, locomotion. However, to the best of our knowledge, this is the first implementation of a molecular robot that can control its shape-changing behavior in response to specific signal molecules.” — Yusuke Sato et al./Science Robotics
Abstract of Micrometer-sized molecular robot changes its shape in response to signal molecules
Rapid progress in nanoscale bioengineering has allowed for the design of biomolecular devices that act as sensors, actuators, and even logic circuits. Realization of micrometer-sized robots assembled from these components is one of the ultimate goals of bioinspired robotics. We constructed an amoeba-like molecular robot that can express continuous shape change in response to specific signal molecules. The robot is composed of a body, an actuator, and an actuator-controlling device (clutch). The body is a vesicle made from a lipid bilayer, and the actuator consists of proteins, kinesin, and microtubules. We made the clutch using designed DNA molecules. It transmits the force generated by the motor to the membrane, in response to a signal molecule composed of another sequence-designed DNA with chemical modifications. When the clutch was engaged, the robot exhibited continuous shape change. After the robot was illuminated with light to trigger the release of the signal molecule, the clutch was disengaged, and consequently, the shape-changing behavior was successfully terminated. In addition, the reverse process—that is, initiation of shape change by input of a signal—was also demonstrated. These results show that the components of the robot were consistently integrated into a functional system. We expect that this study can provide a platform to build increasingly complex and functional molecular systems with controllable motility.
A research team led by the University of Minnesota has discovered a way to rewarm large-scale animal heart valves and blood vessels preserved at very low (cryogenic) temperatures without damaging the tissue. The discovery could one day save millions of human lives by making it possible to create cryogenic banks of tissues and organs for transplantation.
The research was published March 1 in an open-access paper in Science Translational Medicine.
Long-term preservation methods like vitrification cool biological samples to an ice-free glassy state, using very low temperatures between -160 and -196 degrees Celsius, but tissues larger than 1 milliliter (0.03 fluid ounce) often suffer major damage during the rewarming process, making the method unusable for larger tissues.
In the new research, the researchers were able to restore 50 milliliters (1.7 fluid ounces) of tissue with warming at more than 130°C/minute without damage.
Radiofrequency inductive heating of iron nanoparticles
To achieve that, they developed a revolutionary new method using silica-coated iron-oxide nanoparticles dispersed throughout a cryoprotectant solution around the tissue. The nanoparticles act as tiny heaters around the tissue when they are activated using noninvasive radiofrequency inductive energy, rapidly and uniformly warming the tissue.
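Because the particles are dispersed throughout the solution and heat the whole volume at once, the achievable warming rate depends on particle loading and RF absorption rather than on sample size. A back-of-the-envelope sketch of that scaling follows; the specific absorption rate, concentration, and heat capacity below are illustrative assumptions, not values from the paper.

```python
# Back-of-the-envelope warming rate for magnetic nanowarming.
# All numeric values are illustrative assumptions, not from the paper.
sar = 500.0           # W per gram of nanoparticles (specific absorption rate in the RF field)
concentration = 0.01  # grams of nanoparticles per mL of cryoprotectant solution
rho_cp = 3.0          # J/(mL*K): assumed volumetric heat capacity of the solution

volumetric_power = sar * concentration    # W/mL deposited uniformly by the particles
rate_k_per_s = volumetric_power / rho_cp  # K/s temperature rise, independent of volume
rate_c_per_min = rate_k_per_s * 60.0
print(f"{rate_c_per_min:.0f} °C/min")     # → 100 °C/min with these assumed numbers
```

Even with these rough placeholder numbers, uniform volumetric heating lands in the same regime as the >130°C/minute the researchers report, which convective warming cannot reach uniformly in large samples.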
The results showed that none of the tissues displayed signs of harm — unlike control samples using vitrification and rewarmed slowly over ice or using convection warming. The researchers were also able to successfully wash away the iron oxide nanoparticles from the sample following the warming.
“This is the first time that anyone has been able to scale up to a larger biological system and demonstrate successful, fast, and uniform warming of hundreds of degrees Celsius per minute of preserved tissue without damaging the tissue,” said University of Minnesota mechanical engineering and biomedical engineering professor John Bischof, the senior author of the study.
Bischof said there is a strong possibility they could scale up to even larger systems, like organs. The researchers plan to start with rodent organs (such as rat and rabbit) and then scale up to pig organs and then, hopefully, human organs. The technology might also be applied beyond cryogenics, including delivering lethal pulses of heat to cancer cells.
The researchers’ goal is to eliminate transplant waiting lists. Currently, hearts and lungs donated for transplantation must be discarded because these tissues cannot be kept on ice for longer than a matter of hours, according to the researchers.*
It will be interesting to see if the technology can one day be extended to cryonics.
The research was funded by the National Science Foundation (NSF), National Institutes of Health (NIH), U.S. Army Medical Research and Materiel Command, Minnesota Futures Grant from the University of Minnesota, and the University of Minnesota Carl and Janet Kuhrmeyer Chair in Mechanical Engineering. Researchers at Carnegie Mellon University, Clemson University and Tissue Testing Technologies LLC were also involved in the study.
* “A major limitation of transplantation is the ischemic injury that tissue and organs sustain during the time between recovery from the donor and implantation in the recipient. The maximum tolerable organ preservation for transplantation by hypothermic storage is typically 4 hours for heart and lungs; 8 to 12 hours for liver, intestine, and pancreas; and up to 36 hours for kidney transplants. In many cases, such limits actually prevent viable tissue or organs from reaching recipients. For instance, more than 60% of donor hearts and lungs are not used or transplanted partly because their maximum hypothermic preservation times have been exceeded. Further, if only half of these discarded organs were transplanted, then it has been estimated that wait lists for these organs could be extinguished within 2 to 3 years.” — Navid Manuchehrabadi et al./Science Translational Medicine
Abstract of Improved tissue cryopreservation using inductive heating of magnetic nanoparticles
Vitrification, a kinetic process of liquid solidification into glass, poses many potential benefits for tissue cryopreservation including indefinite storage, banking, and facilitation of tissue matching for transplantation. To date, however, successful rewarming of tissues vitrified in VS55, a cryoprotectant solution, can only be achieved by convective warming of small volumes on the order of 1 ml. Successful rewarming requires both uniform and fast rates to reduce thermal mechanical stress and cracks, and to prevent rewarming phase crystallization. We present a scalable nanowarming technology for 1- to 80-ml samples using radiofrequency-excited mesoporous silica–coated iron oxide nanoparticles in VS55. Advanced imaging including sweep imaging with Fourier transform and microcomputed tomography was used to verify loading and unloading of VS55 and nanoparticles and successful vitrification of porcine arteries. Nanowarming was then used to demonstrate uniform and rapid rewarming at >130°C/min in both physical (1 to 80 ml) and biological systems including human dermal fibroblast cells, porcine arteries and porcine aortic heart valve leaflet tissues (1 to 50 ml). Nanowarming yielded viability that matched control and/or exceeded gold standard convective warming in 1- to 50-ml systems, and improved viability compared to slow-warmed (crystallized) samples. Last, biomechanical testing displayed no significant biomechanical property changes in blood vessel length or elastic modulus after nanowarming compared to untreated fresh control porcine arteries. In aggregate, these results demonstrate new physical and biological evidence that nanowarming can improve the outcome of vitrified cryogenic storage of tissues in larger sample volumes.
Imagine a single flexible polymer fiber 200 micrometers across — about the width of a human hair — that can deliver a combination of optical, electrical, and chemical signals between different brain regions, with the softness and flexibility of brain tissue — allowing neuroscientists to leave implants in place and have them retain their functions over much longer periods than is currently possible with typical stiff, metallic fibers.
That’s what a team of MIT scientists has reported in the journal Nature Neuroscience. (Previous research efforts in neuroscience have generally relied on separate devices: needles to inject viral vectors for optogenetics, optical fibers for light delivery, and arrays of electrodes for recording, adding complication and the need for tricky alignments among the different devices.)
For example, in tests with lab mice, the researchers were able to inject viral vectors that carried genes called opsins (which sensitize neurons to light) through one of two fluid channels in the fiber. They waited for the opsins to take effect, then sent a pulse of light through the optical waveguide in the center, and recorded the resulting neuronal activity, using six electrodes to pinpoint specific reactions. All of this was done through a single flexible fiber.
“It can deliver the virus [containing the opsins] straight to the cell, and then stimulate the response and record the activity — and [the fiber] is sufficiently small and biocompatible so it can be kept in for a long time,” says Polina Anikeeva, a professor in the MIT Department of Materials Science and Engineering.
Since each fiber is so small, “potentially, we could use many of them to observe different regions of activity,” she says. In their initial tests, the researchers placed probes in two different brain regions at once, varying which regions they used from one experiment to the next, and measuring how long it took for responses to travel between them.
The key ingredient that made this multifunctional fiber possible was the development of conductive “wires” that maintained the needed flexibility while also carrying electrical signals well. The team engineered a composite of conductive polyethylene doped with graphite flakes. The polyethylene was initially formed into layers, sprinkled with graphite flakes, then compressed; then another pair of layers was added and compressed, and then another, and so on.
The team aims to reduce the width of the fibers further, to make their properties even closer to those of the neural tissue and use material that is even softer to match the adjacent tissue.
The research team included members of MIT’s Research Laboratory of Electronics, Department of Electrical Engineering and Computer Science, McGovern Institute for Brain Research, Department of Chemical Engineering, and Department of Mechanical Engineering, as well as researchers at Tohoku University in Japan and Virginia Polytechnic Institute. It was supported by the National Institute of Neurological Disorders and Stroke, the National Science Foundation, the MIT Center for Materials Science and Engineering, the Center for Sensorimotor Neural Engineering, and the McGovern Institute for Brain Research.
Abstract of One-step optogenetics with multifunctional flexible polymer fibers
Optogenetic interrogation of neural pathways relies on delivery of light-sensitive opsins into tissue and subsequent optical illumination and electrical recording from the regions of interest. Despite the recent development of multifunctional neural probes, integration of these modalities in a single biocompatible platform remains a challenge. We developed a device composed of an optical waveguide, six electrodes and two microfluidic channels produced via fiber drawing. Our probes facilitated injections of viral vectors carrying opsin genes while providing collocated neural recording and optical stimulation. The miniature (<200 μm) footprint and modest weight (<0.5 g) of these probes allowed for multiple implantations into the mouse brain, which enabled opto-electrophysiological investigation of projections from the basolateral amygdala to the medial prefrontal cortex and ventral hippocampus during behavioral experiments. Fabricated solely from polymers and polymer composites, these flexible probes minimized tissue response to achieve chronic multimodal interrogation of brain circuits with high fidelity.
Futurists worldwide plan to celebrate March 1 as World Future Day with a 24-hour conversation about the world’s potential futures, challenges, and opportunities.
At 12 noon your local time on March 1, you can click on a Google hangout at goo.gl/4hCJq3 and join the conversation* (log in with a Google account). The conversation starts at 12 noon in Auckland, New Zealand (midnight in New York) and moves westward around the world, ending at 12 noon local time in Honolulu.
The World Futures Studies Federation, Association of Professional Futurists, and Humanity+ have joined forces with The Millennium Project** to invite their members and the public to participate.
“This is an open discussion about the future,” says Jerome Glenn, CEO of The Millennium Project. “People will be encouraged to share their ideas about how to build a better future.”
This is the fourth year The Millennium Project has done this. Previous World Future Days have discussed issues like:
- Has the world become too complex to understand and manage?
- Can collective intelligence and smart cities anticipate and manage such complexity?
- Will there be a phase shift of global attitudes in the near future about what is important about the future?
- Can new concepts of employment be created to prevent increasing unemployment caused by the acceleration of technological changes?
- Can self-organization on the Internet reduce dependence on ill-informed politicians?
- Can virtual currencies work without supporting organized crime?
- How can we break free from mental constraints preventing truly innovative valuable ideas and understand how our brains might sabotage us (rational vs. irrational fear, traumatic memories, and defense mechanisms)?
- How can we connect our brains to become more intelligent?
* If you join the video conference and see that the limit of interactive video participation has been reached, you will still be able to see and hear, as well as type in the chat box, but your video will not be seen until some leave the conversation. As people drop out, new video slots will open up. You can also tweet a comment to @millenniumproj and facilitators will read it live in the video conference.
** The Millennium Project is an independent non-profit global participatory futures research think tank of futurists, scholars, business planners, and policy makers who work for international organizations, governments, corporations, non-governmental organizations, and universities. It produces the annual “State of the Future” reports, the “Futures Research Methodology” series, the Global Futures Intelligence System (GFIS), and special studies.
Drexel University biomedical engineers and Princeton University psychologists have used a wearable brain-imaging device called functional near-infrared spectroscopy (fNIRS) to measure brain synchronization when humans interact. fNIRS uses light to measure neural activity in the cortex of the brain (based on blood-oxygenation changes) during real-life situations and can be worn like a headband.
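For readers curious how light measurements become the "blood-oxygenation changes" mentioned above: fNIRS analyses typically convert light-attenuation changes at two wavelengths into oxygenated- and deoxygenated-hemoglobin concentration changes via the modified Beer–Lambert law. The Python sketch below illustrates the arithmetic only; the extinction coefficients, source–detector separation, and differential pathlength factor are illustrative textbook-style values, not parameters taken from this study.

```python
import numpy as np

# Modified Beer-Lambert law: delta_OD = eps * delta_c * d * DPF,
# measured at two wavelengths so the two unknowns (HbO, HbR) can be solved.
# Extinction coefficients (1/(mM*cm)); rough illustrative values.
eps = np.array([[1.49, 3.84],   # ~690 nm: [HbO, HbR]
                [2.53, 1.80]])  # ~830 nm: [HbO, HbR]
d = 3.0    # source-detector separation in cm (assumed)
dpf = 6.0  # differential pathlength factor (assumed)

def od_to_concentration(delta_od):
    """Invert the 2x2 system to get [dHbO, dHbR] in mM from OD changes."""
    return np.linalg.solve(eps * d * dpf, delta_od)
```

With two wavelengths and two chromophores the inversion is just a 2×2 linear solve; real pipelines add filtering and motion-artifact correction on top.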
(KurzweilAI recently covered research with a fNIRS brain-computer interface that allows completely locked-in patients to communicate.)
Mirroring the speaker’s brain activity
The researchers found that a listener’s brain activity (in brain areas associated with speech comprehension) mirrors the speaker’s brain when he or she is telling a story about a real-life experience, with about a five-second delay. They also found that higher coupling is associated with better understanding.
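The mirroring-with-delay finding is, at its core, a lagged inter-subject correlation: slide the listener's signal back in time and find the offset at which it best matches the speaker's. Here is a minimal Python sketch of that idea on synthetic data; it is an illustration of the concept, not the authors' actual analysis pipeline.

```python
import numpy as np

def lagged_correlation(speaker, listener, fs, max_lag_s=10):
    """Correlate listener vs. speaker signals at each nonnegative lag.

    Returns the best-matching delay in seconds and the correlation curve.
    """
    max_lag = int(max_lag_s * fs)
    corrs = []
    for lag in range(max_lag + 1):
        a = speaker[: len(speaker) - lag] if lag else speaker
        b = listener[lag:]
        corrs.append(np.corrcoef(a, b)[0, 1])
    best_lag = int(np.argmax(corrs))
    return best_lag / fs, corrs

# Synthetic demo: a listener signal that echoes the speaker 5 s later.
fs = 10  # Hz, a typical fNIRS sampling rate (assumed)
rng = np.random.default_rng(0)
speaker = rng.standard_normal(600)
listener = np.roll(speaker, 5 * fs) + 0.1 * rng.standard_normal(600)
delay, _ = lagged_correlation(speaker, listener, fs)
```

On this synthetic pair the correlation curve peaks at a 5-second lag, mirroring the kind of speaker-to-listener delay the study reports.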
The researchers believe the system can be used to offer important information about how to better communicate in many different environments, such as how people learn in classrooms and how to improve business meetings and doctor-patient communication. They also mentioned uses in analyzing political rallies and how people handle cable news.
“We now have a tool that can give us richer information about the brain during everyday tasks — such as person-to-person communication — that we could not receive in artificial lab settings or from single brain studies,” said Hasan Ayaz, PhD, an associate research professor in Drexel’s School of Biomedical Engineering, Science and Health Systems, who led the research team.
Traditional brain imaging methods like fMRI have limitations. In particular, fMRI requires subjects to lie motionless in a noisy scanning environment, which makes it impossible to simultaneously scan the brains of multiple individuals who are speaking face-to-face. That is why the Drexel researchers turned to a portable fNIRS system, which could probe the brain-to-brain coupling question in natural settings.
For their study, a native English speaker and two native Turkish speakers told an unrehearsed, real-life story in their native language. Their stories were recorded and their brains were scanned using fNIRS. Fifteen English speakers then listened to the recording, in addition to a story that was recorded at a live storytelling event.
The researchers targeted the prefrontal and parietal areas of the brain, which include cognitive and higher order areas that are involved in a person’s capacity to discern beliefs, desires, and goals of others. They hypothesized that a listener’s brain activity would correlate with the speaker’s only when listening to a story they understood (the English version). A second objective of the study was to compare the fNIRS results with data from a similar study that had used fMRI to compare the two methods.
They found that when the fNIRS measured the oxygenation and deoxygenation of blood in the test subjects’ brains, the listeners’ brain activity matched only with the English speaker’s.* These results also correlated with the previous fMRI study.
The researchers believe the new research supports fNIRS as a viable future tool to study brain-to-brain coupling during social interaction. One can also imagine possible invasive uses in areas such as law enforcement and military interrogation.
The research was published in open-access Scientific Reports on Monday, Feb. 27.
* “During brain-to-brain coupling, activity in areas of prefrontal [in the speaker] and parietal cortex [in the listeners] previously reported to be involved in sentence comprehension were robustly correlated across subjects, as revealed in the inter-subject correlation analysis. As these are task-related (active listening) activation periods (not resting, etc.), the correlations reflect modulation of these regions by the time-varying content of the narratives, and comprise linguistic, conceptual and affective processing.” — Yichuan Liu et al./Scientific Reports

Abstract of Measuring speaker–listener neural coupling with functional near infrared spectroscopy
The present study investigates brain-to-brain coupling, defined as inter-subject correlations in the hemodynamic response, during natural verbal communication. We used functional near-infrared spectroscopy (fNIRS) to record brain activity of 3 speakers telling stories and 15 listeners comprehending audio recordings of these stories. Listeners’ brain activity was significantly correlated with speakers’ with a delay. This between-brain correlation disappeared when verbal communication failed. We further compared the fNIRS and functional Magnetic Resonance Imaging (fMRI) recordings of listeners comprehending the same story and found a significant relationship between the fNIRS oxygenated-hemoglobin concentration changes and the fMRI BOLD in brain areas associated with speech comprehension. This correlation between fNIRS and fMRI was only present when data from the same story were compared between the two modalities and vanished when data from different stories were compared; this cross-modality consistency further highlights the reliability of the spatiotemporal brain activation pattern as a measure of story comprehension. Our findings suggest that fNIRS can be used for investigating brain-to-brain coupling during verbal communication in natural settings.
Billionaire Softbank Group Chairman and CEO Masayoshi Son revealed Monday (Feb. 27) at Mobile World Congress his plan to invest in the singularity. “In next 30 years [the singularity] will become a reality,” he said, TechCrunch reports.
“If superintelligence goes inside the moving device then the world, our lifestyle dramatically changes,” he said. “There will be many kinds. Flying, swimming, big, micro, run, 2 legs, 4 legs, 100 legs,” referring to robots. “I truly believe it’s coming, that’s why I’m in a hurry — to aggregate the cash, to invest.”
“Son said his personal conviction in the looming rise of billions of superintelligent robots both explains his acquisition of UK chipmaker ARM last year, and his subsequent plan to establish the world’s biggest VC fund,” noted TechCrunch — a new $100BN fund called the Softbank Vision Fund, announced last October.
TechCrunch said that despite additional contributors including Foxconn, Apple, Qualcomm and Oracle co-founder Larry Ellison’s family office, the fund has evidently not yet hit Son’s target of $100BN, so he used the keynote as a sales pitch for additional partners.
Addressing existential threats
“Son said his haste is partly down to a belief that superintelligent AIs can be used for ‘the goodness of humanity,’ going on to suggest that only AI has the potential to address some of the greatest threats to humankind’s continued existence — be it climate change or nuclear annihilation,” said TechCrunch.
“It will be so much more capable than us — what will be our job? What will be our life? We have to ask philosophical questions,” Son said. “Is it good or bad? I think this superintelligence is going to be our partner. If we misuse it, it’s a risk. If we use it in good spirits it will be our partner for a better life. So the future can be better predicted, people will live healthier, and so on.”
“With the coming of singularity, I believe we will benefit from new ideas and wisdom that people were previously incapable of thanks to big data and other analytics,” Son said on the Softbank Group website. “At some point I am sure we will see the birth of a ‘Super-intelligence’ that will contribute to humanity. This paradigm shift has only accelerated in recent years as both a worldwide and irreversible trend.”