A telescope's light-gathering power is fundamentally limited by the aperture (diameter) of its primary lens or mirror. The bigger the mirror or lens, the more light it gathers, allowing astronomers to detect fainter objects and to observe them more clearly. Image quality is further degraded by noise and atmospheric distortion.
The Swiss study uses “generative adversarial network” (GAN) machine-learning technology to go beyond this limit, pitting two neural networks against each other to produce increasingly realistic images. The researchers first train the network to “see” what galaxies look like (using blurred and sharp images of the same galaxies), and then ask it to automatically fix new blurred galaxy images, converting them to sharp ones.
The trained networks were able to recognize and reconstruct features that the telescope could not resolve, such as star-forming regions and dust lanes in galaxies. The scientists checked the reconstructed images against the original high-resolution images to test the method's performance, finding that it recovered features better than any technique used to date.
“We can start by going back to sky surveys made with telescopes over many years, see more detail than ever before, and, for example, learn more about the structure of galaxies,” said lead author Prof. Kevin Schawinski of ETH Zurich in Switzerland. “There is no reason why we can’t then apply this technique to the deepest images from Hubble, and the coming James Webb Space Telescope, to learn more about the earliest structures in the Universe.”
ETH Zurich hosts this work at space.ml, a cross-disciplinary astrophysics/computer-science initiative, where the code is available to the general public.
Abstract of Generative adversarial networks recover features in astrophysical images of galaxies beyond the deconvolution limit
Observations of astrophysical objects such as galaxies are limited by various sources of random and systematic noise from the sky background, the optical system of the telescope and the detector used to record the data. Conventional deconvolution techniques are limited in their ability to recover features in imaging data by the Shannon–Nyquist sampling theorem. Here, we train a generative adversarial network (GAN) on a sample of 4550 images of nearby galaxies at 0.01 < z < 0.02 from the Sloan Digital Sky Survey and conduct 10× cross-validation to evaluate the results. We present a method using a GAN trained on galaxy images that can recover features from artificially degraded images with worse seeing and higher noise than the original with a performance that far exceeds simple deconvolution. The ability to better recover detailed features such as galaxy morphology from low signal to noise and low angular resolution imaging data significantly increases our ability to study existing data sets of astrophysical objects as well as future observations with observatories such as the Large Synoptic Survey Telescope (LSST) and the Hubble and James Webb space telescopes.
Stanford University and Sandia National Laboratories researchers have developed an organic artificial synapse based on a new memristor (resistive memory device) design that mimics the way synapses in the brain learn. The new artificial synapse could lead to computers that better recreate the way the human brain processes information. It could also one day directly interface with the human brain.
The new artificial synapse is an electrochemical neuromorphic organic device (dubbed “ENODe”) — a mixed ionic/electronic design that is fundamentally different from existing and proposed resistive memory devices, which are limited by noise, high required write voltages, and other factors*, the researchers note in a paper published online Feb. 20 in Nature Materials.
Like a neural path in a brain being reinforced through learning, the artificial synapse is programmed by discharging and recharging it repeatedly. Through this training, the researchers have been able to predict, to within 1 percent uncertainty, what voltage will be required to bring the synapse to a specific electrical state and, once there, have it remain at that state.
“The working mechanism of ENODes is reminiscent of that of natural synapses, where neurotransmitters diffuse through the cleft, inducing depolarization due to ion penetration in the postsynaptic neuron,” the researchers explain in the paper. “In contrast, other memristive devices switch by melting materials at relatively high temperatures (PCMs) or by voltage-induced breakdown/filament formation and ion diffusion in dense oxide layers (FFMOs).”
The ENODe achieves significant energy savings** in two ways:
- Unlike a conventional computer, where you save your work to the hard drive before you turn it off, the artificial synapse can recall its programming without any additional actions or parts. Traditional computing requires separately processing information and then storing it in memory. Here, the processing creates the memory.
- When we learn, electrical signals are sent between neurons in our brain. The most energy is needed the first time a synapse is traversed. Every time afterward, the connection requires less energy. This is how synapses efficiently facilitate both learning something new and remembering what we’ve learned. The artificial synapse, unlike most other versions of brain-like computing, also fulfills these two tasks simultaneously, and does so with substantial energy savings.
“More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry,” said A. Alec Talin, distinguished member of technical staff at Sandia National Laboratories in Livermore, California, and co-senior author of the paper. “We’ve demonstrated a device that’s ideal for running these types of algorithms and that consumes a lot less power.”
A future brain-like computer with 500 states
Only one artificial synapse has been produced so far, but researchers at Sandia used 15,000 measurements to simulate how an array of them would work in a neural network. They tested the simulated network’s ability to recognize handwriting of digits 0 through 9. Tested on three datasets, the simulated array was able to identify the handwritten digits with an accuracy of between 93 and 97 percent.
This artificial synapse may one day be part of a brain-like computer, which could be especially useful for processing visual and auditory signals, as in voice-controlled interfaces and driverless cars, but without energy-consuming computer hardware.
This device is also well suited to the kind of signal identification and classification that traditional computers struggle to perform. Whereas digital transistors can be in only two states, such as 0 and 1, the researchers successfully programmed 500 states in the artificial synapse, which is useful for neuron-type computation models. In switching from one state to another, they used about one-tenth as much energy as a state-of-the-art computing system needs to move data from the processing unit to the memory.
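One plausible way a multi-state device like this enters a neural-network simulation is as a weight quantizer: every trained weight is snapped to the nearest available conductance state. This sketch assumes 500 evenly spaced states; the actual Sandia simulation was built from 15,000 device measurements, so its real state distribution would differ:

```python
import numpy as np

rng = np.random.default_rng(0)

# 500 evenly spaced conductance states, a stand-in for the measured device levels.
states = np.linspace(-1.0, 1.0, 500)

def program(weights):
    """Snap each ideal weight to the nearest available device state."""
    idx = np.abs(weights[:, None] - states[None, :]).argmin(axis=1)
    return states[idx]

w_ideal = rng.uniform(-1.0, 1.0, size=256)   # hypothetical trained weights
w_device = program(w_ideal)

# With 500 states over a 2-unit range, quantization error is at most half a step.
step = states[1] - states[0]
max_err = np.abs(w_ideal - w_device).max()
print(max_err <= step / 2 + 1e-12)
```

With only two states (a digital bit), the worst-case error would be 250 times larger, which is why a dense ladder of analog states matters for neuron-type computation.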
However, this is still about 10,000 times as much energy as the minimum a biological synapse needs in order to fire**. The researchers hope to attain neuron-level energy efficiency once they test the artificial synapse in smaller devices.
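The “about 10,000 times” figure follows directly from the energy numbers quoted in the footnotes:

```python
# Back-of-envelope check of the energy gap, using the footnoted figures:
# < 10 pJ per ENODe switching event versus ~1-100 fJ per biological synaptic event.
enode_energy_j = 10e-12     # ENODe: < 10 pJ (1,000-square-micrometer device)
synapse_energy_j = 1e-15    # brain: ~1 fJ, the low end of the quoted range
ratio = enode_energy_j / synapse_energy_j
print(round(ratio))
```

At the 100 fJ end of the biological range the gap shrinks to about 100×, which is why the researchers expect smaller devices to close most of the remaining distance.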
Linking to live organic neurons
Past efforts in this field have produced high-performance neural networks supported by artificially intelligent algorithms, but these depend on energy-consuming traditional computer hardware.
Every part of the device is made of inexpensive organic materials. These aren’t found in nature but they are largely composed of hydrogen and carbon and are compatible with the brain’s chemistry. Cells have been grown on these materials and they have even been used to make artificial pumps for neural transmitters. The switching voltages applied to train the artificial synapse (about 0.5 mV) are also the same as those that move through human neurons — about 1,000 times lower than the “write” voltage for a typical memristor.
That means it’s possible that the artificial synapse could communicate with live neurons, leading to improved brain-machine interfaces. The softness and flexibility of the device also lends itself to being used in biological environments.
This research was funded by the National Science Foundation, the Keck Faculty Scholar Funds, the Neurofab at Stanford, the Stanford Graduate Fellowship, Sandia’s Laboratory-Directed Research and Development Program, the U.S. Department of Energy, the Holland Scholarship, the University of Groningen Scholarship for Excellent Students, the Hendrik Muller National Fund, the Schuurman Schimmel-van Outeren Foundation, the Foundation of Renswoude (The Hague and Delft), the Marco Polo Fund, the Instituto Nacional de Ciência e Tecnologia/Instituto Nacional de Eletrônica Orgânica in Brazil, the Fundação de Amparo à Pesquisa do Estado de São Paulo and the Brazilian National Council.
* “A resistive memory device has not yet been demonstrated with adequate electrical characteristics to fully realize the efficiency and performance gains of a neural architecture. State-of-the-art memristors suffer from excessive write noise, write non-linearities, and high write voltages and currents. Reducing the noise and lowering the switching voltage significantly below 0.3 V (~10 kT) in a two-terminal device without compromising long-term data retention has proven difficult.” … Organic memristive devices have been recently proposed, but are limited by “the slow kinetics of ion diffusion through a polymer to retain their states or on charge storage in metal nanoparticles, which inherently limits performance and stability.” — Yoeri van de Burgt et al., Nature Materials

** ENODe switches at low voltage and energy (< 10 pJ for 1,000-square-micrometer devices), compared to an estimated ~1–100 fJ per synaptic event for the human brain.
Abstract of A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic computing
The brain is capable of massively parallel information processing while consuming only ~1–100 fJ per synaptic event. Inspired by the efficiency of the brain, CMOS-based neural architectures and memristors are being developed for pattern recognition and machine learning. However, the volatility, design complexity and high supply voltages for CMOS architectures, and the stochastic and energy-costly switching of memristors complicate the path to achieve the interconnectivity, information density, and energy efficiency of the brain using either approach. Here we describe an electrochemical neuromorphic organic device (ENODe) operating with a fundamentally different mechanism from existing memristors. ENODe switches at low voltage and energy (<10 pJ for 103 μm2 devices), displays >500 distinct, non-volatile conductance states within a ~1 V range, and achieves high classification accuracy when implemented in neural network simulations. Plastic ENODes are also fabricated on flexible substrates enabling the integration of neuromorphic functionality in stretchable electronic systems. Mechanical flexibility makes ENODes compatible with three-dimensional architectures, opening a path towards extreme interconnectivity comparable to the human brain.
Eating 800 grams a day (about ten portions*) of fruit or vegetables could reduce your chance of heart attack, stroke, cancer, and early death, scientists from Imperial College London conclude from a meta-analysis of 95 studies on fruit and vegetable intake.
The study, published in an open-access paper in the International Journal of Epidemiology, included 2 million people worldwide and assessed up to 43,000 cases of heart disease, 47,000 cases of stroke, 81,000 cases of cardiovascular disease, 112,000 cancer cases and 94,000 deaths.
About 7.8 million premature deaths worldwide could potentially be prevented every year if people ate this much fruit and vegetables, the researchers say.
Compared to not eating any fruits and vegetables, a daily intake of 200 grams (two and a half portions) was associated with a 16% reduced risk of heart disease, an 18% reduced risk of stroke, a 13% reduced risk of cardiovascular disease, a 4% reduction in cancer risk, and a 15% reduction in the risk of premature death.
However, a higher intake of 800 grams of fruits and vegetables a day was associated with a 24% reduced risk of heart disease, a 33% reduced risk of stroke, a 28% reduced risk of cardiovascular disease, a 13% reduced risk of total cancer,** and a 31% reduction in the risk of dying prematurely.***
The current UK guidelines suggest you eat at least five portions or 400 grams per day, but fewer than one in three UK adults are thought to even meet this target. The U.S. Health and Human Services/USDA guidelines use a different metric: “The recommended amount of vegetables in the Healthy U.S.-Style Eating Pattern at the 2,000-calorie level is 2½ cup-equivalents of vegetables per day and 2 cup-equivalents of fruit per day.”
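The gram-to-portion conversions used throughout the article reduce to simple arithmetic on the 80-gram portion defined in the footnote:

```python
# One portion of fruit or vegetables is 80 grams (see footnote: one small
# banana, apple, pear, or large mandarin).
PORTION_G = 80

# 200 g (the lowest dose studied), 400 g (UK "five a day"), 800 g (top dose).
for grams in (200, 400, 800):
    print(grams, "g =", grams / PORTION_G, "portions")
```

This recovers the figures in the text: 200 g is two and a half portions, the UK guideline of 400 g is five, and the 800 g associated with the largest risk reductions is ten.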
Foods that are best at disease prevention, according to the study
To prevent heart disease, stroke, cardiovascular disease, and early death: apples, pears, citrus fruits, salads, and green leafy vegetables such as spinach, lettuce and chicory, and cruciferous vegetables such as broccoli, cabbage and cauliflower.
To reduce cancer risk: green vegetables, such as spinach or green beans, yellow vegetables, such as peppers and carrots, and cruciferous vegetables.
Reasons for health benefits
So why do fruit and vegetables have such profound health benefits? According to Dagfinn Aune, PhD, lead author of the research, from the School of Public Health at Imperial: “Fruit and vegetables have been shown to reduce cholesterol levels, blood pressure, and to boost the health of our blood vessels and immune system. This may be due to the complex network of nutrients they hold. For instance they contain many antioxidants, which may reduce DNA damage, and lead to a reduction in cancer risk.”
He also noted that compounds called glucosinolates in cruciferous vegetables, such as broccoli, activate enzymes that may help prevent cancer. And fruit and vegetables may also have a beneficial effect on the naturally-occurring bacteria in our gut.
Most beneficial compounds can’t be easily replicated in a pill, he said: “Most likely it is the whole package of beneficial nutrients you obtain by eating fruits and vegetables that is crucial to health.
“This is why it is important to eat whole plant foods to get the benefit, instead of taking antioxidant or vitamin supplements, which have not been shown to reduce disease risk.”
In the paper, the researchers qualify these statements, noting that they assume the observed associations are causal (there could be other causes of improved health). The team did, however, take into account other factors, such as a person’s weight, smoking, physical activity levels, and overall diet.
“We need further research into the effects of specific types of fruits and vegetables and preparation methods of fruit and vegetables,” Aune suggested. “We also need more research on the relationship between fruit and vegetable intake with causes of death other than cancer and cardiovascular disease. However, it is clear from this work that a high intake of fruit and vegetables holds tremendous health benefits, and we should try to increase their intake in our diet.”
This project was funded by Olav og Gerd Meidel Raagholt’s Stiftelse for Medisinsk Forskning, the Liaison Committee between the Central Norway Regional Health Authority (RHA) and the Norwegian University of Science and Technology (NTNU), and the Imperial College National Institute of Health Research (NIHR) Biomedical Research Centre (BRC).
* A portion (80 grams) of fruit equals approximately one small banana, apple, pear or large mandarin; three heaped tablespoons of cooked vegetables such as spinach, peas, broccoli or cauliflower count as one portion.
** For cancer, no further reductions in risk were observed above 600 grams per day.
*** The team was not able to investigate intakes greater than 800 g a day. The team also did not find significant differences between raw and cooked vegetables in relation to early death, and noted that other specific fruits and vegetables, as well as preparation methods, may also play a role.
Abstract of Fruit and vegetable intake and the risk of cardiovascular disease, total cancer and all-cause mortality–a systematic review and dose-response meta-analysis of prospective studies
Background: Questions remain about the strength and shape of the dose-response relationship between fruit and vegetable intake and risk of cardiovascular disease, cancer and mortality, and the effects of specific types of fruit and vegetables. We conducted a systematic review and meta-analysis to clarify these associations.
Methods: PubMed and Embase were searched up to 29 September 2016. Prospective studies of fruit and vegetable intake and cardiovascular disease, total cancer and all-cause mortality were included. Summary relative risks (RRs) were calculated using a random effects model, and the mortality burden globally was estimated; 95 studies (142 publications) were included.
Results: For fruits and vegetables combined, the summary RR per 200 g/day was 0.92 [95% confidence interval (CI): 0.90–0.94, I2 = 0%, n = 15] for coronary heart disease, 0.84 (95% CI: 0.76–0.92, I2 = 73%, n = 10) for stroke, 0.92 (95% CI: 0.90–0.95, I2 = 31%, n = 13) for cardiovascular disease, 0.97 (95% CI: 0.95–0.99, I2 = 49%, n = 12) for total cancer and 0.90 (95% CI: 0.87–0.93, I2 = 83%, n = 15) for all-cause mortality. Similar associations were observed for fruits and vegetables separately. Reductions in risk were observed up to 800 g/day for all outcomes except cancer (600 g/day). Inverse associations were observed between the intake of apples and pears, citrus fruits, green leafy vegetables, cruciferous vegetables, and salads and cardiovascular disease and all-cause mortality, and between the intake of green-yellow vegetables and cruciferous vegetables and total cancer risk. An estimated 5.6 and 7.8 million premature deaths worldwide in 2013 may be attributable to a fruit and vegetable intake below 500 and 800 g/day, respectively, if the observed associations are causal.
Conclusions: Fruit and vegetable intakes were associated with reduced risk of cardiovascular disease, cancer and all-cause mortality. These results support public health recommendations to increase fruit and vegetable intake for the prevention of cardiovascular disease, cancer, and premature mortality.
Brain-computer interface advance allows paralyzed people to type almost as fast as some smartphone users
Stanford University researchers have developed a brain-computer interface (BCI) system that can enable people with paralysis* to type (using an on-screen cursor) at speeds and accuracy levels about three times higher than reported to date.
Simply by imagining their own hand movements, one participant was able to type 39 correct characters per minute (about eight words per minute); the other two participants averaged 6.3 and 2.7 words per minute — all without auto-complete assistance (with which the rates could be considerably higher).
Those are communication rates that people with arm and hand paralysis would also find useful, the researchers suggest. “We’re approaching the speed at which you can type text on your cellphone,” said Krishna Shenoy, PhD, professor of electrical engineering, a co-senior author of the study, which was published in an open-access paper online Feb. 21 in eLife.
Braingate and beyond
The three study participants used a brain-computer interface called the “BrainGate Neural Interface System.” On KurzweilAI, we first discussed BrainGate in 2011, followed by a 2012 clinical trial that allowed a paralyzed patient to control a robot.
The new research, led by Stanford, takes the BrainGate technology significantly further**. Participants can now move a cursor (just by thinking about a hand movement) on a computer screen that displays the letters of the alphabet, and can “point and click” on letters, computer-mouse-style, to type letters and sentences.
The new BCI uses a tiny silicon chip, just over one-sixth of an inch square, with 100 electrodes that penetrate the brain to a depth of about the thickness of a quarter and tap into the electrical activity of individual nerve cells in the motor cortex.
As the participant thinks of a specific hand movement (such as pointing at or clicking on a letter), neural electrical activity is recorded using 96-channel silicon microelectrode arrays implanted in the hand area of the motor cortex. These signals are filtered to extract multiunit spiking activity and high-frequency field potentials, then decoded (using two algorithms) to provide “point-and-click” control of a computer cursor.
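The decoding step can be illustrated with a minimal linear readout. This is a sketch only: the actual system uses a Kalman-filter-based velocity decoder plus a separate classifier for the “click,” and all the numbers below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration data: binned spike counts from 96 channels, paired
# with the participant's intended 2-D cursor velocity in each time bin.
n_bins, n_channels = 500, 96
true_tuning = rng.standard_normal((n_channels, 2))    # per-channel tuning
vel = rng.standard_normal((n_bins, 2))                # intended velocity
spikes = 10.0 + vel @ true_tuning.T + 2.0 * rng.standard_normal((n_bins, n_channels))

# Calibration: fit a linear readout mapping mean-centered spike counts to
# velocity (least squares as the simplest stand-in for the Kalman decoder).
X = spikes - spikes.mean(axis=0)
W, *_ = np.linalg.lstsq(X, vel, rcond=None)

# Decoding: each bin of spike counts becomes a cursor velocity command.
decoded = X @ W
r = np.corrcoef(decoded[:, 0], vel[:, 0])[0, 1]
print(r > 0.9)
```

The key design point carries over to the real system: with ~100 channels, even weak per-channel tuning combines into a reliable velocity estimate, which is what makes smooth point-and-click control feasible.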
The team next plans to adapt the system so that brain-computer interfaces can control commercial computers, phones, and tablets — perhaps extending out to the internet.
Beyond that, Shenoy predicted that a self-calibrating, fully implanted wireless BCI system with no required caregiver assistance and no “cosmetic impact” would be available five to 10 years from now (“closer to five”).
Perhaps a future wireless, noninvasive version could let anyone simply think to select letters, words, ideas, and images — replacing the mouse and finger touch — along the lines of Elon Musk’s neural lace concept?
* Millions of people with paralysis reside in the U.S.
** The study’s results are the culmination of the long-running multi-institutional BrainGate consortium, which includes scientists at Massachusetts General Hospital, Brown University, Case Western University, and the VA Rehabilitation Research and Development Center for Neurorestoration and Neurotechnology in Providence, Rhode Island. The study was funded by the National Institutes of Health, the Stanford Office of Postdoctoral Affairs, the Craig H. Neilsen Foundation, the Stanford Medical Scientist Training Program, Stanford BioX-NeuroVentures, the Stanford Institute for Neuro-Innovation and Translational Neuroscience, the Stanford Neuroscience Institute, Larry and Pamela Garlick, Samuel and Betsy Reeves, the Howard Hughes Medical Institute, the U.S. Department of Veterans Affairs, the MGH-Dean Institute for Integrated Research on Atrial Fibrillation and Stroke and Massachusetts General Hospital.
Stanford | Stanford researchers develop brain-controlled typing for people with paralysis
Abstract of High performance communication by people with paralysis using an intracortical brain-computer interface
Brain-computer interfaces (BCIs) have the potential to restore communication for people with tetraplegia and anarthria by translating neural activity into control signals for assistive communication devices. While previous pre-clinical and clinical studies have demonstrated promising proofs-of-concept (Serruya et al., 2002; Simeral et al., 2011; Bacher et al., 2015; Nuyujukian et al., 2015; Aflalo et al., 2015; Gilja et al., 2015; Jarosiewicz et al., 2015; Wolpaw et al., 1998; Hwang et al., 2012; Spüler et al., 2012; Leuthardt et al., 2004; Taylor et al., 2002; Schalk et al., 2008; Moran, 2010; Brunner et al., 2011; Wang et al., 2013; Townsend and Platsko, 2016; Vansteensel et al., 2016; Nuyujukian et al., 2016; Carmena et al., 2003; Musallam et al., 2004; Santhanam et al., 2006; Hochberg et al., 2006; Ganguly et al., 2011; O’Doherty et al., 2011; Gilja et al., 2012), the performance of human clinical BCI systems is not yet high enough to support widespread adoption by people with physical limitations of speech. Here we report a high-performance intracortical BCI (iBCI) for communication, which was tested by three clinical trial participants with paralysis. The system leveraged advances in decoder design developed in prior pre-clinical and clinical studies (Gilja et al., 2015; Kao et al., 2016; Gilja et al., 2012). For all three participants, performance exceeded previous iBCIs (Bacher et al., 2015; Jarosiewicz et al., 2015) as measured by typing rate (by a factor of 1.4–4.2) and information throughput (by a factor of 2.2–4.0). This high level of performance demonstrates the potential utility of iBCIs as powerful assistive communication devices for people with limited motor function.
NASA will hold a news conference at 1 p.m. EST Wednesday, Feb. 22, to present new findings on exoplanets — planets that orbit stars other than our sun. As of Feb. 21, NASA has discovered and confirmed 3,440 exoplanets.
The briefing participants are Thomas Zurbuchen, associate administrator of the Science Mission Directorate at NASA Headquarters in Washington; Michael Gillon, astronomer at the University of Liege in Belgium; Sean Carey, manager of NASA’s Spitzer Science Center at Caltech/IPAC, Pasadena, California; Nikole Lewis, astronomer at the Space Telescope Science Institute in Baltimore; and Sara Seager, professor of planetary science and physics at Massachusetts Institute of Technology, Cambridge. Details of the findings are embargoed by the journal Nature until 1 p.m.
Interestingly, Seager, who studies biosignatures in exoplanet atmospheres, has suggested that two inhabited planets could reasonably turn up during the next decade, based on her modified version of the Drake equation, Space.com notes. Her equation focuses on the search for planets with biosignature gases — gases produced by life that can accumulate in a planet’s atmosphere to levels detectable with remote space telescopes.
“If we can identify another Earth-like planet, it comes full circle, from thinking that everything revolves around our planet to knowing that there are lots of other Earths out there,” she has stated.
The event will air live on NASA Television and will be live-streamed. The public may ask questions during the briefing on Twitter using the hashtag #askNASA. A Reddit AMA (Ask Me Anything) about exoplanets will be held following the briefing at 3 p.m., with scientists available to answer questions in English and Spanish.
Imagine a hybrid silicon-molecular computer that uses one thousand times less energy or a cell phone battery that lasts weeks at a time.
University of Alberta scientists, headed by University of Alberta physics professor Robert Wolkow, have taken a major step in that direction by visualizing and geometrically patterning silicon at the atomic level — using an innovative atomic-force microscopy* (AFM) technique. The goal: chip technology that performs dramatically better than today’s CMOS architecture.
Visualizing chemical bonds at atomic resolution was first achieved by IBM Zurich scientists in 2009, when they imaged the pentacene molecule on copper. But imaging silicon is a problem: the sharp tip damages the fragile silicon surface, the researchers note in an open-access paper published in the February 13, 2017 issue of Nature Communications.
To avoid damaging the silicon surface, the researchers created the first hydrogen-covered AFM tip, making it possible to manipulate silicon atoms. It was “a bit like Goldilocks,” PhD student and co-author Taleana Huff explained to KurzweilAI. “There is a sweet-spot region where you are probing the surface without interacting with it. Getting close enough to the surface with just the right parameters allows you to see these bonds materialize.
“If you get too close though, you end up transferring atoms to the surface or, conversely, to the tip, ruining the experiment. A lot of tech and knowledge goes into getting all these settings just right, including a powerful new computational approach that analyzes and verifies the identity of the atoms and bonds.”
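The “sweet spot” Huff describes can be illustrated with a textbook tip-sample force model. This uses a Lennard-Jones curve purely for illustration; the paper’s actual tip-surface simulations use a density-functional tight-binding method, not this toy potential:

```python
import numpy as np

# Lennard-Jones tip-sample force, F(z) = -dU/dz for U = 4*eps*((s/z)^12 - (s/z)^6).
# Positive F = repulsive (tip too close), negative F = attractive (imaging regime).
sigma, eps = 1.0, 1.0   # reduced units; illustrative only
z = np.linspace(0.95, 3.0, 2000)
F = 24 * eps / sigma * (2 * (sigma / z) ** 13 - (sigma / z) ** 7)

# The point of maximum attraction marks the inner edge of the usable window:
# just outside it, the tip senses the surface strongly but non-destructively;
# closer in, the force turns repulsive and atoms can transfer to tip or surface.
z_sweet = z[np.argmin(F)]
print(round(z_sweet, 2))
```

Analytically the most attractive point sits at z = (26/7)^(1/6) · sigma ≈ 1.24 sigma, which is what the grid search above recovers — a compact way to see why the imaging window is so narrow.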
Hydrogen-terminated silicon for ultra-fast, ultra-low-power technology
“We see hydrogen-terminated silicon as the platform for a whole new paradigm of efficient and fast silicon-based electronics,” Huff said. “Now that we understand the surface intimately and have these powerful tools and the experience, the next step is to start using the AFM to look at computational elements made using quantum dots [nanoscale semiconductor particles], which we create by removing hydrogen atoms from the silicon surface. When we cleverly pattern them geometrically, these atomic silicon quantum dots can be used to make very fast and incredibly low-power computational patterns.”
The long-term goal is making ultra-fast and ultra-low-power silicon-based circuits that potentially consume one thousand times less power than what is currently on the market, according to the researchers, along with novel quantum applications.
* Typical atomic force microscope (AFM) setup
Wolkow Lab | An animation illustrating patterning and imaging electronic circuits at the atomic level. It shows the tip and surface atoms’ relaxation during calculations of a part of the image simulation at small tip-surface distance. The bending and rotation of bonds is visible, giving a sense of the interactions and atomic relaxations involved.
UAlbertaScience | Less is more for atomic-scale manufacturing
This animation represents an electrical current being switched on and off. Remarkably, the current is confined to a channel that is just one atom wide. Also, the switch is made of just one atom. When the atom in the center feels an electric field tugging at it, it loses its electron. Once that electron is lost, the many electrons in the body of the silicon (to the left) have a clear passage to flow through. When the electric field is removed, an electron gets trapped in the central atom, switching the current off. This represents the latest work out of Robert Wolkow’s lab at the University of Alberta.

Abstract of Indications of chemical bond contrast in AFM images of a hydrogen-terminated silicon surface
The origin of bond-resolved atomic force microscope images remains controversial. Moreover, most work to date has involved planar, conjugated hydrocarbon molecules on a metal substrate thereby limiting knowledge of the generality of findings made about the imaging mechanism. Here we report the study of a very different sample; a hydrogen-terminated silicon surface. A procedure to obtain a passivated hydrogen-functionalized tip is defined and evolution of atomic force microscopy images at different tip elevations are shown. At relatively large tip-sample distances, the topmost atoms appear as distinct protrusions. However, on decreasing the tip-sample distance, features consistent with the silicon covalent bonds of the surface emerge. Using a density functional tight-binding-based method to simulate atomic force microscopy images, we reproduce the experimental results. The role of the tip flexibility and the nature of bonds and false bond-like features are discussed.
For the past several years, researchers at the University of Illinois at Urbana-Champaign have reverse-engineered native biological tissues and organs — creating tiny walking “bio-bots” powered by muscle cells and controlled with electrical and optical pulses.
Now, in an open-access cover paper in Nature Protocols, the researchers are sharing a protocol with engineering details for their current generation of millimeter-scale soft robotic bio-bots*.
Using 3D-printed skeletons coupled to tissue-engineered skeletal muscle actuators, these devices can drive locomotion across 2D surfaces, and could one day be used for studies of muscle development and disease, high-throughput drug testing, and dynamic implants, among other applications.
The future of bio-bots
The researchers envision future generations of bio-bots as biological building blocks that lead to the machines of the future. The bio-bots would integrate multiple cell and tissue types, including neuronal networks for sensing and processing, and vascular networks for delivery of nutrients and other biochemical factors. They might also have some of the higher-order properties of biological materials, such as self-organization and self-healing.
“These next iterations of biohybrid machines could, for example, be designed to sense chemical toxins, locomote toward them, and neutralize them through cell-secreted factors. Such a functionality could have broad relevance in medical diagnostics and targeted therapeutics in vivo, or even be extended to environmental use as a method of cleaning pathogens from public water supplies,” the researchers note in the paper.
“This protocol is essentially intended to be a one-stop reference for any scientist around the world who wants to replicate the results we showed in our PNAS 2016 and PNAS 2014 papers, and give them a framework for building their own bio-bots for a variety of applications,” said Bioengineering Professor Rashid Bashir**, who heads the bio-bots research group.
Bashir’s group has been a pioneer in designing and building bio-bots, less than a centimeter in size, made of flexible 3D printed hydrogels and living cells. In 2012, the group demonstrated bio-bots that could “walk” on their own, powered by beating heart cells from rats. In 2014, they switched to muscle cells controlled with electrical pulses, giving researchers unprecedented command over their function.
Abstract of A modular approach to the design, fabrication, and characterization of muscle-powered biological machines
NewsAtIllinois | Light illuminates the way for bio-bots
Biological machines consisting of cells and biomaterials have the potential to dynamically sense, process, respond, and adapt to environmental signals in real time. As a first step toward the realization of such machines, which will require biological actuators that can generate force and perform mechanical work, we have developed a method of manufacturing modular skeletal muscle actuators that can generate up to 1.7 mN (3.2 kPa) of passive tension force and 300 μN (0.56 kPa) of active tension force in response to external stimulation. Such millimeter-scale biological actuators can be coupled to a wide variety of 3D-printed skeletons to power complex output behaviors such as controllable locomotion. This article provides a comprehensive protocol for forward engineering of biological actuators and 3D-printed skeletons for any design application. 3D printing of the injection molds and skeletons requires 3 h, seeding the muscle actuators takes 2 h, and differentiating the muscle takes 7 d.
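The paired force and stress figures in the abstract can be cross-checked with a quick calculation, assuming the usual relation stress = force / cross-sectional area (the area itself is not quoted in the excerpt, so this is an illustration, not a figure from the paper):

```python
# Plausibility check on the actuator figures above, assuming
# stress = force / cross-sectional area.

def implied_area_mm2(force_n, stress_pa):
    """Cross-sectional area (in mm^2) implied by a force/stress pair."""
    return force_n / stress_pa * 1e6  # convert m^2 to mm^2

passive = implied_area_mm2(1.7e-3, 3.2e3)   # 1.7 mN at 3.2 kPa
active = implied_area_mm2(300e-6, 0.56e3)   # 300 uN at 0.56 kPa
print(f"passive: {passive:.2f} mm^2, active: {active:.2f} mm^2")
```

Both pairs imply a cross-section of roughly 0.53 mm², internally consistent and in line with the millimeter-scale actuators the abstract describes.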
Hiroshima University researchers and associates have developed a terahertz* (THz) transmitter capable of transmitting digital data over a single channel at a speed of 105 gigabits per second (Gbps), and demonstrated the technology at the International Solid-State Circuits Conference (ISSCC) 2017 conference last week.
For perspective, that’s more than 100 times faster than the fastest (1 Gbps) internet connection in the U.S., and more than 3,000 times faster than the 31 Mbps available to the average U.S. household in 2014, according to an FCC report. It’s also at least ten times faster than the fastest rates expected from fifth-generation (5G) mobile networks in metropolitan areas around 2020.
Major uses: faster in-flight and metropolitan internet, high-frequency trading
Applications of this forthcoming THz technology include higher-speed in-flight network connection speeds via satellite, fast download of videos and other large files for mobile devices, and ultrafast wireless links between base stations, according to Hiroshima University professor Minoru Fujishima.
An important business application is faster high-frequency trading, which requires minimal latency (delay). Recently, the time it takes to execute these trades has gone from milliseconds (thousandths of a second) to microseconds (millionths of a second), as KurzweilAI has explained. However, long-distance fiber-optic connections (currently used for long-distance trading) add significant latency: light travels about 50% faster in a vacuum than through glass fiber, whereas microwaves traveling through air suffer less than a 1% speed reduction.
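The latency point can be made concrete with a back-of-envelope calculation. The path length and refractive indices below (silica fiber n ≈ 1.47, air n ≈ 1.0003, a 1,200 km route) are illustrative assumptions, not figures from the article:

```python
# Back-of-envelope latency comparison for the fiber-vs-air point above.
# A signal in a medium of refractive index n travels at c/n, so the
# one-way delay over distance d is d*n/c.

C = 299_792_458.0  # speed of light in vacuum, m/s

def one_way_latency_ms(distance_km, refractive_index):
    """One-way propagation delay in milliseconds."""
    return distance_km * 1e3 * refractive_index / C * 1e3

fiber = one_way_latency_ms(1200, 1.47)    # silica fiber, ~5.9 ms
air = one_way_latency_ms(1200, 1.0003)    # wireless through air, ~4.0 ms
print(f"fiber: {fiber:.2f} ms, air: {air:.2f} ms")
```

Over this illustrative route, a wireless link shaves almost 2 ms off the one-way delay, which is an eternity at high-frequency-trading timescales.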
The National Institute of Information and Communications Technology and Panasonic Corporation are also partners in this research.
* Terahertz, a frequency range that is 1,000 times higher than gigahertz (10^12 Hz), actually starts at 100 GHz (0.1 THz). The researchers transmitted in this unregulated THz band — a vast new frequency resource expected to be used for future ultrahigh-speed wireless communications — using the frequency range from 290 GHz to 315 GHz. The full range of frequencies in the THz band (275 GHz to 450 GHz) is currently unallocated, but is expected to be discussed at the World Radiocommunication Conference 2019.
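The figures above imply a notable spectral efficiency. The sketch below is a back-of-envelope check (105 Gb/s carried in the quoted 290–315 GHz range), not a statement of the actual modulation parameters used:

```python
# Rough spectral efficiency implied by the quoted figures:
# data rate divided by occupied bandwidth.

data_rate_gbps = 105.0
bandwidth_ghz = 315.0 - 290.0   # occupied range quoted above, 25 GHz

spectral_efficiency = data_rate_gbps / bandwidth_ghz  # (bits/s)/Hz
print(f"{spectral_efficiency:.1f} bits/s/Hz")
```

An efficiency above 4 bits/s/Hz is well beyond what simple on-off keying can deliver, which is why the abstracts below emphasize multibit QAM capability.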
Abstract of A 105Gb/s 300GHz CMOS Transmitter
“High speed” in communications often means “high data-rate” and fiber-optic technologies have long been ahead of wireless technologies in that regard. However, an often overlooked definite advantage of wireless links over fiber-optic links is that waves travel at the speed of light c, which is about 50% faster than in optical fibers as shown in Fig. 17.9.1 (top left). This “minimum latency” is crucial for applications requiring real-time responses over a long distance, including high-frequency trading. Further opportunities and new applications might be created if the absolute minimum latency and fiber-optic data-rates are put together. (Sub-)THz frequencies have an extremely broad atmospheric transmission window with manageable losses as shown in Fig. 17.9.1 (top right) and will be ideal for building light-speed links supporting fiber-optic data-rates. This paper presents a 105Gb/s 300GHz transmitter (TX) fabricated using a 40nm CMOS process.
Abstract of A 300GHz 40nm CMOS transmitter with 32-QAM 17.5Gb/s/ch capability over 6 channels
The vast unallocated frequency band lying above 275GHz offers enormous potential for ultrahigh-speed wireless communication. An overall bandwidth that could be allocated for multi-channel communication can easily be several times the 60GHz unlicensed bandwidth of 9GHz. We present a 300GHz transmitter (TX) in 40nm CMOS, capable of 32-quadrature amplitude modulation (QAM) 17.5Gb/s/ch signal transmission. It can cover the frequency range from 275 to 305GHz with 6 channels as shown at the top of Fig. 20.1.1. Figure 20.1.1 also lists possible THz TX architectures, based on recently reported above-200GHz TXs. The choice of architecture depends very much on the transistor unity-power-gain frequency fmax. If the fmax is sufficiently higher than the carrier frequency, the ordinary power amplifier (PA)-last architecture (Fig. 20.1.1, top row of the table) is possible and preferable [1-3], although the presence of a PA is, of course, not a requirement [4,5]. If, on the other hand, the fmax is comparable to or lower than the carrier frequency as in our case, a PA-less architecture must be adopted. A typical such architecture is the frequency multiplier-last architecture (Fig. 20.1.1, middle row of the table). For example, a 260GHz quadrupler-last on-off keying (OOK) TX and a 434GHz tripler-last amplitude-shift keying (ASK) TX were reported. A drawback of this architecture is the inefficient bandwidth utilization due to signal bandwidth spreading. Another drawback is that the use of multibit digital modulation is very difficult, if not impossible. An exception to this is the combination of quadrature phase-shift keying (QPSK) and frequency tripling. When a QPSK-modulated intermediate frequency (IF) signal undergoes frequency tripling, the resulting signal constellation remains that of QPSK with some symbol permutation. Such a tripler-last 240GHz QPSK TX was reported. However, a 16-QAM constellation, for example, would suffer severe distortion by frequency tripling.
If the 300GHz band is to be seriously considered for a platform for ultrahigh-speed wireless communication, QAM-capability will be a requisite.
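The QPSK-tripling observation in the abstract above can be checked numerically. The sketch below models an ideal frequency tripler as a cubing nonlinearity (s → s³, so phases triple and amplitudes cube); real triplers are messier, so treat this as an idealization:

```python
# Tripling a unit-amplitude QPSK symbol lands back on a QPSK
# constellation point (with a permutation), while a multi-amplitude
# (QAM-like) symbol has its amplitude cubed and is distorted.
import cmath

# Unit-amplitude QPSK constellation: phases 45, 135, 225, 315 degrees
qpsk = [cmath.exp(1j * (cmath.pi / 4 + k * cmath.pi / 2)) for k in range(4)]

# Every tripled QPSK symbol coincides with some original QPSK point:
for s in qpsk:
    assert any(abs(s ** 3 - q) < 1e-9 for q in qpsk)

# A symbol with non-unit amplitude (as in 16-QAM) is not preserved:
p = 0.5 * cmath.exp(1j * cmath.pi / 4)
assert abs(abs(p ** 3) - 0.125) < 1e-9  # amplitude 0.5 becomes 0.125
```

This is exactly why the QPSK/tripler combination works while a 16-QAM constellation, with its multiple amplitude levels, suffers severe distortion.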
York University scientists have created the first in vitro (lab) 3D heart tissue made from three different types of cardiac cells that beat in synchronized harmony. It may lead to better understanding of cardiac health and improved treatments.*
The researchers constructed the free-beating heart tissue from three rat cell types: contractile cardiac muscle cells, connective tissue cells, and vascular cells. No external scaffold was used; the cells were the only building blocks of the generated cardiac tissue. The researchers believe this is the first 3D in vitro cardiac tissue with three cell types that can beat together as one entity, rather than at different intervals, with high cell density and efficient cell contacts, and without requiring external electrical stimulation.
The substance used to stick the cells together (ViaGlue) may also provide researchers with tools to create and test 3D in vitro cardiac tissue in their own labs to study heart disease and issues with transplantation.
“This breakthrough will allow better and earlier drug testing, and potentially eliminate harmful or toxic medications sooner,” said York U chemistry Professor Muhammad Yousaf.
For 2D or 3D cardiac tissue to be functional, it needs high cellular density, and the cells must be in contact to facilitate synchronized beating, according to the researchers. The 3D cardiac tissue was created at a millimeter scale, but larger versions could be made, said Yousaf, who has founded a start-up company, OrganoLinX, to commercialize the ViaGlue reagent and to provide custom 3D tissues on demand.
“Production of 3-dimensional artificial cardiac tissues for fundamental studies of heart disease, transplantation, and evaluation of drug toxicity is an important and intense area of research,” the researchers note in a paper in open-access Nature Scientific Reports.
* Cardiovascular-associated diseases are the leading cause of death globally and are responsible for 40 per cent of deaths in North America, according to a 2011 report from the American Heart Association.
Abstract of Scaffold Free Bio-orthogonal Assembly of 3-Dimensional Cardiac Tissue via Cell Surface Engineering
York University | York U makes a 3D heart beat as one
There has been tremendous interest in constructing in vitro cardiac tissue for a range of fundamental studies of cardiac development and disease and as a commercial system to evaluate therapeutic drug discovery prioritization and toxicity. Although there has been progress towards studying 2-dimensional cardiac function in vitro, there remain challenging obstacles to generate rapid and efficient scaffold-free 3-dimensional multiple cell type co-culture cardiac tissue models. Herein, we develop a programmed rapid self-assembly strategy to induce specific and stable cell-cell contacts among multiple cell types found in heart tissue to generate 3D tissues through cell-surface engineering based on liposome delivery and fusion to display bio-orthogonal functional groups from cell membranes. We generate, for the first time, a scaffold free and stable self assembled 3 cell line co-culture 3D cardiac tissue model by assembling cardiomyocytes, endothelial cells and cardiac fibroblast cells via a rapid inter-cell click ligation process. We compare and analyze the function of the 3D cardiac tissue chips with 2D co-culture monolayers by assessing cardiac specific markers, electromechanical cell coupling, beating rates and evaluating drug toxicity.
A new set of machine-learning algorithms developed by researchers at the University of Toronto Scarborough can generate 3D structures of nanoscale protein molecules that could not be achieved in the past. The algorithms may revolutionize the development of new drug therapies for a range of diseases, and may even lead to a better understanding of how life works at the atomic level, the researchers say.
Drugs work by binding to a specific protein molecule and changing the protein’s 3D shape, which alters the way the protein works inside the body. The ideal drug is designed in a shape that will only bind to a specific protein or group of proteins involved in a disease, eliminating the side effects that occur when drugs bind to other proteins in the body.
A significant computational problem
Since proteins are tiny — about 1 to 100 nanometers — even smaller than the shortest wavelength of visible light, they can’t be seen directly without using sophisticated techniques like electron cryomicroscopy (cryo-EM). Cryo-EM uses high-power microscopes to take tens of thousands of low-resolution images of a frozen protein sample from different positions.
The computational problem is to then piece together the correct high-resolution 3D structure from these 2D images.
Existing techniques take several days or even weeks to generate a 3D structure on a cluster of computers, requiring as much as 500,000 CPU hours, according to the researchers. Also, existing techniques often generate incorrect structures unless an expert user provides an accurate guess of the molecule being studied.
New high-speed, deep-learning algorithms
That’s where the new set of algorithms* comes in. It reconstructs 3D structures of protein molecules using these images. “Our approach solves some of the major problems in terms of speed and number of structures you can determine,” says Professor David Fleet, chair of the Computer and Mathematical Sciences Department at U of Toronto Scarborough.
The algorithms could significantly aid the development of new drugs because they provide a faster, more efficient means of arriving at the correct protein structure.
The new approach, called cryoSPARC and developed by the team’s startup, Structura Biotechnology Inc., eliminates the need for that prior knowledge and performs the computations in minutes on a single computer, using a standalone graphics processing unit (GPU)-accelerated software package, according to the researchers.
The research was published in the current edition of the journal Nature Methods. It received funding from the Natural Sciences and Engineering Research Council of Canada (NSERC). The new cryo-EM platform is already being used in labs across North America, the researchers note.
* “We use an SGD [stochastic gradient descent] optimization scheme to quickly identify one or several low-resolution 3D structures that are consistent with a set of observed images. This algorithm allows for ab initio heterogeneous structure determination with no prior model of the molecule’s structure. Once approximate structures are determined, a branch-and-bound algorithm for image alignment helps rapidly refine structures to high resolution. The speed and robustness of these approaches allow structure determination in a matter of minutes or hours on a single inexpensive desktop workstation. … SGD was popularized as a key tool in deep learning for the optimization of nonconvex functions, and it results in near human-level performance in tasks like image and speech recognition.” — Ali Punjani et al./Nature Methods
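The stochastic-gradient-descent pattern the footnote refers to — each step using the gradient from a small random subset of observations rather than the full data set — can be illustrated with a toy problem. The sketch below is a minimal least-squares fit, not the cryoSPARC reconstruction itself:

```python
# Toy SGD: recover the slope of y = 3*x from noise-free samples,
# updating the model from one randomly chosen observation per step.
import random

random.seed(0)

# Synthetic observations of y = 3*x, with x in [0, 1)
data = [(i / 100.0, 3.0 * i / 100.0) for i in range(100)]

w = 0.0    # model parameter for y_hat = w * x
lr = 0.5   # learning rate
for _ in range(500):
    x, y = random.choice(data)       # "mini-batch" of one observation
    grad = 2.0 * (w * x - y) * x     # gradient of (w*x - y)^2 w.r.t. w
    w -= lr * grad

print(round(w, 2))  # recovers the true slope, 3.0
```

The appeal in the cryo-EM setting is the same as in deep learning: each cheap, noisy step still moves the model toward structures consistent with the observed images, without touching the full data set on every iteration.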
University of Toronto Scarborough | New algorithms may revolutionize drug discoveries and our understanding of life
Single-particle electron cryomicroscopy (cryo-EM) is a powerful method for determining the structures of biological macromolecules. With automated microscopes, cryo-EM data can often be obtained in a few days. However, processing cryo-EM image data to reveal heterogeneity in the protein structure and to refine 3D maps to high resolution frequently becomes a severe bottleneck, requiring expert intervention, prior structural knowledge, and weeks of calculations on expensive computer clusters. Here we show that stochastic gradient descent (SGD) and branch-and-bound maximum likelihood optimization algorithms permit the major steps in cryo-EM structure determination to be performed in hours or minutes on an inexpensive desktop computer. Furthermore, SGD with Bayesian marginalization allows ab initio 3D classification, enabling automated analysis and discovery of unexpected structures without bias from a reference map. These algorithms are combined in a user-friendly computer program named cryoSPARC.