Updates from August 2019

  • richardmitnick 3:11 pm on August 22, 2019 Permalink | Reply
    Tags: "Revealing the Intimate Lives of MASSIVE Galaxies", , , , ,   

    From Gemini Observatory: “Revealing the Intimate Lives of MASSIVE Galaxies” 

    From Gemini Observatory

    August 22, 2019

    Every galaxy has a story, and every galaxy has been many others in the past (unlike for humans, this is not purely metaphorical, as galaxies grow via hierarchical assembly). Generally speaking, the most massive galaxies have led the most interesting lives, often within teeming galactic metropolises where they are subject to frequent interactions with assorted neighbors. These interactions influence the structure and motions of the stars, gas, and dark matter that make up the galaxies. They also affect the growth of the supermassive black holes at the galaxies’ centers.

    Although the detailed life stories of most galaxies will remain forever uncertain, the key thematic elements may be surmised in various ways. A particularly powerful probe of a galaxy’s dynamical structure is called integral field spectroscopy (IFS), which dissects a galaxy’s light at each point within the spectrograph’s field of view. In this way, it is possible to construct a map of the motions of the stars within the galaxy and infer the distribution of the mass, both visible and invisible. IFS observations of the outskirts of a galaxy can provide insight into its global dynamics and past interactions, while IFS data on the innermost region can measure the mass of the supermassive black hole and the motions of the stars in its vicinity.

    The MASSIVE Galaxy Survey, led by Chung-Pei Ma of the University of California, Berkeley, is a major effort to uncover the internal structures and formation histories of the most massive galaxies within 350 million light years of our Milky Way. A recent study by the MASSIVE team presents high angular resolution IFS observations of 20 high-mass galaxies obtained with GMOS at Gemini North, combined with wide-field IFS data on the same galaxies from the Harlan J. Smith 2.7-meter Telescope at McDonald Observatory in Texas.

    GEMINI/North GMOS

    NOAO Gemini North on Mauna Kea, Hawaii, USA, Altitude 4,213 m (13,822 ft)

    U Texas at Austin McDonald Observatory Harlan J. Smith 2.7-meter Telescope, Altitude 2,026 m (6,647 ft)

    The study, led by Berkeley graduate student Irina Ene, appears in the June issue of The Astrophysical Journal.

    The accompanying figure shows example maps of four indicators, or “moments” (called v, σ, h3, and h4), of the stellar motions within two galaxies in the MASSIVE survey. The maps, based on the GMOS IFS data, cover the central regions of the galaxies. The figure also shows graphs of how these indicators vary with distance from the centers of these galaxies. Although both galaxies exhibit ordered central rotation, they are strikingly different in how the motions of the stars vary within the galaxy. Interestingly, for galaxies in the MASSIVE Survey, the directions of the motions of the stars in the central regions are often unaligned with the motions at large radius. This indicates complex and diverse merger histories.

    Figure caption. Example distributions of the first four velocity “moments” (called v, σ, h3, and h4) measured from the GMOS-N IFS data for two of the MASSIVE survey galaxies. For each galaxy, the top row shows two-dimensional maps, while the bottom row shows two-sided radial profiles from Gemini/GMOS-N (magenta circles) and McDonald Observatory (green squares) data. For more information, see the study by Berkeley graduate student Irina Ene.
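
    For readers who want to see what these “moments” mean in practice: the four numbers are the coefficients of the Gauss-Hermite expansion of the line-of-sight velocity distribution (LOSVD), the standard parameterization in stellar kinematics. Below is a minimal Python sketch of that expansion; the numerical values at the end are illustrative, not taken from the study.

    ```python
    import numpy as np

    def losvd_gauss_hermite(v, V, sigma, h3, h4):
        """Gauss-Hermite line-of-sight velocity distribution.

        v     : velocities at which to evaluate the profile [km/s]
        V     : mean velocity (rotation), the first moment   [km/s]
        sigma : velocity dispersion, the second moment       [km/s]
        h3    : asymmetric (skewness-like) deviation from a Gaussian
        h4    : symmetric (kurtosis-like) deviation from a Gaussian
        """
        y = (v - V) / sigma
        gaussian = np.exp(-0.5 * y**2) / (sigma * np.sqrt(2 * np.pi))
        # Hermite polynomials in the normalization standard in stellar kinematics
        H3 = (2 * np.sqrt(2) * y**3 - 3 * np.sqrt(2) * y) / np.sqrt(6)
        H4 = (4 * y**4 - 12 * y**2 + 3) / np.sqrt(24)
        return gaussian * (1 + h3 * H3 + h4 * H4)

    # Illustrative values for a massive, slowly rotating galaxy center
    v_grid = np.linspace(-1200, 1200, 241)  # km/s
    profile = losvd_gauss_hermite(v_grid, V=50.0, sigma=280.0, h3=-0.05, h4=0.03)
    ```

    A nonzero h3 skews the profile, while a positive h4 makes it more peaked than a Gaussian; mapping these four quantities across each galaxy is what the GMOS and McDonald data provide.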

    As a proof of concept, the new study performs detailed dynamical modeling of the IFS data for NGC 1453, the galaxy in the sample with the fastest rotation rate. The team’s analysis reveals the amount of dark matter in this galaxy and shows how the shapes of the stars’ orbits change with radius. In addition, the team found an impressively large mass for the central black hole, more than three billion times the mass of our Sun. The MASSIVE Survey team is currently performing detailed modeling for all the rest of the galaxies in the sample. The results will provide further insight into the assembly histories of the largest galaxies in the local Universe and refine our understanding of the coevolution of galaxies and their central black holes up to the most extreme masses.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Education Coalition

    NOAO Gemini North on Mauna Kea, Hawaii, USA, Altitude 4,213 m (13,822 ft)


    Gemini South telescope, Cerro Tololo Inter-American Observatory (CTIO) campus near La Serena, Chile, at an altitude of 7200 feet


    Gemini’s mission is to advance our knowledge of the Universe by providing the international Gemini Community with forefront access to the entire sky.

    The Gemini Observatory is an international collaboration with two identical 8-meter telescopes. The Frederick C. Gillett Gemini Telescope is located on Mauna Kea, Hawai’i (Gemini North) and the other telescope on Cerro Pachón in central Chile (Gemini South); together the twin telescopes provide full coverage over both hemispheres of the sky. The telescopes incorporate technologies that allow large, relatively thin mirrors, under active control, to collect and focus both visible and infrared radiation from space.

    The Gemini Observatory provides the astronomical communities in six partner countries with state-of-the-art astronomical facilities that allocate observing time in proportion to each country’s contribution. In addition to financial support, each country also contributes significant scientific and technical resources. The national research agencies that form the Gemini partnership include: the US National Science Foundation (NSF), the Canadian National Research Council (NRC), the Chilean Comisión Nacional de Investigación Científica y Tecnológica (CONICYT), the Australian Research Council (ARC), the Argentinean Ministerio de Ciencia, Tecnología e Innovación Productiva, and the Brazilian Ministério da Ciência, Tecnologia e Inovação. The observatory is managed by the Association of Universities for Research in Astronomy, Inc. (AURA) under a cooperative agreement with the NSF. The NSF also serves as the executive agency for the international partnership.

     
  • richardmitnick 1:38 pm on August 22, 2019 Permalink | Reply
    Tags: "Quantum computing race needs ‘global effort’ says Provost", , The UK has a decades-long head start in quantum technologies.   

    From Imperial College London: “Quantum computing race needs ‘global effort’, says Provost” 

    From Imperial College London

    21 August 2019
    Andrew Scheuber

    NQIT https://nqit.ox.ac.uk/

    The race for a viable quantum computer – “the most exciting in science today” – needs enormous collaborations, Professor Ian Walmsley argues.

    Writing in today’s Financial Times, Imperial’s Provost notes that “The complexity of some of the hurdles are arguably more challenging than those that were solved at the Large Hadron Collider, the world’s most powerful atom smasher. Disparate networks of researchers, entrepreneurs, capital and governments will have to compete and collaborate all over the world.

    “Yet too much commentary, especially in the UK and Europe, fixates on where quantum innovation and commercialisation is happening.”

    This so-called “brain drain” argument is “nonsense”, he writes. “It misunderstands the global nature of science and innovation, and underplays the UK’s exceptional strengths in quantum technology.”

    Welcoming competition

    He argues that “We should welcome, not fear, competition, as well as being open to collaboration. From lunar exploration to cancer research, it’s how the best science and innovation comes to life.”

    Professor Walmsley, a quantum physicist, also serves as Director of the UK’s Networked Quantum Information Technologies Hub.

    He observes that “the UK has a decades-long head start in quantum technologies. Consistent support from research councils and university departments have spurred crucial breakthroughs. These leaps in fundamental science — all from British laboratories — are the foundation of today’s global industry. It is what has drawn pioneers in quantum metrology such as Ed Hinds back to the UK from the US.

    “The British government’s foresight in founding the National Quantum Technologies Programme six years ago accelerated research and development, and stimulated private investment. Total UK government investment has now reached £1bn.”

    The full opinion piece can be read in the Financial Times.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Education Coalition

    Imperial College London

    Imperial College London is a science-based university with an international reputation for excellence in teaching and research. Consistently rated amongst the world’s best universities, Imperial is committed to developing the next generation of researchers, scientists and academics through collaboration across disciplines. Located in the heart of London, Imperial is a multidisciplinary space for education, research, translation and commercialisation, harnessing science and innovation to tackle global challenges.

     
  • richardmitnick 1:00 pm on August 22, 2019 Permalink | Reply

    From Symmetry: “Holography class gives students new perspective” 

    From Symmetry

    [I must say, nothing in this article tells me why this is an important subject for Symmetry.]

    08/22/19
    Bailey Bedford

    A holography class at the Ohio State University combines art and physics to provide a more complete picture of how we understand the world around us.

    Art and science are often seen as incompatible lenses through which to view the world. Science provides one perspective, characterized by detachment and certainty, and art provides another, characterized by emotion and unpredictability, and never the twain shall meet.

    But sometimes you need more than one perspective to understand the whole picture. Harris Kagan, an Ohio State University physics professor and collaborator on the ATLAS experiment at the Large Hadron Collider at CERN, proves this in his classes about the art and science of holography.

    The word “holography” derives from two Greek words that together mean “entire picture.” A hologram is essentially a 3-D picture that is designed to provide a complete image including different perspectives and parallax—the way an object’s position appears to vary for different lines of sight.

    In physics terms, each part of a hologram records an interference pattern to recreate the light that was emitted or reflected from the subject of the image. This method allows the viewer to move around and see the object from different angles like they could if the object were on the opposite side of a window.
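
    As a rough numerical illustration of that recording step (not from the article, and with made-up beam parameters): the film records the intensity of the summed reference and object waves, and for two plane waves meeting at an angle the result is a fringe pattern whose spacing encodes their relative direction.

    ```python
    import numpy as np

    # Two idealized plane waves of equal amplitude meeting at angle theta.
    wavelength = 633e-9        # a common HeNe laser wavelength [m]
    theta = np.deg2rad(5.0)    # angle between reference and object beams
    k = 2 * np.pi / wavelength

    x = np.linspace(0, 50e-6, 1000)    # 50 micrometers across the film
    delta_phi = k * np.sin(theta) * x  # phase difference between the beams

    # Recorded intensity I = |E_ref + E_obj|^2 = 2 * (1 + cos(delta_phi))
    intensity = 2 * (1 + np.cos(delta_phi))

    print(f"fringe spacing: {wavelength / np.sin(theta) * 1e6:.2f} micrometers")
    ```

    Those microscopic fringes are what later diffract the readout beam to reconstruct the original light field.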

    “My philosophy is that art and science are really the same thing,” Kagan says. “The techniques you use to create a new idea in science are very, very similar [to the ones used in art]. To create a new idea in art, you’re using different tools, maybe different fundamentals, but the goals are the same; the honesty is the same.”

    Images courtesy of Harris Kagan

    Marrying art and science

    Kagan has been teaching holography classes since the mid-1980s. When OSU art professor Susan Dallas-Swan saw a hologram that he had produced for display using equipment from a laboratory class he taught, she arranged for Kagan to work with an art graduate student using the medium.

    The success with the graduate student led the pair of professors to set the blueprint for the classes. Some of Kagan’s classes have been in the physics department and some in the art department, with students from a variety of backgrounds mixed together in each. Kagan teaches beginner, advanced and honors undergraduate holography courses as well as a graduate course.

    Students in the class are not required to have any background in art or physics. The classes are meant to help students explore both subjects and how they intersect with math and visual perception. They include elements usually associated with science classes, such as unsupervised time in the lab working with lasers, and elements usually associated with art classes, such as artistic critiques of the students’ work. The students perform a series of projects culminating in an original piece for an art show.

    One point the critique process drove home was that the students’ art for the class should be concept-driven, says Shreyas Muralidharan, who participated as an undergraduate majoring in electrical and computer engineering and physics. By that, Kagan meant “that you need to really be able to clearly define what you want to achieve with this piece of art,” Muralidharan says. “From a physics and more scientific background, I haven’t really been exposed to [that idea].”

    Muralidharan, now a graduate student, says that Kagan would often challenge students to simplify the language in their explanations of their pieces and processes. Asking the students to explain concepts in simple terms ensured they actually understood them—a practice that he says remains useful in giving scientific presentations.

    Muralidharan says that idea encouraged him to think outside the box in his science classes as well. “A lot of the time, you can get stuck in the method of thinking in math,” he says. “We think of integrals, numbers, probability. And you kind of step back, and you realize that maybe you don’t have a good intuition for what’s actually happening.”

    Both art students and physics students benefited from the class, Muralidharan says. “I think talking to each other across that bridge helped solidify concepts.”

    Beyond the classroom

    Kagan estimates that between 2000 and 3000 students have gone through his classes. Those students have gone on to a wide variety of careers.

    “What comes with these lessons is a perspective with which to do art or to do science—a perspective with which you understand your role in the universe,” Kagan says.

    Jeff Hazelden, who took Kagan’s classes as a photography major, says Kagan’s classes introduced him to characteristics of light that are still useful in his career as a photographer and art teacher. He says he also uses parts of Kagan’s structured format for artistic critiques with his own students who are new to the critique process.

    Katherine Hanlon, another former photography major, now works as a medical imaging specialist. She helps identify skin diseases by taking specialized photos using lasers and 3-D modeling. Kagan’s class introduced her to important aspects of those techniques.

    “I look back and realize that a lot of what I ended up doing in my career and my skill level and knowledge level was influenced specifically by this class,” Hanlon says. “I think it was easily the most important class I ever took in any of my education.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.


     
  • richardmitnick 12:38 pm on August 22, 2019 Permalink | Reply

    From ALMA: “ALMA Shows What’s Inside Jupiter’s Storms” 

    ESO/NRAO/NAOJ ALMA Array in Chile in the Atacama at Chajnantor plateau, at 5,000 metres

    From ALMA

    22 August, 2019

    Nicolás Lira
    Education and Public Outreach Coordinator
    Joint ALMA Observatory, Santiago – Chile
    Phone: +56 2 2467 6519
    Cell phone: +56 9 9445 7726
    Email: nicolas.lira@alma.cl

    Iris Nijman
    Public Information Officer
    National Radio Astronomy Observatory Charlottesville, Virginia – USA
    Cell phone: +1 (434) 249 3423
    Email: alma-pr@nrao.edu

    Radio image of Jupiter made with ALMA. Bright bands indicate high temperatures and dark bands low temperatures. The dark bands correspond to the zones on Jupiter, which are often white at visible wavelengths. The bright bands correspond to the brown belts on the planet. This image contains over 10 hours of data, so fine details are smeared by the planet’s rotation. Credit: ALMA (ESO/NAOJ/NRAO), I. de Pater et al.; NRAO/AUI NSF, S. Dagnello

    Swirling clouds, big colorful belts, giant storms. The beautiful and incredibly turbulent atmosphere of Jupiter has been showcased many times. But what is going on below the clouds? What is causing the many storms and eruptions that we see on the ‘surface’ of the planet? To answer these questions, however, visible light is not enough. We need to study Jupiter using radio waves.

    New radio wave images made with the Atacama Large Millimeter/submillimeter Array (ALMA) provide a unique view of Jupiter’s atmosphere down to fifty kilometers below the planet’s visible (ammonia) cloud deck.

    “ALMA enabled us to make a three-dimensional map of the distribution of ammonia gas below the clouds. And for the first time, we were able to study the atmosphere below the ammonia cloud layers after an energetic eruption on Jupiter,” said Imke de Pater of the University of California, Berkeley (USA).

    The atmosphere of giant Jupiter is made out of mostly hydrogen and helium, together with trace gases of methane, ammonia, hydrogen sulfide, and water. The top-most cloud layer is made up of ammonia ice. Below that is a layer of solid ammonium hydrosulfide particles, and deeper still, around 80 kilometers below the upper cloud deck, there likely is a layer of liquid water. The upper clouds form the distinctive brown belts and white zones seen from Earth.

    Many of the storms on Jupiter take place inside those belts. They can be compared to thunderstorms on Earth and are often associated with lightning events. Storms reveal themselves in visible light as small bright clouds, referred to as plumes. These plume eruptions can cause a major disruption of the belt, which can be visible for months or years.

    The ALMA images were taken a few days after amateur astronomers observed an eruption in Jupiter’s South Equatorial Belt in January 2017. A small bright white plume was visible first, and then a large-scale disruption in the belt was observed that lasted for weeks after the eruption.

    De Pater and her colleagues used ALMA to study the atmosphere below the plume and the disrupted belt at radio wavelengths and compared these to UV-visible light and infrared images made with other telescopes at approximately the same time.

    “Our ALMA observations are the first to show that high concentrations of ammonia gas are brought up during an energetic eruption,” said de Pater. “The combination of observations simultaneously at many different wavelengths enabled us to examine the eruption in detail, which led us to confirm the current theory that energetic plumes are triggered by moist convection at the base of water clouds, which are located deep in the atmosphere. The plumes bring up ammonia gas from deep in the atmosphere to high altitudes, well above the main ammonia cloud deck,” she added.

    “These ALMA maps at millimeter wavelengths complement the maps made with the National Science Foundation’s Very Large Array in centimeter wavelengths,” said Bryan Butler of the National Radio Astronomy Observatory.

    NRAO Karl G. Jansky Very Large Array, on the Plains of San Agustin fifty miles west of Socorro, NM, USA, at an elevation of 6970 ft (2124 m)

    “Both maps probe below the cloud layers seen at optical wavelengths and show ammonia-rich gases rising into and forming the upper cloud layers (zones), and ammonia-poor air sinking down (belts).”

    “The present results show superbly what can be achieved in planetary science when an object is studied with various observatories and at various wavelengths,” explains Eric Villard, an ALMA astronomer who is part of the research team. “ALMA, with its unprecedented sensitivity and spectral resolution at radio wavelengths, worked together successfully with other major observatories around the world to provide the data to allow a better understanding of the atmosphere of Jupiter.”

    Flat map of Jupiter in radio waves with ALMA (top) and visible light with the Hubble Space Telescope (bottom). The eruption in the South Equatorial Belt is visible in both images. Credit: ALMA (ESO/NAOJ/NRAO), I. de Pater et al.; NRAO/AUI NSF, S. Dagnello; NASA/Hubble

    Science paper:
    First ALMA Millimeter Wavelength Maps of Jupiter, with a Multi-Wavelength Study of Convection
    https://arxiv.org/pdf/1907.11820.pdf

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Education Coalition

    The Atacama Large Millimeter/submillimeter Array (ALMA), an international astronomy facility, is a partnership of Europe, North America and East Asia in cooperation with the Republic of Chile. ALMA is funded in Europe by the European Organization for Astronomical Research in the Southern Hemisphere (ESO), in North America by the U.S. National Science Foundation (NSF) in cooperation with the National Research Council of Canada (NRC) and the National Science Council of Taiwan (NSC) and in East Asia by the National Institutes of Natural Sciences (NINS) of Japan in cooperation with the Academia Sinica (AS) in Taiwan.

    ALMA construction and operations are led on behalf of Europe by ESO, on behalf of North America by the National Radio Astronomy Observatory (NRAO), which is managed by Associated Universities, Inc. (AUI) and on behalf of East Asia by the National Astronomical Observatory of Japan (NAOJ). The Joint ALMA Observatory (JAO) provides the unified leadership and management of the construction, commissioning and operation of ALMA.


     
  • richardmitnick 12:04 pm on August 22, 2019 Permalink | Reply  

    From University College London: “Maximum mass of lightest neutrino revealed using astronomical big data” 

    From University College London

    22 August 2019

    The mass of the lightest neutrino, an abundant ‘ghost’ particle found throughout the universe, has been calculated to be at least six million times lighter than the mass of an electron in a new UCL-led study.


    Neutrinos come in three flavours made up of a mix of three neutrino masses. While the differences between the masses are known, little information was available about the mass of the lightest species until now.

    It’s important to better understand neutrinos and the processes through which they obtain their mass as they could reveal secrets about astrophysics, including how the universe is held together, why it is expanding and what dark matter is made of.

    First author, Dr Arthur Loureiro (UCL Physics & Astronomy), said: “A hundred billion neutrinos fly through your thumb from the Sun every second, even at night. These are very weakly interactive ghosts that we know little about. What we do know is that as they move, they can change between their three flavours, and this can only happen if at least two of their masses are non-zero.”

    “The three flavours can be compared to ice cream where you have one scoop containing strawberry, chocolate and vanilla. Three flavours are always present but in different ratios, and the changing ratio–and the weird behaviour of the particle–can only be explained by neutrinos having a mass.”

    The concept that neutrinos have mass is a relatively new one, with the 1998 discovery earning Professor Takaaki Kajita and Professor Arthur B. McDonald the 2015 Nobel Prize in Physics. Even so, the Standard Model used by modern physics has yet to be updated to assign neutrinos a mass.

    The study, published today in Physical Review Letters by researchers from UCL, Universidade Federal do Rio de Janeiro, Institut d’Astrophysique de Paris and Universidade de Sao Paulo, sets an upper limit for the mass of the lightest neutrino for the first time. The particle could technically have no mass as a lower limit is yet to be determined.

    The team used an innovative approach to calculate the mass of neutrinos by using data collected by both cosmologists and particle physicists. This included using data from 1.1 million galaxies from the Baryon Oscillation Spectroscopic Survey (BOSS) to measure the rate of expansion of the universe, and constraints from particle accelerator experiments.

    BOSS Supercluster Baryon Oscillation Spectroscopic Survey (BOSS)

    “We used information from a variety of sources including space- and ground-based telescopes observing the first light of the Universe (the cosmic microwave background radiation), exploding stars, the largest 3D map of galaxies in the Universe, particle accelerators, nuclear reactors, and more,” said Dr Loureiro.

    LBNL BOSS

    CMB per ESA/Planck

    Cosmic Background Radiation per Planck

    ESA/Planck 2009 to 2013

    “As neutrinos are abundant but tiny and elusive, we needed every piece of knowledge available to calculate their mass and our method could be applied to other big questions puzzling cosmologists and particle physicists alike.”

    The researchers used the information to prepare a framework in which to mathematically model the mass of neutrinos and used UCL’s supercomputer, Grace, to calculate the maximum possible mass of the lightest neutrino to be 0.086 eV (95% CI), which is equivalent to 1.5 × 10⁻³⁷ kg. They calculated that the three neutrino flavours together have an upper bound of 0.26 eV (95% CI).
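
    The kilogram figure follows from E = mc². A quick sanity check using standard constants (nothing here comes from the study itself):

    ```python
    # m = E / c^2 for the 0.086 eV upper bound
    eV_to_joule = 1.602176634e-19   # exact by SI definition
    c = 2.99792458e8                # speed of light [m/s]

    mass_kg = 0.086 * eV_to_joule / c**2
    print(f"{mass_kg:.2e} kg")      # ~1.53e-37 kg, consistent with the quoted value
    ```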

    Second author, PhD student Andrei Cuceu (UCL Physics & Astronomy), said: “We used more than half a million computing hours to process the data; this is equivalent to almost 60 years on a single processor. This project pushed the limits for big data analysis in cosmology.”

    The team say that understanding how neutrino mass can be estimated is important for future cosmological studies such as DESI and Euclid, which both involve teams from across UCL.

    LBNL/DESI spectroscopic instrument on the Mayall 4-meter telescope at Kitt Peak National Observatory starting in 2018

    NOAO/Mayall 4 m telescope at Kitt Peak, Arizona, USA, Altitude 2,120 m (6,960 ft)

    ESA/Euclid spacecraft

    The Dark Energy Spectroscopic Instrument (DESI) will study the large scale structure of the universe and its dark energy and dark matter contents to a high precision. Euclid is a new space telescope being developed with the European Space Agency to map the geometry of the dark Universe and evolution of cosmic structures.

    Professor Ofer Lahav (UCL Physics & Astronomy), co-author of the study and chair of the UK Consortiums of the Dark Energy Survey and DESI, said: “It is impressive that the clustering of galaxies on huge scales can tell us about the mass of the lightest neutrino, a result of fundamental importance to physics. This new study demonstrates that we are on the path to actually measuring the neutrino masses with the next generation of large spectroscopic galaxy surveys, such as DESI, Euclid and others.”

    The research was funded by National Council for Scientific and Technological Development (CNPq) Science without Borders (Brazil), the Royal Astronomical Society, the UK Science and Technology Facilities Council (STFC), the Royal Society and the European Research Council.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Education Coalition

    UCL campus

    UCL was founded in 1826 to open up higher education in England to those who had been excluded from it – becoming the first university in England to admit women students on equal terms with men in 1878.

    Academic excellence and research that addresses real-world problems inform our ethos to this day and are central to our 20-year strategy.

     
  • richardmitnick 11:36 am on August 22, 2019 Permalink | Reply
    Tags: "Artificial intelligence could help data centers run far more efficiently", Data centers can contain tens of thousands of servers which constantly run data-processing tasks from developers and users.,   

    From MIT News: “Artificial intelligence could help data centers run far more efficiently” 

    From MIT News

    August 21, 2019
    Rob Matheson

    MIT system “learns” how to optimally allocate workloads across thousands of servers to cut costs and save energy.

    A novel system developed by MIT researchers automatically “learns” how to schedule data-processing operations across thousands of servers — a task traditionally reserved for imprecise, human-designed algorithms. Doing so could help today’s power-hungry data centers run far more efficiently.

    Data centers can contain tens of thousands of servers, which constantly run data-processing tasks from developers and users. Cluster scheduling algorithms allocate the incoming tasks across the servers, in real-time, to efficiently utilize all available computing resources and get jobs done fast.

    Traditionally, however, humans fine-tune those scheduling algorithms, based on some basic guidelines (“policies”) and various tradeoffs. They may, for instance, code the algorithm to get certain jobs done quickly or split resources equally between jobs. But workloads — meaning groups of combined tasks — come in all sizes. Therefore, it’s virtually impossible for humans to optimize their scheduling algorithms for specific workloads and, as a result, they often fall short of their true efficiency potential.

    The MIT researchers instead offloaded all of the manual coding to machines. In a paper [ https://arxiv.org/pdf/1810.01963.pdf ] being presented at SIGCOMM, they describe a system that leverages “reinforcement learning” (RL), a trial-and-error machine-learning technique, to tailor scheduling decisions to specific workloads in specific server clusters.

    To do so, they built novel RL techniques that could train on complex workloads. In training, the system tries many possible ways to allocate incoming workloads across the servers, eventually finding an optimal tradeoff in utilizing computation resources and quick processing speeds. No human intervention is required beyond a simple instruction, such as, “minimize job-completion times.”

    Compared to the best handwritten scheduling algorithms, the researchers’ system completes jobs about 20 to 30 percent faster, and twice as fast during high-traffic times. Mostly, however, the system learns how to compact workloads efficiently to leave little waste. Results indicate the system could enable data centers to handle the same workload at higher speeds, using fewer resources.

    “If you have a way of doing trial and error using machines, they can try different ways of scheduling jobs and automatically figure out which strategy is better than others,” says Hongzi Mao, a PhD student in the Department of Electrical Engineering and Computer Science (EECS). “That can improve the system performance automatically. And any slight improvement in utilization, even 1 percent, can save millions of dollars and a lot of energy in data centers.”

    “There’s no one-size-fits-all to making scheduling decisions,” adds co-author Mohammad Alizadeh, an EECS professor and researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “In existing systems, these are hard-coded parameters that you have to decide up front. Our system instead learns to tune its schedule policy characteristics, depending on the data center and workload.”

    Joining Mao and Alizadeh on the paper are: postdocs Malte Schwarzkopf and Shaileshh Bojja Venkatakrishnan, and graduate research assistant Zili Meng, all of CSAIL.

    RL for scheduling

    Typically, data processing jobs come into data centers represented as graphs of “nodes” and “edges.” Each node represents some computation task that needs to be done, where the larger the node, the more computation power needed. The edges connecting the nodes link connected tasks together. Scheduling algorithms assign nodes to servers, based on various policies.

    But traditional RL systems are not accustomed to processing such dynamic graphs. These systems use a software “agent” that makes decisions and receives a feedback signal as a reward. Essentially, it tries to maximize its rewards for any given action to learn an ideal behavior in a certain context. They can, for instance, help robots learn to perform a task like picking up an object by interacting with the environment, but that involves processing video or images through an easier, fixed grid of pixels.

    To build their RL-based scheduler, called Decima, the researchers had to develop a model that could process graph-structured jobs, and scale to a large number of jobs and servers. Their system’s “agent” is a scheduling algorithm that leverages a graph neural network, commonly used to process graph-structured data. To come up with a graph neural network suitable for scheduling, they implemented a custom component that aggregates information across paths in the graph — such as quickly estimating how much computation is needed to complete a given part of the graph. That’s important for job scheduling, because “child” (lower) nodes cannot begin executing until their “parent” (upper) nodes finish, so anticipating future work along different paths in the graph is central to making good scheduling decisions.
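
    As a toy illustration of that path aggregation (a sketch of the general idea, not Decima's actual component), the snippet below walks a small job DAG and computes, for each node, its own cost plus the cost of everything downstream that cannot start until it finishes:

    ```python
    # Toy job DAG: each node is a computation stage; edges point to children
    # that must wait for their parents to finish. Costs are arbitrary units.
    children = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
    cost = {"a": 4.0, "b": 2.0, "c": 3.0, "d": 1.0}

    def remaining_work(node):
        """Cost of `node` plus every node reachable below it (counted once)."""
        seen, stack = set(), list(children[node])
        while stack:
            n = stack.pop()
            if n not in seen:
                seen.add(n)
                stack.extend(children[n])
        return cost[node] + sum(cost[n] for n in seen)

    print({n: remaining_work(n) for n in cost})
    # {'a': 10.0, 'b': 3.0, 'c': 4.0, 'd': 1.0} -- 'a' carries the whole job
    ```

    Decima learns this kind of summary with a graph neural network rather than a hand-written traversal, but the quantity being estimated is similar in spirit.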

    To train their RL system, the researchers simulated many different graph sequences that mimic workloads coming into data centers. The agent then makes decisions about how to allocate each node along the graph to each server. For each decision, a component computes a reward based on how well it did at a specific task — such as minimizing the average time it took to process a single job. The agent keeps going, improving its decisions, until it gets the highest reward possible.

    Baselining workloads

    One concern, however, is that some workload sequences are more difficult than others to process, because they have larger tasks or more complicated structures. Those will always take longer to process — and, therefore, the reward signal will always be lower — than simpler ones. But that doesn’t necessarily mean the system performed poorly: It could make good time on a challenging workload but still be slower than an easier workload. That variability in difficulty makes it challenging for the model to decide what actions are good or not.

    To address that, the researchers adapted a technique called “baselining” in this context. This technique takes averages of scenarios with a large number of variables and uses those averages as a baseline to compare future results. During training, they computed a baseline for every input sequence. Then, they let the scheduler train on each workload sequence multiple times. Next, the system took the average performance across all of the decisions made for the same input workload. That average is the baseline against which the model could then compare its future decisions to determine if its decisions are good or bad. They refer to this new technique as “input-dependent baselining.”
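
    A stripped-down sketch of input-dependent baselining (illustrative only; the function names and reward model are invented here, not the authors' code): score each training rollout against the average reward of rollouts on the same input workload, so that intrinsically hard workloads are not mistaken for bad decisions.

    ```python
    import random

    def rollout_reward(workload_id):
        """Stand-in for one training rollout of the scheduler on a workload.
        In the real system the reward is tied to job-completion time."""
        difficulty = workload_id % 5               # harder input -> lower reward
        return -float(difficulty) + random.gauss(0.0, 0.1)

    def input_dependent_advantages(workload_id, n_rollouts=8):
        """Advantage of each rollout relative to the average of rollouts
        on the SAME input sequence (the input-dependent baseline)."""
        rewards = [rollout_reward(workload_id) for _ in range(n_rollouts)]
        baseline = sum(rewards) / len(rewards)
        return [r - baseline for r in rewards]

    # A hard workload (id=4) now yields advantages centered on zero, just like
    # an easy one (id=0): only better-or-worse-than-usual decisions stand out.
    print(input_dependent_advantages(4))
    print(input_dependent_advantages(0))
    ```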

    That innovation, the researchers say, is applicable to many different computer systems. “This is a general way to do reinforcement learning in environments where there’s this input process that affects the environment, and you want every training event to consider one sample of that input process,” he says. “Almost all computer systems deal with environments where things are constantly changing.”

    Aditya Akella, a professor of computer science at the University of Wisconsin at Madison, whose group has designed several high-performance schedulers, found the MIT system could help further improve their own policies. “Decima can go a step further and find opportunities for [scheduling] optimization that are simply too onerous to realize via manual design/tuning processes,” Akella says. “The schedulers we designed achieved significant improvements over techniques used in production in terms of application performance and cluster efficiency, but there was still a gap with the ideal improvements we could possibly achieve. Decima shows that an RL-based approach can discover [policies] that help bridge the gap further. Decima improved on our techniques by [roughly] 30 percent, which came as a huge surprise.”

    Right now, their model is trained on simulations that try to recreate incoming online traffic in real-time. Next, the researchers hope to train the model on real-time traffic, which could potentially crash the servers. So, they’re currently developing a “safety net” that will stop their system when it’s about to cause a crash. “We think of it as training wheels,” Alizadeh says. “We want this system to continuously train, but it has certain training wheels that if it goes too far we can ensure it doesn’t fall over.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

     
  • richardmitnick 10:47 am on August 22, 2019 Permalink | Reply
    Tags: A triplet of Earth-sized planet candidates orbiting a star just 12 light-years away a new study has found., M dwarf star GJ 1061

    From European Southern Observatory via Discover: “Three New Exoplanets Have Been Discovered Around a Nearby Star” 


    From European Southern Observatory

    via

    Discover Magazine

    August 21, 2019
    Mara Johnson-Groh

    There is a triplet of Earth-sized planet candidates orbiting a star just 12 light-years away, a new study has found. And one appears to be in the habitable zone.

    All three candidates are thought to have minimum masses of 1.4 to 1.8 times the mass of Earth (the detection method constrains only a lower limit on mass), and orbit the star every three to 13 days, which would put the entire system well within Mercury’s 88-day orbit of the Sun. The planet orbiting the star every 13 days, dubbed planet d, is most interesting to scientists — it falls within the star’s habitable zone, where liquid water could exist on the surface.

    Exploring Our Neighborhood

    “We are now one step closer [to] getting a census of rocky planets in the solar neighborhood,” said Ignasi Ribas, co-author on the new paper [MNRAS] and researcher at the Institute of Space Sciences in Barcelona, Spain.

    The planets’ host is GJ 1061, a type of low-mass star called an M dwarf that is the 20th nearest star to the Sun. The star is similar to Proxima Centauri, the star closest to Earth, which was discovered to host a planet in 2016. GJ 1061, however, shows less violent stellar activity, suggesting that it might currently provide a safer environment for life than Proxima Centauri.

    But to assess habitability, a star’s whole history needs to be accounted for and M dwarf stars could have had stronger activity levels in the past and also have much longer lifetimes than Sun-like stars. This means that a close-orbit planet, like planet d, may have spent many millions of years being blasted by intense radiation from its star, so it may not retain a life-sustaining atmosphere.

    The new planets were discovered with the radial velocity method — a technique that uses tiny wobbles in a star’s motion to reveal the gravitational presence of exoplanets.

    Radial Velocity Method-Las Cumbres Observatory

    Radial velocity image via SuperWASP http://www.superwasp.org/exoplanets.htm

    This technique typically reveals giant exoplanets close to their host star, but increasingly, this method is being used in long-term campaigns to reveal smaller exoplanets.
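
    For a sense of the signal involved, the sketch below evaluates the standard radial-velocity semi-amplitude formula for a planet loosely resembling these candidates. The stellar and planetary parameters are assumptions for illustration (GJ 1061 is a roughly 0.1 solar-mass star), and a circular, edge-on orbit is assumed; none of these numbers come from the paper.

    ```python
    import math

    G = 6.674e-11          # gravitational constant [m^3 kg^-1 s^-2]
    M_SUN = 1.989e30       # solar mass [kg]
    M_EARTH = 5.972e24     # Earth mass [kg]

    def rv_semi_amplitude(period_days, m_planet, m_star, sin_i=1.0, ecc=0.0):
        """Stellar radial-velocity semi-amplitude K [m/s] for a Keplerian orbit."""
        period_s = period_days * 86400.0
        return ((2 * math.pi * G / period_s) ** (1 / 3) * m_planet * sin_i
                / ((m_star + m_planet) ** (2 / 3) * math.sqrt(1 - ecc**2)))

    # Assumed: a 1.4 Earth-mass planet on a 3-day orbit around a 0.12 M_sun dwarf
    K = rv_semi_amplitude(3.0, 1.4 * M_EARTH, 0.12 * M_SUN)
    print(f"K ~ {K:.1f} m/s")   # a few m/s: why meter-per-second precision is needed
    ```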

    Using the HARPS instrument on the 3.6-meter telescope at the European Southern Observatory in La Silla, Chile [below], astronomers observed the star over 54 nights from July to September in 2018.

    ESO/HARPS at La Silla


    ESO 3.6m telescope & HARPS at Cerro La Silla, Chile, 600 km north of Santiago de Chile at an altitude of 2400 metres.

    The star was one target of a larger campaign called the Red Dot project, which since 2017 has surveyed small nearby stars to look for terrestrial planets like Earth.

    ESO Pale Red Dot project

    The data showed the signatures of three, and possibly four, candidate planets. The scientists suspect the fourth signal is just stellar activity — not a real planet. But after calculating the remaining three planets’ orbits, the scientists could not rule out an additional, unseen fourth planet. This undiscovered planet would have a much longer orbit, so further observations would be needed to determine if there really is a fourth planet farther out.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Education Coalition

    Visit ESO in Social Media-

    Facebook

    Twitter

    YouTube

    ESO Bloc Icon

    ESO is the foremost intergovernmental astronomy organisation in Europe and the world’s most productive ground-based astronomical observatory by far. It is supported by 16 countries: Austria, Belgium, Brazil, the Czech Republic, Denmark, France, Finland, Germany, Italy, the Netherlands, Poland, Portugal, Spain, Sweden, Switzerland and the United Kingdom, along with the host state of Chile. ESO carries out an ambitious programme focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope, the world’s most advanced visible-light astronomical observatory, and two survey telescopes. VISTA works in the infrared and is the world’s largest survey telescope and the VLT Survey Telescope is the largest telescope designed to exclusively survey the skies in visible light. ESO is a major partner in ALMA, the largest astronomical project in existence. And on Cerro Armazones, close to Paranal, ESO is building the 39-metre European Extremely Large Telescope, the E-ELT, which will become “the world’s biggest eye on the sky”.

    ESO/Cerro La Silla, 600 km north of Santiago de Chile at an altitude of 2400 metres.

    ESO VLT at Cerro Paranal in the Atacama Desert

    ESO VLT 4 lasers on Yepun

    Glistening against the awesome backdrop of the night sky above ESO’s Paranal Observatory, four laser beams project out into the darkness from Unit Telescope 4 (UT4) of the VLT.

    ESO/NRAO/NAOJ ALMA Array in Chile in the Atacama at Chajnantor plateau, at 5,000 metres

    ESO/E-ELT,to be on top of Cerro Armazones in the Atacama Desert of northern Chile. located at the summit of the mountain at an altitude of 3,060 metres (10,040 ft).

    A novel gamma ray telescope under construction on Mount Hopkins, Arizona, part of a large project known as the Cherenkov Telescope Array, composed of hundreds of similar telescopes to be situated in the Canary Islands and Chile. The telescope on Mount Hopkins will be fitted with a prototype high-speed camera, assembled at the University of Wisconsin–Madison, and capable of taking pictures at a billion frames per second. Credit: Vladimir Vassiliev

     
  • richardmitnick 10:09 am on August 22, 2019 Permalink | Reply
    Tags: "A super-secure quantum internet just took another step closer to reality", , Quantum … what?,   

    From MIT Technology Review: “A super-secure quantum internet just took another step closer to reality” 

    From MIT Technology Review

    Scientists have managed to send a record-breaking amount of data in quantum form, using a strange unit of quantum information called a qutrit.

    The news: Quantum tech promises to allow data to be sent securely over long distances. Scientists have already shown it’s possible to transmit information both on land and via satellites using quantum bits, or qubits. Now physicists at the University of Science and Technology of China and the University of Vienna in Austria have found a way to ship even more data using something called quantum trits, or qutrits.

    Qutrits? Oh, come on, you’ve just made that up: Nope, they’re real. Conventional bits used to encode everything from financial records to YouTube videos are streams of electrical or photonic pulses that can represent either a 1 or a 0. Qubits, which are typically electrons or photons, can carry more information because they can be polarized in two directions at once, so they can represent both a 1 and a 0 at the same time. Qutrits, which can be polarized in three different dimensions simultaneously, can carry even more information. In theory, this can then be transmitted using quantum teleportation.
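
    The capacity gain is easy to quantify: a measurement of a d-level system yields log2(d) bits, so a qutrit carries log2(3) ≈ 1.58 bits versus a qubit's 1. A small illustrative snippet (not tied to the experiment):

    ```python
    import math
    import numpy as np

    for d, name in [(2, "qubit"), (3, "qutrit")]:
        print(f"{name}: log2({d}) = {math.log2(d):.3f} bits per measurement")

    # A qutrit state is a normalized vector of three complex amplitudes.
    # This equal superposition is just an example state.
    qutrit = np.ones(3, dtype=complex) / np.sqrt(3)
    assert abs(np.vdot(qutrit, qutrit) - 1.0) < 1e-12  # normalization check
    ```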

    Quantum … what? Quantum teleportation is a method for shipping data that relies on an almost-mystical phenomenon called entanglement. Entangled quantum particles can influence one another’s state, even if they are continents apart. In teleportation, a sender and receiver each receive one of a pair of entangled qubits. The sender measures the interaction of their qubit with another one that holds data they want to send. By applying the results of this measurement to the other entangled qubit, the receiver can work out what information has been transmitted. (For a more detailed look at quantum teleportation, see our explainer here.)
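
    The protocol is easiest to see in the simpler qubit case. The following self-contained simulation runs textbook qubit teleportation and checks that, for every possible measurement outcome, the receiver's correction recovers the original state (this is the generic protocol, not the qutrit scheme reported in the paper):

    ```python
    import numpy as np

    I2 = np.eye(2)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    X = np.array([[0, 1], [1, 0]])
    Z = np.array([[1, 0], [0, -1]])
    CNOT = np.array([[1, 0, 0, 0],   # control = first qubit,
                     [0, 1, 0, 0],   # target = second qubit
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])

    psi = np.array([0.6, 0.8j])                 # arbitrary normalized state to send
    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)  # shared pair (|00> + |11>)/sqrt(2)

    # Qubit order: [data, sender's half of the pair, receiver's half]
    state = np.kron(psi, bell)
    state = np.kron(CNOT, I2) @ state           # sender: CNOT(data -> own half)
    state = np.kron(H, np.eye(4)) @ state       # sender: H on the data qubit

    for m0 in (0, 1):                           # all outcomes of measuring the
        for m1 in (0, 1):                       # sender's two qubits
            phi = state[4 * m0 + 2 * m1 : 4 * m0 + 2 * m1 + 2].copy()
            phi /= np.linalg.norm(phi)          # receiver's qubit, post-measurement
            # Classical message (m0, m1) tells the receiver which fix to apply:
            fix = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1)
            assert np.allclose(fix @ phi, psi), (m0, m1)

    print("state recovered for all four measurement outcomes")
    ```

    The qutrit version replaces the Bell pair with a two-qutrit entangled state and the X/Z fixes with their three-level generalizations, which is where the extra measurement information mentioned below comes in.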

    Measuring progress: Getting this to work with qubits isn’t easy—and harnessing qutrits is even harder because of that extra dimension. But the researchers, who include Jian-Wei Pan, a Chinese pioneer of quantum communication, say they have cracked the problem by tweaking the first part of the teleportation process so that senders have more measurement information to pass on to receivers. This will make it easier for the latter to work out what data has been teleported over. The research was published in the journal Physical Review Letters.

    Deterring hackers: This might seem rather esoteric, but it has huge implications for cybersecurity. Hackers can snoop on conventional bits flowing across the internet without leaving a trace. But interfering with quantum units of information causes them to lose their delicate quantum state, leaving a telltale sign of hacking. If qutrits can be harnessed at scale, they could form the backbone of an ultra-secure quantum internet that could be used to send highly sensitive government and commercial data.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     
  • richardmitnick 9:52 am on August 22, 2019 Permalink | Reply
    Tags: "Don’t ban new technologies – experiment with them carefully", Autonomous cars, Environmental pollution, Lime scooter company, San Francisco’s ban on municipal use of facial recognition technologies,   

    From The Conversation: “Don’t ban new technologies – experiment with them carefully” 

    From The Conversation

    August 22, 2019
    Ryan Muldoon, SUNY Buffalo

    For many years, Facebook’s internal slogan was “move fast and break things.” And that’s what the company did – along with most other Silicon Valley startups and the venture capitalists who fund them. Their general attitude is one of asking for forgiveness after the fact, rather than for permission in advance. Though this can allow for some bad behavior, it’s probably the right attitude, philosophically speaking.

    It’s true that the try-first mindset has frustrated the public. Take the Lime scooter company, for instance.

    Lime scooter company

    The company launched its scooter sharing service in multiple cities without asking permission from local governments. Its electric scooters don’t need base stations or parking docks, so the company and its customers can leave them anywhere for the next person to pick up – even if that’s in the middle of a sidewalk. This general disruption has led to calls to ban the scooters in cities around the country.

    Scooters are not alone. Ridesharing services, autonomous cars, artificial intelligence systems and Amazon’s cashless stores have also all been targets of bans (or proposed bans) in different states and municipalities before they’ve even gotten off the ground.

    Autonomous cars. The Conversation

    What these efforts have in common is what philosophers like me call the “precautionary principle,” the idea that new technologies, behaviors or policies should be banned until their supporters can demonstrate that they will not result in any significant harms. It’s the same basic idea Hippocrates had in ancient Greece: Doctors should “do no harm” to patients.

    The precautionary principle entered the political conversation in the 1980s in the context of environmental protection. Damage to the environment is hard – if not impossible – to reverse, so it’s prudent to seek to prevent harm from happening in the first place. But as I see it, that’s not the right way to look at most new technologies. New technologies and services aren’t creating irreversible damage, even though they do generate some harms.

    Environmental pollution is so harmful and hard to clean up that precautions are useful. imrankadir/Shutterstock.com

    Precaution has its place

    As a general concept, the precautionary principle is essentially conservative. It privileges existing technologies, even if new ones – the ones that face preemptive bans – are safer overall.

    This approach also runs counter to the most basic idea of liberalism, in which people are broadly allowed to do what they want, unless there’s a rule against it. This is limited only when our right to free action interferes with someone else’s rights. The precautionary principle reverses this, banning people from doing what they want, unless it is specifically allowed.

    The precautionary principle makes sense when people are talking about some issues, like the environment or public health. It’s easier to avoid the problems of air pollution or dumping trash in the ocean than trying to clean up afterward. Similarly, giving children drinking water that’s contaminated with lead has effects that aren’t reversible. The children simply must deal with the health effects of their exposure for the rest of their lives.

    But as much of a nuisance as dockless scooters might be, they aren’t the same as poisoned water.

    Managing the effects

    Of course, dockless scooters, autonomous cars and a whole host of new technologies do generate real harms. A Consumer Reports investigation in early 2019 found more than 1,500 injuries from electric scooters since the dockless companies were founded. That’s in addition to the more common nuisance of having to step over scooters carelessly left in the middle of the sidewalk – and the difficulties people using wheelchairs, crutches, strollers or walkers may have in getting around them.

    Those harms are not nothing, and can help motivate arguments for banning scooters. After all, they can’t hurt anyone if they’re not allowed. What’s missing from those figures, however, is how many of those people riding scooters would have gotten into a car instead. Cars are far more dangerous and far worse for the environment.

    Yet the precautionary principle isn’t right for cars, either. As the number of autonomous cars on the road climbs, they’ll be involved in an increasing number of crashes, which will no doubt get lots of media attention.

    It is worth keeping in mind that autonomous cars will have been a wild technological success even if they are involved in millions of crashes every year, so long as they improve on 2017’s figures of 6.5 million crashes and 1.9 million people seriously injured in car crashes.


    A look at the precautionary principle in environmental regulation.

    Disruption brings benefits too

    It may also be helpful to remember that dockless scooters and ridesharing apps and any other technology that displaces existing methods can really only become a nuisance if a lot of people use them – that is, if many people find them valuable. Injuries from scooters, and the number of scooters left lying around, have increased because the number of people using them has skyrocketed. Those 1,500 reported injuries are from 38.5 million rides.
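
    For scale, the cited figures work out to a small per-ride risk (simple arithmetic on the numbers above, nothing more):

    ```python
    injuries = 1_500
    rides = 38_500_000

    print(f"~{injuries / rides * 1e6:.0f} reported injuries per million rides")
    print(f"about 1 injury per {rides // injuries:,} rides")
    ```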

    This is not, of course, to say that these technologies and the firms that produce them should go unregulated. Indeed, a number of these firms have behaved quite poorly, and have legitimately created some harms, which should be regulated.

    But instead of preemptively banning things, I suggest continuing to rely on the standard approach in the liberal tradition: See what kinds of harms arise, handle the early cases via the court system, and then consider whether a pattern of harms emerges that would be better handled upfront by a new or revised regulation. The Consumer Product Safety Commission, which looks out for dangerous consumer goods and holds manufacturers to account, is an example of this.

    Indeed, laws and regulations already cover littering, abandoned vehicles, negligence and assault. New technologies may just introduce new ways of generating the same old harms, ones that are already reasonably well regulated. Genuinely new situations can of course arise: San Francisco’s ban on municipal use of facial recognition technologies may well be sensible, as people quite reasonably can democratically decide that the state shouldn’t be able to track their every move. People might well decide that companies shouldn’t be able to either.

    Silicon Valley’s CEOs aren’t always sympathetic characters. And “disruption” really can be disruptive. But liberalism is about innovation and experimentation and finding new solutions to humanity’s problems. Banning new technologies – even ones as trivial as dockless scooters – embodies a conservatism that denies that premise. A lot of new ideas aren’t great. A handful are really useful. It’s hard to tell which is which until we try them out a bit.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Education Coalition

    The Conversation launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
  • richardmitnick 8:51 am on August 22, 2019 Permalink | Reply
    Tags: , , , ,   

    Woods Hole Oceanographic Institution via COSMOS: “Geology creates chemical energy”

    From Woods Hole Oceanographic Institution

    22 August 2019

    Origin of a massive methane reservoir discovered.

    The manipulator arm of the remotely operated vehicle Jason samples a stream of fluid from a hydrothermal vent.
    Chris German/WHOI/NSF, NASA/ROV Jason 2012 / Woods Hole Oceanographic Institution

    Scientists know methane is released from deep-sea vents, but its source has long been a mystery.

    Now a team from Woods Hole Oceanographic Institution, US, may have the answer. Analysis of 160 rock samples from across the world’s oceans provides evidence, they say, of the formation and abundance of abiotic methane – methane formed by chemical reactions that don’t involve organic matter.

    Nearly every sample contained an assemblage of minerals and gases that form when seawater, moving through the deep oceanic crust, is trapped in magma-hot olivine, a rock-forming mineral, the researchers write in a paper published in Proceedings of the National Academy of Sciences.

    As the mineral cools, the water trapped inside undergoes a chemical reaction, a process called serpentinisation, which forms hydrogen and methane.
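
    For reference, the textbook chemistry usually written for this process looks roughly like the reactions below (simplified and illustrative; the study characterizes natural samples rather than these idealized equations). The iron-bearing component of olivine oxidizes, reducing water to hydrogen, and that hydrogen can in turn reduce dissolved carbon dioxide to methane:

    ```latex
    % Fayalite (the iron end-member of olivine) + water -> magnetite + silica + hydrogen
    3\,\mathrm{Fe_2SiO_4} + 2\,\mathrm{H_2O} \;\rightarrow\; 2\,\mathrm{Fe_3O_4} + 3\,\mathrm{SiO_2} + 2\,\mathrm{H_2}

    % Hydrogen + carbon dioxide -> abiotic methane (Sabatier-type reaction)
    \mathrm{CO_2} + 4\,\mathrm{H_2} \;\rightarrow\; \mathrm{CH_4} + 2\,\mathrm{H_2O}
    ```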

    “Here’s a source of chemical energy that’s being created by geology,” says co-author Jeffrey Seewald.

    On Earth, deep-sea methane might have played a critical role for the evolution of primitive organisms living at hydrothermal vents on the seafloor, Seewald adds. And elsewhere in the solar system, methane produced through the same process could provide an energy source for basic life forms.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Education Coalition

    Woods Hole Oceanographic Institution

    Vision & Mission

    The ocean is a defining feature of our planet and crucial to life on Earth, yet it remains one of the planet’s last unexplored frontiers. For this reason, WHOI scientists and engineers are committed to understanding all facets of the ocean as well as its complex connections with Earth’s atmosphere, land, ice, seafloor, and life—including humanity. This is essential not only to advance knowledge about our planet, but also to ensure society’s long-term welfare and to help guide human stewardship of the environment. WHOI researchers are also dedicated to training future generations of ocean science leaders, to providing unbiased information that informs public policy and decision-making, and to expanding public awareness about the importance of the global ocean and its resources.
    Mission Statement

    The Woods Hole Oceanographic Institution is dedicated to advancing knowledge of the ocean and its connection with the Earth system through a sustained commitment to excellence in science, engineering, and education, and to the application of this knowledge to problems facing society.

     