Updates from richardmitnick

  • richardmitnick 2:13 pm on July 24, 2017 Permalink | Reply
Tags: “Our geological record from a cave illustrates that we still cannot predict when the next earthquake will happen.”, Looking for tsunami records in a sea cave

    From Rutgers: “Sea Cave Preserves 5,000-Year Snapshot of Tsunamis” 

Rutgers University

    July 19, 2017
    Ken Branson

    Record tells us we don’t know much about predicting earthquakes that cause tsunamis.

    An international team of scientists digging in a sea cave in Indonesia has discovered the world’s most pristine record of tsunamis, a 5,000-year-old sedimentary snapshot that reveals for the first time how little is known about when earthquakes trigger massive waves.

“The devastating 2004 Indian Ocean tsunami caught millions of coastal residents and the scientific community off-guard,” says co-author Benjamin Horton, a professor in the Department of Marine and Coastal Sciences at Rutgers University-New Brunswick. “Our geological record from a cave illustrates that we still cannot predict when the next earthquake will happen.”

“Tsunamis are not evenly spaced through time,” says Charles Rubin, the study’s lead author and a professor at the Earth Observatory of Singapore, part of Nanyang Technological University. “There can be long periods between tsunamis, but you can also get major tsunamis that are separated by just a few decades.”

    The discovery, reported in the current issue of Nature Communications, logs a number of firsts: the first record of ancient tsunami activity found in a sea cave; the first record for such a long time period in the Indian Ocean; and the most pristine record of tsunamis anywhere in the world.

The stratigraphy of the sea cave in Sumatra excavated by scientists from the Earth Observatory of Singapore, Rutgers and other institutions. The lighter bands are sand deposited by tsunamis over a period of 5,000 years; the darker bands are organic material. Photo: Earth Observatory of Singapore.

The discovery was made in a sea cave on the west coast of Sumatra in Indonesia, just south of the city of Banda Aceh, which was devastated by the tsunami of December 2004. The stratigraphic record reveals successive layers of sand, bat droppings and other debris laid down by tsunamis between 7,900 and 2,900 years ago. The record of the most recent 2,900 years was washed away by the 2004 tsunami.

The L-shaped cave had a rim of rocks at the entrance that trapped successive layers of sand inside. The researchers dug six trenches and analyzed the alternating layers of sand and debris using radiocarbon dating. The researchers define “pristine” as stratigraphic layers that are distinct and easy to read. “You have a layer of sand and a layer of organic material that includes bat droppings, so simply it is a layer of sand and a layer of bat crap, and so on, going back for 5,000 years,” Horton says.
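For readers curious about the dating step: a radiocarbon age follows from how much of a sample’s carbon-14 has decayed. Here is a minimal Python sketch of the conventional-age formula; the measured fraction below is a hypothetical value, not one from the study.

```python
import math

LIBBY_MEAN_LIFE = 8033  # years; conventional ages use the Libby half-life (5568 yr)

def radiocarbon_age(f14c):
    """Conventional radiocarbon age (years before present) from the measured
    carbon-14 fraction relative to the modern standard."""
    return -LIBBY_MEAN_LIFE * math.log(f14c)

# Hypothetical sample retaining 54% of the modern carbon-14 level
print(round(radiocarbon_age(0.54)))  # ~4950 years BP
```

Dating charcoal and shells from each sand layer this way is what lets the team place the 11 tsunami deposits in time.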

    The record indicates that 11 tsunamis were generated during that period by earthquakes along the Sunda Megathrust, the 3,300-mile-long fault running from Myanmar to Sumatra in the Indian Ocean. The researchers found there were two tsunami-free millennia during the 5,000 years, and one century in which four tsunamis struck the coast. In general, the scientists report, smaller tsunamis occur relatively close together, followed by long dormant periods, followed by great quakes and tsunamis, such as the one that struck in 2004.

Using fluorescent lights, Kerry Sieh and Charles Rubin of the Earth Observatory of Singapore look for charcoal and shells for radiocarbon dating. Photo: Earth Observatory of Singapore.

Rubin, Horton and their colleagues were studying the seismic history of the Sunda Megathrust, which was responsible for the 2004 earthquake that triggered the disastrous tsunami. They were looking for places to take core samples that would give them a good stratigraphy. This involves looking for what Horton calls “depositional places” – coastal plains, coastal lake bottoms, any place to plunge a hollow metal cylinder six or seven meters down and produce a readable sample. But for various reasons, no site along the southwest coast of Sumatra would do the job. Then Patrick Daly, an archaeologist at EOS who had been working on a dig in the coastal cave, told Rubin and Horton about it and suggested it might be the place they were looking for.

    Looking for tsunami records in a sea cave was not something that would have occurred to Horton, and he says Daly’s professional generosity – archaeologists are careful about who gets near their digs – and his own and Rubin’s openness to insights from other disciplines made the research possible. Horton says this paper may be the most important in his career for another reason.

    “A lot of (the research) I’ve done is incremental,” he says. “I have a hypothesis, and I do deductive science to test the hypothesis. But this is really original, and original stuff doesn’t happen all that often.”

See the full article here.

Follow Rutgers Research here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    rutgers-campus

    Rutgers, The State University of New Jersey, is a leading national research university and the state’s preeminent, comprehensive public institution of higher education. Rutgers is dedicated to teaching that meets the highest standards of excellence; to conducting research that breaks new ground; and to providing services, solutions, and clinical care that help individuals and the local, national, and global communities where they live.

    Founded in 1766, Rutgers teaches across the full educational spectrum: preschool to precollege; undergraduate to graduate; postdoctoral fellowships to residencies; and continuing education for professional and personal advancement.

    Rutgers smaller
    Please give us back our original beautiful seal which the University stole away from us.
As a ’67 graduate of University College, second in my class, I am proud to be a member of Alpha Sigma Lambda, the National Honor Society for non-traditional students.

     
  • richardmitnick 1:54 pm on July 24, 2017 Permalink | Reply
Tags: Sgr A*, The Story of a Boring Encounter with a Black Hole

    From AAS NOVA: “The Story of a Boring Encounter with a Black Hole” 

    AASNOVA

    American Astronomical Society

    24 July 2017
    Susanna Kohler

Many simulations from before G2’s encounter with Sgr A* (like the one shown here, from a group in Europe) predicted an exciting show! So why was the approach so uneventful? [ESO/S. Gillessen/MPE/Marc Schartmann.]

    Remember the excitement three years ago before the gas cloud G2’s encounter with the supermassive black hole at the center of our galaxy, Sgr A*?

Sgr A*, as imaged by NASA’s Chandra X-ray Observatory

    Did you notice that not much was said about it after the fact? That’s because not much happened — and a new study suggests that this isn’t surprising.

    An Anticipated Approach

    G2, an object initially thought to be a gas cloud, was expected to make its closest approach to the 4.6-million-solar-mass Sgr A* in 2014. At the pericenter of its orbit, G2 was predicted to pass as close as 36 light-hours from the black hole.
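To put those numbers in perspective, here is a quick back-of-the-envelope Python sketch using standard constants and the 4.6-million-solar-mass figure quoted above, comparing the pericenter distance to the black hole’s Schwarzschild radius:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

m_bh = 4.6e6 * M_SUN               # Sgr A* mass quoted above
r_s = 2 * G * m_bh / C**2          # Schwarzschild radius, metres
light_hour = C * 3600.0            # metres in one light-hour
pericenter = 36 * light_hour       # G2's predicted closest approach

print(f"Schwarzschild radius: {r_s / light_hour:.3f} light-hours")
print(f"pericenter / r_s    : {pericenter / r_s:.0f}")  # roughly 3,000
```

Even at closest approach, G2 stayed roughly 3,000 Schwarzschild radii out: close enough to be tidally stretched, but nowhere near the event horizon.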

Log-scale column density plots from one of the authors’ simulations, showing the cloud at a time relative to periapsis (t=0) of −5, −1, 0, 1, 5, and 10 yr (left to right, top to bottom). [Morsony et al. 2017]

    This close brush with such a massive black hole was predicted to tear G2 apart, causing much of its material to accrete onto Sgr A*. It was thought that this process would temporarily increase the accretion rate onto the black hole relative to its normal background accretion rate, causing Sgr A*’s luminosity to increase for a time.

Instead, Sgr A* showed a distinct lack of fireworks, with very minimal change to its brightness after G2’s closest approach. This “cosmic fizzle” has raised questions about the nature of G2: was it really a gas cloud? What else might it have been instead? Now, a team of scientists led by Brian Morsony (University of Maryland and University of Wisconsin-Madison) has run a series of simulations of the encounter to try to address these questions.

    No Fireworks

    Morsony and collaborators ran three-dimensional hydrodynamics simulations using the FLASH code. They used a range of different simulation parameters, like cloud structure, background structure, background density, grid resolution, and accretion radius, in order to better understand how these factors might have affected the accretion rate and corresponding luminosity of Sgr A*.

Accretion rate vs. time for two of the simulations, one with a wind background and one with no wind background. The accretion rate in both cases displays no significant increase when G2 reaches periapsis. [Morsony et al. 2017]

    Based on their simulations, the authors showed that we actually shouldn’t expect G2’s encounter to have caused a significant change in Sgr A*’s accretion rate relative to its normal rate from background accretion: with the majority of the simulation parameters used, only 3–21% of the material Sgr A* accreted from 0–5 years after periapsis is from the cloud, and only 0.03–10% of the total cloud mass is accreted.

    Not Just a Cloud?

    By comparing their simulations to observations of G2 after its closest approach, Morsony and collaborators find that to fit the observations, G2 cannot be solely a gas cloud. Instead, two components are likely needed: an extended, cold, low-mass gas cloud responsible for most of the emission before G2 approached pericenter, and a very compact component such as a dusty stellar object that dominates the emission observed since pericenter.

    The authors argue that any future emission detected should no longer be from the cloud, but only from the compact core or dusty stellar object. Future observations should help us to confirm this model — but in the meantime these simulations give us a better sense of why G2’s encounter with Sgr A* was such a fizzle.

    Citation

    Brian J. Morsony et al 2017 ApJ 843 29. doi:10.3847/1538-4357/aa773d

    Related Journal Articles
    See the full article for further references with links.

See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition


    AAS Mission and Vision Statement

    The mission of the American Astronomical Society is to enhance and share humanity’s scientific understanding of the Universe.

    The Society, through its publications, disseminates and archives the results of astronomical research. The Society also communicates and explains our understanding of the universe to the public.
    The Society facilitates and strengthens the interactions among members through professional meetings and other means. The Society supports member divisions representing specialized research and astronomical interests.
    The Society represents the goals of its community of members to the nation and the world. The Society also works with other scientific and educational societies to promote the advancement of science.
    The Society, through its members, trains, mentors and supports the next generation of astronomers. The Society supports and promotes increased participation of historically underrepresented groups in astronomy.
    The Society assists its members to develop their skills in the fields of education and public outreach at all levels. The Society promotes broad interest in astronomy, which enhances science literacy and leads many to careers in science and engineering.

    Adopted June 7, 2009

     
  • richardmitnick 11:33 am on July 24, 2017 Permalink | Reply

    From CSIRO: “Extreme El Niño events to stay despite stabilisation” 

    CSIRO bloc

    Commonwealth Scientific and Industrial Research Organisation

    25 Jul 2017
    Chris Gerbing
    Communication Manager, Oceans And Atmosphere
    Phone +61 3 9545 2312
    Chris.Gerbing@csiro.au

    The frequency of extreme El Niño events is projected to increase for a further century after global mean temperature is stabilised at 1.5°C above pre-industrial levels.

Image credit: Yale.

Research published today in Nature Climate Change by an international team shows that even if warming is halted at the aspirational 1.5°C target from the Paris Agreement, the frequency of extreme El Niño events could continue to increase, due to a continuation of faster warming in the eastern equatorial Pacific.

    CSIRO researcher and lead author Dr Guojian Wang said the growing risk of extreme El Niño events did not stabilise in a stabilised climate.

    “Currently the risk of extreme El Niño events is around five events per 100 years,” Dr Wang said.

    “This doubles to approximately 10 events per 100 years by 2050, when our modelled emissions scenario (RCP 2.6) reaches a peak of 1.5°C warming.

    “After this, as faster warming in the eastern equatorial Pacific persists, the risk of extreme El Niño continues upwards to about 14 events per 100 years by 2150.

    “This result is unexpected and shows that future generations will experience greater climate risks associated with extreme El Niño events than seen at 1.5°C warming.”

    The research was based on five climate models that provided future scenarios past the year 2100.

    The models were run using the Intergovernmental Panel on Climate Change’s lowest emissions scenario (RCP2.6), which requires negative emissions late in the century.
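One way to build intuition for the quoted frequencies is to treat extreme El Niño occurrence as a Poisson process. That is a simplifying assumption (the study itself counts events in climate-model output), but it shows what 5, 10 or 14 events per century mean for any given decade:

```python
import math

def p_at_least_one(events_per_century, window_years):
    """P(>=1 extreme El Nino in the window), assuming a Poisson process."""
    rate_per_year = events_per_century / 100.0
    return 1.0 - math.exp(-rate_per_year * window_years)

for freq in (5, 10, 14):  # events per 100 years, as quoted above
    print(f"{freq}/century -> P(>=1 per decade) = {p_at_least_one(freq, 10):.0%}")
```

Under this toy model, the chance of at least one extreme El Niño in a decade rises from about 39% today to roughly 75% by 2150.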

    Director of the Centre for Southern Hemisphere Oceans Research and report co-author, Dr Wenju Cai, said that this research continues important work on the impacts of climate change on the El Niño-Southern Oscillation which is a significant driver of global climate.

    “The most severe previous extreme El Niño events occurred in 1982/83, 1997/98 and 2015/16, years associated with worldwide climate extremes,” Dr Cai said.

    “Extreme El Niño events occur when the usual El Niño Pacific rainfall centre is pushed eastward toward South America, sometimes up to 16,000 kilometres, causing massive changes in the climate. The further east the centre moves, the more extreme the El Niño.

    “This pulls rainfall away from Australia bringing conditions that have commonly resulted in intense droughts across the nation. During such events, other countries like India, Ecuador, and China have experienced extreme events with serious socio-economic consequences.”

    Dr Cai added that while previous research suggested that extreme La Niña events would double under a 4.5°C warming scenario, results here indicated that under a scenario of climate stabilisation (i.e. 1.5°C warming) there was little or no change to these La Niña events.

The research was conducted by researchers at the Hobart-based Centre for Southern Hemisphere Oceans Research, an international collaboration between CSIRO, the Qingdao National Laboratory for Marine Science and Technology, the University of New South Wales, and the University of Tasmania.

    The National Environmental Science Programme’s Earth System and Climate Change Hub co-funded this research.

See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    CSIRO campus

    CSIRO, the Commonwealth Scientific and Industrial Research Organisation, is Australia’s national science agency and one of the largest and most diverse research agencies in the world.

     
  • richardmitnick 11:03 am on July 24, 2017 Permalink | Reply
Tags: Dell EMC, Supercomputing in Australia

    From CSIRO: “CSIRO powers bionic vision research with new Dell EMC PowerEdge based artificial intelligence capability” 

    CSIRO bloc

    Commonwealth Scientific and Industrial Research Organisation

    18 Jul 2017
    Andrew Warren
    Communications Advisor
    Phone +61 7 3833 5666
    Mobile +61 416 277 695
    Andrew.Warren@csiro.au

Dell EMC Bracewell supercomputer (artist’s impression). Photographer: Brian Davis.

    CSIRO has chosen Dell EMC to build a new scientific computing capability, kicking off a new generation of research.

    News highlights

    Dell EMC partners with CSIRO to build its new artificial intelligence system, which went live early July 2017.
    The system will help Data61’s Computer Vision group use large data sets for its bionic vision solution, helping it process more images and provide contextual meaning for recipients.
    Named after Australian astronomer and engineer Ronald N. Bracewell, it is a turn-key system built on Dell EMC’s PowerEdge platform with partner GPUs for computation and InfiniBand networking.

    CSIRO will partner with Dell EMC to build a new large scale scientific computing system to expand CSIRO’s capability in deep learning, a key approach to furthering progress towards artificial intelligence.

    The new system is named ‘Bracewell’ after Ronald N Bracewell, an Australian astronomer and engineer who worked in the CSIRO Radiophysics Laboratory during World War II, and whose work led to fundamental advances in medical imaging.

In addition to artificial intelligence, the system provides capability for research in areas as diverse as virtual screening for therapeutic treatments, traffic and logistics optimisation, modelling of new material structures and compositions, and machine learning for image recognition and pattern analysis.

    CSIRO requested tenders in November 2016 to build the new system with a $4 million budget, and following Dell EMC’s successful proposal, the new system was installed in five days across May and June 2017. The system is now live and began production in early July 2017.

Greater scale and processing power enable a richer, more realistic vision solution.

    One of the first research teams to benefit from the new processing power will be Data61’s Computer Vision group, led by Associate Professor Nick Barnes. His team developed the software for a bionic vision solution that aims to restore sight for those with profound vision loss, through new computer vision processing that uses large scale image datasets to optimise and learn more effective processing.

    Bracewell will help the research team scale their software to tackle new and more advanced challenges, and deliver a richer and more robust visual experience for the profoundly vision impaired.

    “When we conducted our first human trial, participants had to be fully supervised and were mostly limited to the laboratory, but for our next trial we’re aiming to get participants out of the lab and into the real world, controlling the whole system themselves,” Associate Professor Barnes said.

    With access to this new computing capability, Professor Barnes and his team will be able to use much larger data sets to help train the software to recognise and process more images, helping deliver a greater contextual meaning to the recipient.

    “To make this a reality, we need to build vision processing systems that show accurate visualisations of the world in a broad variety of scenarios. These will enable people to visualise the world through their bionic vision in a way that enables them to safely and effectively interact in challenging visual environments,” Professor Barnes said.

    “This new system will provide greater scale and processing power we need to build our computer vision systems by optimisation of processing over broader scenarios, represented by much larger sets of images, to help train the software to understand and represent the world. We’ll be able to take our computer vision research to the next level, solving problems through leveraging large scale image data that most labs around the world aren’t able to.”

    Turnkey installation speeds time to results

Bracewell is a turn-key system built on Dell EMC’s PowerEdge platform, with partner technology including GPUs for computation and InfiniBand networking, which ties all the compute nodes together with lower latency and higher bandwidth than traditional networking.

    Dell EMC ANZ High-Performance Computing Lead, Andrew Underwood, said the installation process was streamlined and optimised for deep learning applications, with Bright Cluster Manager technology helping put these frameworks in place faster.

    “Our turn-key system removes the complexity from the installation, management and use of artificial intelligence frameworks, and has enabled CSIRO to speed up its time to market for scientific outcomes, which will in turn boost Australia’s competitiveness in the global economy,” Mr Underwood said.

The system includes:

114 x PowerEdge C4130 servers with NVIDIA P100 GPUs, NVLink, dual Intel Xeon CPUs and 100 Gbps EDR InfiniBand
Totaling (a quick arithmetic check follows this list):
1,634,304 CUDA compute cores
3,192 Xeon compute cores
29 TB RAM
13 x 100 Gbps 36-port EDR InfiniBand switch fabric
Bright Cluster Manager Software 8.0
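The published totals are mutually consistent if each C4130 node carries four P100 GPUs (3,584 CUDA cores each) and two 14-core Xeons; neither per-node figure is stated above, so treat both as inferred. A quick check in Python:

```python
nodes = 114
gpus_per_node = 4              # assumption: four P100s per C4130 node
cuda_cores_per_p100 = 3584     # CUDA cores in one NVIDIA P100
xeon_cores_per_node = 2 * 14   # assumption: dual 14-core Xeons per node

print(nodes * gpus_per_node * cuda_cores_per_p100)   # 1,634,304 CUDA cores
print(nodes * xeon_cores_per_node)                   # 3,192 Xeon cores
print(f"{29e12 / nodes / 1e9:.0f} GB RAM per node")  # ~254, i.e. 256 GB nominal
```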

    Doubling the aggregate computational power available to researchers.

    CSIRO Deputy Chief Information Officer, and Head of Scientific Computing, Angus Macoustra, said the system is crucial to the organisation’s work in identifying and solving emerging science problems.

    “This is a critical enabler for CSIRO science, engineering and innovation. As a leading global research organisation, it’s important to sustain our global competitiveness by maintaining the currency and performance of our computing and data infrastructures,” Mr Macoustra said.

    “The power of this new system is that it allows our researchers to tackle challenging workloads and ultimately enable CSIRO research to solve real-world issues. The system will nearly double the aggregate computational power available to CSIRO researchers, and will help transform the way we do scientific research and development.”

Dell EMC ANZ Senior Vice President, Commercial and Public Sector, Angela Fox, said that at Dell EMC “we’re committed to creating technologies that drive human progress.”

    “CSIRO’s research will change the way we live and work in the future for the better,” Ms Fox said. “We’re proud to play a part in evolving that work, and look forward to enabling scientific progress for years to come.”

    The system builds on Dell EMC’s work in the high-performance computing space, with the Pearcey system installed in 2016 and numerous other systems for Australian universities such as the University of Melbourne ‘Spartan’, Monash University ‘MASSIVE3’ and the University of Sydney ‘Artemis’.

See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    CSIRO campus

    CSIRO, the Commonwealth Scientific and Industrial Research Organisation, is Australia’s national science agency and one of the largest and most diverse research agencies in the world.

     
  • richardmitnick 10:29 am on July 24, 2017 Permalink | Reply
Tags: 4D camera opens panoramas, New camera designed by Stanford researchers could improve robot vision and virtual reality, Stanford Computational Imaging Lab

    From Stanford: “New [4D] camera designed by Stanford researchers could improve robot vision and virtual reality” 

    Stanford University Name
    Stanford University

    July 21, 2017
    Taylor Kubota

Assistant Professor Gordon Wetzstein, left, and postdoctoral research fellow Donald Dansereau with a prototype of the monocentric camera that captured the first single-lens panoramic light fields. (Image credit: L.A. Cicero)

A new camera that builds on technology first described by Stanford researchers more than 20 years ago could generate the kind of information-rich images that robots need to navigate the world. This camera, which generates a four-dimensional image, can also capture a field of view of nearly 140 degrees.

    “We want to consider what would be the right camera for a robot that drives or delivers packages by air. We’re great at making cameras for humans but do robots need to see the way humans do? Probably not,” said Donald Dansereau, a postdoctoral fellow in electrical engineering.

With robotics in mind, Dansereau and Gordon Wetzstein, assistant professor of electrical engineering, along with colleagues from the University of California, San Diego, have created the first-ever single-lens, wide field of view, light field camera, which they are presenting at the computer vision conference CVPR 2017 on July 23.

    As technology stands now, robots have to move around, gathering different perspectives, if they want to understand certain aspects of their environment, such as movement and material composition of different objects. This camera could allow them to gather much the same information in a single image. The researchers also see this being used in autonomous vehicles and augmented and virtual reality technologies.

“It’s at the core of our field of computational photography,” said Wetzstein. “It’s a convergence of algorithms and optics that’s facilitating unprecedented imaging systems.”

    From a peephole to a window

    The difference between looking through a normal camera and the new design is like the difference between looking through a peephole and a window, the scientists said.

    “A 2D photo is like a peephole because you can’t move your head around to gain more information about depth, translucency or light scattering,” Dansereau said. “Looking through a window, you can move and, as a result, identify features like shape, transparency and shininess.”

    That additional information comes from a type of photography called light field photography, first described in 1996 by Stanford professors Marc Levoy and Pat Hanrahan. Light field photography captures the same image as a conventional 2D camera plus information about the direction and distance of the light hitting the lens, creating what’s known as a 4D image. A well-known feature of light field photography is that it allows users to refocus images after they are taken because the images include information about the light position and direction. Robots might use this to see through rain and other things that could obscure their vision.
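A signature trick that 4D light field data enables is synthetic refocusing: shift each sub-aperture view in proportion to its offset from the central view, then average. The numpy sketch below is a toy illustration of that shift-and-sum idea, not the Stanford group’s pipeline; the array shapes and the shift model are simplified assumptions.

```python
import numpy as np

def refocus(light_field, shift):
    """Synthetically refocus a 4D light field L[u, v, y, x] by shift-and-sum.

    Each sub-aperture view (indexed by u, v) is translated in proportion to
    its offset from the central view, then all views are averaged. Varying
    `shift` moves the synthetic focal plane after capture.
    """
    U, V, H, W = light_field.shape
    uc, vc = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(shift * (u - uc)))
            dx = int(round(shift * (v - vc)))
            out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

# Hypothetical 5x5 grid of 64x64-pixel sub-aperture views
lf = np.random.rand(5, 5, 64, 64)
refocused = refocus(lf, shift=2.0)
```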

Two 138° light field panoramas and a depth estimate of the second panorama. (Image credit: Stanford Computational Imaging Lab and Photonic Systems Integration Laboratory at UC San Diego.)

The extremely wide field of view, which encompasses nearly a third of the circle around the camera, comes from a specially designed spherical lens. However, this lens also produced a significant hurdle: how to translate a spherical image onto a flat sensor. Previous approaches to solving this problem had been heavy and error-prone, but combining the optics and fabrication expertise of UCSD with the signal processing and algorithmic expertise of Wetzstein’s lab resulted in a digital solution that not only leads to the creation of these extra-wide images but enhances them.

    Robotics up close

    This camera system’s wide field of view, detailed depth information and potential compact size are all desirable features for imaging systems incorporated in wearables, robotics, autonomous vehicles and augmented and virtual reality.

Postdoctoral research fellow Donald Dansereau holds a spherical lens like the one at the heart of the panoramic light field camera, capturing rich light field information over a wide field of view. (Image credit: L.A. Cicero)

“It could enable various types of artificially intelligent technology to understand how far away objects are, whether they’re moving and what they’re made of,” said Wetzstein. “This system could be helpful in any situation where you have limited space and you want the computer to understand the entire world around it.”

    Although it can also work like a conventional camera at far distances, this camera is designed to improve close-up images. Examples where it would be particularly useful include robots that have to navigate through small areas, landing drones and self-driving cars. As part of an augmented or virtual reality system, its depth information could result in more seamless renderings of real scenes and support better integration between those scenes and virtual components.

The camera is currently a proof-of-concept and the team is planning to create a compact prototype next. That version would hopefully be small enough and light enough to test on a robot. A version that humans could wear could soon follow.

    “Many research groups are looking at what we can do with light fields but no one has great cameras. We have off-the-shelf cameras that are designed for consumer photography,” said Dansereau. “This is the first example I know of a light field camera built specifically for robotics and augmented reality. I’m stoked to put it into peoples’ hands and to see what they can do with it.”

Additional information about this camera system is available here. Additional co-authors of the paper are Glenn Schuster and Joseph Ford of UCSD. Wetzstein is also a professor, by courtesy, of computer science, a member of Stanford Bio-X and a member of the Stanford Neurosciences Institute.

    This research was funded by the NSF/Intel Partnership on Visual and Experiential Computing and DARPA.

    The Wetzstein lab is also presenting work on reconstructing transient images from single-photon sensors at the picosecond scale – one trillion frames per second – at CVPR 2017. That paper is available here with additional information here.

See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

Leland and Jane Stanford founded the University to “promote the public welfare by exercising an influence on behalf of humanity and civilization.” Stanford opened its doors in 1891, and more than a century later, it remains dedicated to finding solutions to the great challenges of the day and to preparing our students for leadership in today’s complex world. Stanford is an American private research university located in Stanford, California, on an 8,180-acre (3,310 ha) campus near Palo Alto. Since 1952, more than 54 Stanford faculty, staff, and alumni have won the Nobel Prize, including 19 current faculty members.

    Stanford University Seal

     
  • richardmitnick 10:03 am on July 24, 2017 Permalink | Reply
Tags: Where are the IceCube neutrinos coming from? (part 2)

    From astrobites: “Where are the IceCube neutrinos coming from? (part 2)” 

    Astrobites bloc

    Astrobites

    Jul 24, 2017

    Title: Constraints on Galactic Neutrino Emission with Seven Years of IceCube Data.
    Authors: The IceCube Collaboration

    U Wisconsin ICECUBE neutrino detector at the South Pole


    Status: Submitted to The Astrophysical Journal, [open access]

Back in 2013, the IceCube Collaboration published a paper [Science, November 22nd] announcing their discovery of astrophysical neutrinos, i.e. ones that have an origin outside our Solar System (Astrobites coverage). Since this discovery, scientists have been busily working to develop theories as to the origin of these neutrinos. The original paper noted some clustering in the area of the center of our Galaxy, but it was not statistically significant. Since then, both Galactic and extragalactic origins have been proposed. Star-forming galaxies have been suggested as one possible origin, and Astrobites has covered papers arguing both for and against that idea (here and here). Other theories involve radio galaxies, transients, and dark matter.

    In today’s paper, the IceCube Collaboration has analyzed more of their data and set limits on the percentage of the diffuse neutrino flux that can come from Galactic sources. Theoretically, some neutrinos should be created in the Galactic plane: we know that this area emits gamma rays from pion decay, and neutrinos are created in the same types of interactions that create the gamma rays.

    The collaboration used an unbinned maximum likelihood method as the main analysis technique in this paper. This is a standard technique used in astrophysics; it takes a model and finds the values of all the parameters of that model that give the best likelihood of getting the data that has been observed. (A second, separate technique was used as a cross-check). They used five different catalogs of Galactic sources expected to emit neutrinos to determine where to search. Sources included pulsar wind nebulae and supernovae interacting with molecular clouds. The upper limits on the flux from our galaxy can be seen below.
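As a toy illustration of what an unbinned likelihood fit does (this is not IceCube’s actual analysis, which fits source templates on the sky), here is a sketch that fits the fraction of “signal” events clustered at a known position on top of a uniform background:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
# Toy data: 30 "signal" events clustered at x = 2 over 270 uniform background events
data = np.concatenate([rng.normal(2.0, 0.5, 30), rng.uniform(0.0, 10.0, 270)])

def neg_log_likelihood(f_sig):
    """Unbinned negative log-likelihood for a signal fraction f_sig,
    with a N(2, 0.5) signal shape and a uniform background on [0, 10]."""
    sig = np.exp(-0.5 * ((data - 2.0) / 0.5) ** 2) / (0.5 * np.sqrt(2 * np.pi))
    bkg = np.full_like(data, 0.1)
    return -np.sum(np.log(f_sig * sig + (1.0 - f_sig) * bkg))

fit = minimize_scalar(neg_log_likelihood, bounds=(0.0, 1.0), method="bounded")
print(f"best-fit signal fraction: {fit.x:.3f}")  # close to the injected 0.1
```

Because no event is binned away, the method keeps the full information carried by each neutrino’s reconstructed direction and energy.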

Figure 1: Upper limits on the neutrino flux from the Galaxy, assuming a three-flavor neutrino flux and a certain emission model known as the KRA-gamma model. The red limits are from this paper (with the grey showing how the limits change if other emission models are used); the blue are from ANTARES, which is another neutrino experiment. For comparison, the measured overall neutrino flux is also shown (black data points and the yellow band). The green band is from the data, but only data from the northern sky is used. IceCube is more sensitive in the Northern hemisphere. (Source: Figure 2 of the paper.)

    It turns out that, under these assumptions, Galactic contributions can’t be more than 14% of the diffuse neutrino flux. However, the authors note that there are still scenarios where the flux could originate in/near the Galaxy. This paper focused on emission in the Galactic plane, but cosmic ray interactions in a gas halo far from the plane, and/or dark matter annihilation or decay would change the emission templates that were used here. They also mention that the limits could be made stronger by doing a joint analysis with ANTARES.

ANTARES Neutrino Telescope, a large-area underwater Cherenkov detector in the deep Mediterranean Sea, 40 km off the coast of Toulon, France

Since IceCube and ANTARES are located in different hemispheres, they are most sensitive in different areas of the sky. The mystery continues…

See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    What do we do?

    Astrobites is a daily astrophysical literature journal written by graduate students in astronomy. Our goal is to present one interesting paper per day in a brief format that is accessible to undergraduate students in the physical sciences who are interested in active research.
    Why read Astrobites?

    Reading a technical paper from an unfamiliar subfield is intimidating. It may not be obvious how the techniques used by the researchers really work or what role the new research plays in answering the bigger questions motivating that field, not to mention the obscure jargon! For most people, it takes years for scientific papers to become meaningful.
    Our goal is to solve this problem, one paper at a time. In 5 minutes a day reading Astrobites, you should not only learn about one interesting piece of current work, but also get a peek at the broader picture of research in a new area of astronomy.

     
  • richardmitnick 9:28 am on July 24, 2017 Permalink | Reply
Tags: Native mass spectrometry, Signaling islands in cells: targets for precision drug design, U Washington Health Beat

    From U Washington: “Signaling islands in cells: targets for precision drug design” 

    U Washington

    University of Washington

    07.13.2017
    Leila Gray

A critical component of the cell signaling system, anchored protein kinase A, has some flexible molecular parts, allowing it to both contract and stretch, with floppy arms that can reach out to find appropriate targets. John Scott Lab.

    Research results reported in the journal Science overturn long-held views on a basic messaging system within living cells.

    The findings suggest new approaches to designing precisely targeted drugs for cancer and other serious diseases.

    Dr. John D. Scott, professor and chair of pharmacology at the University of Washington School of Medicine and a Howard Hughes Medical Institute Investigator, along with Dr. F. Donelson Smith of the UW and HHMI, led this study, which also involved Drs. Claire and Patrick Eyers and their group at the University of Liverpool. Visit the Scott lab web site, Cell Signaling in Space and Time.

    The researchers explained that key cellular communication machinery is more regionally constrained inside the cell than was previously thought. Communication via this vital system is akin to social networking on your Snapchat account.

    Within a cell, the precise positioning of such messaging components allows hormones, the body’s chief chemical communicators, to transmit information to exact places inside the cell. Accurate and very local activation of the enzyme that Scott and his group study helps assure a correct response occurs in the right place and at the right time.

“The inside of a cell is like a crowded city,” said Scott. “It is a place of construction and tearing down, goods being transported and trash being recycled, countless messages (such as the ones we have discovered), assembly lines flowing, and packages moving. Strategically switching on signaling enzyme islands allows these biochemical activities to keep the cell alive and is important to protect against the onset of chronic diseases such as diabetes, heart disease and certain cancers.”

    Advances in electron microscopy and native mass spectrometry enabled the researchers to determine that a critical component of the signaling system, anchored protein kinase A, remains intact during activation. Parts of the molecule are flexible, allowing it to both contract and stretch, with floppy arms that can reach out to find appropriate targets.

    Still, where the molecule performs its act, space is tight. The distance is, in fact, about the width of two proteins inside the cell.

Green circled areas show where the enzyme in the signaling study is active in mitochondria, the powerhouses of living cells. John D. Scott.

“We realize that in designing drugs to reach such targets that they will have to work within very narrow confines,” Scott said.

    One of his group’s collective goals is figuring out how to deliver precision drugs to the right address within this teeming cytoplasmic metropolis.

    “Insulating the signal so that the drug effect can’t happen elsewhere in the cell is an equally important aspect of drug development because it could greatly reduce side effects,” Scott said.

    An effort to take this idea of precision medicine a step further is part of the Institute for Targeted Therapeutics at UW Medicine in Seattle. The institute is being set up by Scott and his colleagues in the UW Department of Pharmacology.

    The scientists are collaborating with cancer researchers to better understand the molecular causes — and possible future treatments — for a certain liver malignancy. This particular liver cancer arises from a mutation that produces an abnormal form of the enzyme that is the topic of this current work, protein kinase A, and alters the enzyme’s role in cell signaling.

    Other advances that gave the researchers a clearer view of the signaling mechanisms reported in Science include CRISPR gene editing, live-cell imaging techniques, and more powerful ways to look at all components of a protein complex.

See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    u-washington-campus
    The University of Washington is one of the world’s preeminent public universities. Our impact on individuals, on our region, and on the world is profound — whether we are launching young people into a boundless future or confronting the grand challenges of our time through undaunted research and scholarship. Ranked number 10 in the world in Shanghai Jiao Tong University rankings and educating more than 54,000 students annually, our students and faculty work together to turn ideas into impact and in the process transform lives and our world. For more about our impact on the world, every day.
So what defines us, the students, faculty and community members at the University of Washington? Above all, it’s our belief in possibility and our unshakable optimism. It’s a connection to others, both near and far. It’s a hunger that pushes us to tackle challenges and pursue progress. It’s the conviction that together we can create a world of good. Join us on the journey.

     
  • richardmitnick 7:57 am on July 24, 2017 Permalink | Reply
Tags: Gamma ray telescopes, How non-optical telescopes see the universe, Infrared telescopes, Optical telescopes, Pair production telescope, Ultraviolet telescopes, X-ray telescopes

    From COSMOS: “How non-optical telescopes see the universe” 

    Cosmos Magazine bloc

    COSMOS Magazine

    24 July 2017
    Jake Port

    The human eye can only see a tiny band of the electromagnetic spectrum. That tiny band is enough for most day-to-day things you might want to do on Earth, but stars and other celestial objects radiate energy at wavelengths from the shortest (high-energy, high-frequency gamma rays) to the longest (low-energy, low-frequency radio waves).

The electromagnetic spectrum is made up of radiation of all frequencies and wavelengths. Only a tiny range is visible to the human eye. NASA.
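Two standard relations tie the whole spectrum together: c = fλ and E = hf. A small Python sketch converting a few representative wavelengths:

```python
H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

def photon(wavelength_m):
    """Frequency (Hz) and photon energy (eV) for a given wavelength."""
    freq = C / wavelength_m
    return freq, H * freq / EV

for name, wl in [("gamma ray", 1e-12), ("green light", 550e-9), ("radio", 1.0)]:
    f, e = photon(wl)
    print(f"{name:12s} lambda = {wl:g} m   f = {f:.2e} Hz   E = {e:.2e} eV")
```

The roughly twelve orders of magnitude in photon energy between a gamma ray and a radio wave is why each band needs its own style of telescope.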

    Beyond the visible spectrum

    To see what’s happening in the distant reaches of the spectrum, astronomers use non-optical telescopes. There are several varieties, each specialised to catch radiation of particular wavelengths.

Non-optical telescopes use many of the same principles as regular telescopes, but also employ a variety of techniques to convert invisible light into spectacular imagery. In all cases, a detector captures the image rather than an eyepiece, with a computer then processing the data and constructing the final image.

    There are also more exotic ways of looking at the universe that don’t use electromagnetic radiation at all, like neutrino telescopes and the cutting-edge gravitational wave telescopes, but they’re a separate subject of their own.

    To start off, let’s go straight to the top with the highest-energy radiation, gamma rays.

    Gamma ray telescopes

Gamma radiation is generally defined as radiation of wavelengths less than 10⁻¹¹ m, or a hundredth of a nanometre.

    Gamma-ray telescopes focus on the highest-energy phenomena in the universe, such as black holes and exploding stars. A high-energy gamma ray may contain a billion times as much energy as a photon of visible light, which can make them difficult to study.

Unlike photons of visible light, which can be redirected using mirrors and reflectors, gamma rays simply pass through most materials. This means that gamma-ray telescopes must use sophisticated techniques that track the movement of individual gamma rays to construct an image.

    One technology that does this, in use in the Fermi Gamma-ray Space Telescope among other places, is called a pair production telescope.

    NASA/Fermi Telescope

    It uses a multi-layer sandwich of converter and detector materials. When a gamma ray enters the front of the detector it hits a converter layer, made of dense material such as lead, which causes the gamma-ray to produce an electron and a positron (known as a particle-antiparticle pair).

    The electron and the positron then continue to traverse the telescope, passing through layers of detector material. These layers track the movement of each particle by recording slight bursts of electrical charge along the layer. This trail of bursts allows astronomers to reconstruct the energy and direction of the original gamma ray. Tracing back along that path points to the source of the ray out in space. This data can then be used to create an image.
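In cartoon form, the reconstruction works because energy and momentum are conserved in pair production: the pair’s summed energy approximates the photon’s energy, and the summed momentum vector points back along the photon’s path. A hedged toy sketch (real instruments must also correct for multiple scattering and calorimeter response; the momentum values here are hypothetical):

```python
import numpy as np

def reconstruct_gamma(p_electron, p_positron):
    """Toy reconstruction from the pair's momentum vectors (MeV/c).

    Ultra-relativistic particles have |p| ~ E, so the summed magnitudes
    approximate the photon energy and the summed vector gives its direction.
    """
    p_e, p_p = np.asarray(p_electron), np.asarray(p_positron)
    p_total = p_e + p_p
    energy = np.linalg.norm(p_e) + np.linalg.norm(p_p)
    return energy, p_total / np.linalg.norm(p_total)

# Hypothetical tracks reconstructed from the detector layers
energy, direction = reconstruct_gamma([120.0, 5.0, 2.0], [95.0, -4.0, 1.0])
print(f"E ~ {energy:.0f} MeV, arrival direction {direction.round(3)}")
```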

    The video below shows how this works in the space-based Fermi Large Area Telescope.

    NASA/Fermi LAT

    X-ray telescopes

    X-rays are radiation with wavelengths between 10 nanometres and 0.01 nanometres. They are used every day to image broken bones and scan suitcases in airports and can also be used to image hot gases floating in space. Celestial gas clouds and remnants of the explosive deaths of large stars, known as supernovas, are the focus of X-ray telescopes.

    Like gamma rays, X-rays are a high-energy form of radiation that can pass straight through most materials. To catch X-rays you need to use materials that are very dense.

    X-ray telescopes often use highly reflective mirrors that are coated with dense metals such as gold, nickel or iridium. Unlike optical mirrors, which can bounce light in any direction, these mirrors can only slightly deflect the path of the X-ray. The mirror is orientated almost parallel to the direction of the incoming X-rays. The X-rays lightly graze the mirror before moving on, a little like a stone skipping on a pond. By using lots of mirrors, each changing the direction of the radiation by a small amount, enough X-rays can be collected at the detector to produce an image.

    To maximise image quality the mirrors are loosely stacked, creating an internal structure resembling the layers of an onion.
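The geometry explains the onion design: each grazing reflection turns a ray by only twice the grazing angle, and Wolter-type X-ray optics reflect each ray twice, so the focus sits at roughly f ≈ r / tan(4α) for a shell of radius r and grazing angle α. A sketch with illustrative numbers of the same order as Chandra’s outermost mirror shell:

```python
import math

def focal_length_m(shell_radius_m, grazing_angle_deg):
    """Wolter-type optics reflect each ray twice; each grazing reflection
    deflects it by twice the grazing angle, so the total turn is ~4*alpha
    and the focus sits at roughly f ~ r / tan(4*alpha)."""
    return shell_radius_m / math.tan(math.radians(4 * grazing_angle_deg))

# Illustrative values: a 0.6 m shell radius at a 0.85 degree grazing angle
print(f"{focal_length_m(0.6, 0.85):.1f} m")  # ~10 m of focal length
```

The tiny grazing angles are why such telescopes end up long and thin, with many nested shells needed to collect enough photons.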

Diagram showing how ‘grazing incidence’ mirrors are used in X-ray telescopes. NASA.

    NASA/Chandra X-ray Telescope

    ESA/XMM Newton X-ray telescope

    NASA NuSTAR X-ray telescope


    Ultraviolet telescopes

    Ultraviolet light is radiation with wavelengths just too short to be visible to human eyes, between 400 nanometres and 0.01 nanometres. It has less energy than X-rays and gamma rays, and ultraviolet telescopes are more like optical ones.

    Mirrors coated in materials that reflect UV radiation, such as silicon carbide, can be used to redirect and focus incoming light. The Hopkins Ultraviolet Telescope, which flew two short missions aboard the space shuttle in the 1990s, used a parabolic mirror coated with this material.

A schematic of the Hopkins Ultraviolet Telescope. NASA.

NASA Hopkins Ultraviolet Telescope, which flew aboard the space shuttle

As the redirected light reaches the focal point, a central point where all light beams converge, it is detected using a spectrograph. This specialised device can separate the UV light into individual wavelength bands, in a way akin to splitting visible light into a rainbow.

Analysis of the resulting spectrum can indicate what the observation target is made of. This allows astronomers to analyse the composition of interstellar gas clouds, galactic centres and planets in our solar system. This can be particularly useful when looking for elements essential to carbon-based life, such as oxygen and carbon.
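One common way a spectrograph disperses light is with a diffraction grating, governed by mλ = d sin θ: different wavelengths leave the grating at different angles and land at different spots on the detector. A sketch (the groove density and wavelengths are illustrative, not the Hopkins instrument’s actual values):

```python
import math

def diffraction_angle_deg(wavelength_nm, lines_per_mm, order=1):
    """Grating equation m*lambda = d*sin(theta), solved for theta."""
    groove_spacing_nm = 1e6 / lines_per_mm
    return math.degrees(math.asin(order * wavelength_nm / groove_spacing_nm))

# Two far-UV wavelengths land at clearly different angles on the detector
for wl_nm in (120, 160):
    print(f"{wl_nm} nm -> {diffraction_angle_deg(wl_nm, 3600):.1f} degrees")
```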

    Optical telescopes

    Optical telescopes are used to view the visible spectrum: wavelengths roughly between 400 and 700 nanometres. See separate article here.


    Keck Observatory, Maunakea, Hawaii, USA

    ESO/VLT at Cerro Paranal, with an elevation of 2,635 metres (8,645 ft) above sea level

    Gran Telescopio Canarias at the Roque de los Muchachos Observatory on the island of La Palma, in the Canaries, Spain, sited on a volcanic peak 2,267 metres (7,438 ft) above sea level

    Gemini/North telescope at Maunakea, Hawaii, USA

    Gemini South telescope, Cerro Tololo Inter-American Observatory (CTIO) campus near La Serena, Chile

    Infrared telescopes

    Sitting just below visible light on the electromagnetic spectrum is infrared light, with wavelengths between 700 nanometres and 1 millimetre.

    It’s used in night vision goggles, heaters and tracking devices as found in heat-seeking missiles. Any object or material that is hotter than absolute zero will emit some amount of infrared radiation, so the infrared band is a useful window to look at the universe through.

    Much infrared radiation is absorbed by water vapour in the atmosphere, so infrared telescopes are usually at high altitudes in dry places or even in space, like the Spitzer Space Telescope.

    Infrared telescopes are often very similar to optical ones. Mirrors and reflectors are used to direct the infrared light to a detector at the focal point. The detector registers the incoming radiation, which a computer then converts into a digital image.
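Wien’s displacement law, λ_peak = b/T, shows why so much of the sky lives in this band: everything from interstellar dust to warm bodies emits most strongly in the infrared. A quick sketch:

```python
WIEN_B = 2.898e-3  # Wien displacement constant, m*K

def peak_wavelength_um(temperature_k):
    """Wavelength (micrometres) at which a blackbody at T kelvin emits most strongly."""
    return WIEN_B / temperature_k * 1e6

for name, t_k in [("human body", 310), ("interstellar dust", 30), ("cool star", 3000)]:
    print(f"{name}: spectrum peaks near {peak_wavelength_um(t_k):.1f} micrometres")
```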

    NASA/Spitzer Infrared Telescope

    Radio telescopes

    At the far end of the electromagnetic spectrum we find the radio waves, with frequencies less than 1000 megahertz and wavelengths of a metre and more. Radio waves penetrate the atmosphere easily, unlike higher-frequency radiation, so ground-based observatories can catch them.

    Radio telescopes feature three main components that each play an important role in capturing and processing incoming radio signals.

The first is the massive antenna or ‘dish’ that faces the sky. The Parkes radio telescope in New South Wales, Australia, for instance, has a dish with a diameter of 64 metres, while the Five-hundred-metre Aperture Spherical Telescope (FAST) in southwest China has a whopping 500-metre diameter.

    The great size allows for the collection of long wavelengths and very quiet signals. The dish is parabolic, directing radio waves collected over a large area to be focused to a receiver sitting in front of the dish. The larger the antenna, the weaker the radio source that can be detected, allowing larger telescopes to see more distant and faint objects billions of light years away.
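Dish size also sets angular resolution through the diffraction limit, θ ≈ 1.22 λ/D, which at radio wavelengths demands enormous apertures. A sketch at the 21 cm hydrogen line for the two dishes just mentioned:

```python
import math

def resolution_arcmin(wavelength_m, dish_diameter_m):
    """Diffraction-limited beam width, theta ~ 1.22 * lambda / D (radians)."""
    return math.degrees(1.22 * wavelength_m / dish_diameter_m) * 60

# The 21 cm hydrogen line on the two dishes mentioned above
for name, d in [("Parkes (64 m)", 64), ("FAST (500 m)", 500)]:
    print(f"{name}: ~{resolution_arcmin(0.21, d):.1f} arcmin beam")
```

Even a 64-metre dish resolves only about a quarter of a degree at 21 cm, which is why radio astronomers also link dishes together as interferometers.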

    The receiver works with an amplifier to boost the very weak radio signal to make it strong enough for measurement. Receivers today are so sensitive that they use powerful coolers to minimise thermal noise generated by the movement of atoms in the metal of the structure.

    Finally, a recorder stores the radio signal for later processing and analysis.

    Radio telescopes are used to observe a wide array of subjects, including energetic pulsar and quasar systems, galaxies, nebulae, and of course to listen out for potential alien signals.

    CSIRO/Parkes Observatory, located 20 kilometres north of the town of Parkes, New South Wales, Australia



    GBO radio telescope, Green Bank, West Virginia, USA

See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 7:01 am on July 24, 2017 Permalink | Reply
Tags: ESO Messenger Issue 168

    From ESO: Messenger Issue 168 Available in the ESOshop 

    ESO 50 Large

    European Southern Observatory

    ESO Messenger Issue 168 is available for purchase in the ESOshop

Price: € 1,99 in the ESOshop

    The latest edition of ESO’s quarterly journal, The Messenger, is now available online. Find out the latest news from ESO on topics ranging from new instruments to the latest science discoveries.

    Highlights of this edition include:

    A Long Expected Party — The First Stone Ceremony for the Extremely Large Telescope
    The Adaptive Optics Facility: Commissioning Progress and Results
    The Cherenkov Telescope Array: Exploring the Very-high-energy Sky from ESO’s Paranal Site
    Towards a Sharper Picture of R136 with SPHERE Extreme Adaptive Optics
    The VIMOS Public Extragalactic Redshift Survey (VIPERS): Science Highlights and Final Data Release

    Download The Messenger in PDF format or visit The Messenger website to subscribe and receive a free printed copy.

See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition
Visit ESO on social media:

    Facebook

    Twitter

    YouTube

    ESO Bloc Icon

    ESO is the foremost intergovernmental astronomy organisation in Europe and the world’s most productive ground-based astronomical observatory by far. It is supported by 16 countries: Austria, Belgium, Brazil, the Czech Republic, Denmark, France, Finland, Germany, Italy, the Netherlands, Poland, Portugal, Spain, Sweden, Switzerland and the United Kingdom, along with the host state of Chile. ESO carries out an ambitious programme focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope, the world’s most advanced visible-light astronomical observatory and two survey telescopes. VISTA works in the infrared and is the world’s largest survey telescope and the VLT Survey Telescope is the largest telescope designed to exclusively survey the skies in visible light. ESO is a major partner in ALMA, the largest astronomical project in existence. And on Cerro Armazones, close to Paranal, ESO is building the 39-metre European Extremely Large Telescope, the E-ELT, which will become “the world’s biggest eye on the sky”.

    ESO LaSilla
    ESO/Cerro LaSilla 600 km north of Santiago de Chile at an altitude of 2400 metres.

    ESO VLT
    VLT at Cerro Paranal, with an elevation of 2,635 metres (8,645 ft) above sea level.

    ESO Vista Telescope
    ESO/Vista Telescope at Cerro Paranal, with an elevation of 2,635 metres (8,645 ft) above sea level.

    ESO NTT
    ESO/NTT at Cerro LaSilla 600 km north of Santiago de Chile at an altitude of 2400 metres.

    ESO VLT Survey telescope
    VLT Survey Telescope at Cerro Paranal with an elevation of 2,635 metres (8,645 ft) above sea level.

    ALMA Array
    ALMA on the Chajnantor plateau at 5,000 metres.

    ESO E-ELT
    ESO/E-ELT to be built at Cerro Armazones at 3,060 m.

    ESO APEX
    APEX Atacama Pathfinder 5,100 meters above sea level, at the Llano de Chajnantor Observatory in the Atacama desert.

    Leiden MASCARA instrument, La Silla, located in the southern Atacama Desert 600 kilometres (370 mi) north of Santiago de Chile at an altitude of 2,400 metres (7,900 ft)

    Leiden MASCARA cabinet at ESO Cerro la Silla located in the southern Atacama Desert 600 kilometres (370 mi) north of Santiago de Chile at an altitude of 2,400 metres (7,900 ft)

     
  • richardmitnick 11:35 am on July 23, 2017 Permalink | Reply
Tags: Hot gas in the center of the Milky Way, Universo Magico

From Universo Magico: “Hot gas in the center of the Milky Way”

    Universo Magico

    July 23, 2017
    Juan Carlos


This image was produced by combining 12 Chandra X-ray Observatory observations of a region spanning 130 light-years at the center of the Milky Way.

    NASA/Chandra Telescope

The colors represent X-ray energy: red for low energies, green for medium, and blue for high. Thanks to Chandra’s unique resolving power, astronomers have been able to identify thousands of X-ray sources, including neutron stars, black holes, white dwarfs, foreground stars and background galaxies. What remains is a diffuse X-ray glow extending from the upper left to the lower right, along the direction of the Galactic disk. The spectrum of the diffuse glow is consistent with a cloud of hot gas containing two components: gas at 10 million degrees Celsius and gas at 100 million degrees. The diffuse X-rays appear to be the brightest part of a ridge of X-ray emission thousands of light-years across, running along the disk of the Galaxy. The extent of this ridge implies that the diffuse hot gas in this image is probably not being heated by the supermassive black hole at the center of the Milky Way, known to astronomers as Sagittarius A*.
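The two quoted temperatures map neatly onto the image’s color bands through the thermal energy kT. A quick conversion sketch:

```python
K_B_EV = 8.617e-5  # Boltzmann constant, eV per kelvin

def kt_kev(temperature_k):
    """Characteristic thermal photon energy kT, in keV, for gas at T kelvin.
    (At these temperatures the Celsius/kelvin distinction is negligible.)"""
    return K_B_EV * temperature_k / 1e3

for t in (1e7, 1e8):  # the 10-million- and 100-million-degree components
    print(f"T = {t:.0e} K -> kT ~ {kt_kev(t):.1f} keV")
```

Gas at 10 million degrees emits mostly soft, roughly 1 keV X-rays (the red band), while the 100-million-degree component emits hard, roughly 10 keV X-rays (the blue band).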

Shock waves from supernova explosions are the most likely explanation for heating the gas to 10 million degrees, but it is not known what heats the gas to 100 million degrees. Ordinary supernova shock waves would not produce enough very-high-energy particles and would yield the wrong X-ray spectrum. Moreover, the observed Galactic magnetic field appears to rule out heating and confinement by magnetic turbulence. It is possible that the high-energy component of the hot gas only appears diffuse and is, in fact, the combined glow of an as-yet-undetected population of point sources, just as the diffuse lights of a city blend together when seen at a great distance. The difficulty with this explanation is that about 200,000 sources would be required in the observed region. Such a large population of undetected sources would produce an X-ray glow much softer than is observed. In addition, there is no known class of objects that could account for so many high-energy X-ray sources in the center of the Milky Way.

See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     