Tagged: isgtw

  • richardmitnick 11:04 am on August 27, 2015 Permalink | Reply
    Tags: Digital seafloor maps, isgtw

    From isgtw: “World’s first digital ocean floor map” 


    international science grid this week

    August 26, 2015


    Download mp4 here.

    Researchers from the University of Sydney’s School of Geosciences in Australia have created the world’s first digital map of the seafloor. Understanding how the ocean — the Earth’s largest storehouse of carbon — relates to the seabed is critical to understanding how climate change will affect the ocean environment.

    “In order to understand environmental change in the oceans we need to better understand the seabed,” says lead researcher Dr. Adriana Dutkiewicz. “Our research opens the door to a better understanding of the workings and history of the marine carbon cycle. We urgently need to understand how the ocean responds to climate change.”

    The last seabed map was hand-drawn more than 40 years ago. Using an artificial intelligence method called the support vector machine, experts at National ICT Australia (NICTA) turned an assemblage of seafloor descriptions and sediment samples collected since the 1950s into a single contiguous digital map.
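
    As a rough illustration of how a support vector machine turns scattered point samples into a continuous map — a minimal sketch assuming scikit-learn, with made-up features (longitude, latitude, depth) and sediment classes rather than NICTA’s actual inputs:

        # Hypothetical sketch: classify seafloor sediment type from point samples
        # and paint the prediction onto a regular grid (the "digital map").
        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        # Pretend training data: [longitude, latitude, depth_m] per sediment sample
        X_train = rng.uniform([-180, -60, 0], [180, 60, 6000], size=(500, 3))
        y_train = rng.integers(0, 3, size=500)          # three made-up sediment classes

        clf = SVC(kernel="rbf", C=10.0, gamma="scale")  # the support vector machine
        clf.fit(X_train, y_train)

        # Predict a class for every cell of a coarse longitude/latitude grid
        lon, lat = np.meshgrid(np.linspace(-180, 180, 72), np.linspace(-60, 60, 24))
        depth = np.full(lon.size, 3000.0)               # placeholder depth layer
        grid = np.column_stack([lon.ravel(), lat.ravel(), depth])
        seafloor_map = clf.predict(grid).reshape(lon.shape)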

    “The difference between the new and old map is a little like comparing a barren tundra landscape with an exotic tropical paradise full of diversity,” says Dutkiewicz. “The ocean floor used to be portrayed as a monotonous seascape whereas the new map echoes the colorful patchworks of dreamtime art.”

    The map data can be downloaded for free [I got the download, but could not find a program to open it], and you can see the dreamy interactive 3D globe here.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    iSGTW is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, iSGTW is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read iSGTW via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 4:13 pm on August 17, 2015 Permalink | Reply
    Tags: isgtw

    From isgtw: “Simplifying and accelerating genome assembly” 


    international science grid this week

    August 12, 2015
    Linda Vu

    To extract meaning from a genome, scientists must reconstruct portions — a time-consuming process akin to rebuilding the sentences and paragraphs of a book from snippets of text. But by applying novel algorithms and high-performance computational techniques to the cutting-edge de novo genome assembly tool Meraculous, a team of scientists has simplified and accelerated genome assembly — reducing a months-long process to mere minutes.

    “The new parallelized version of Meraculous shows unprecedented performance and efficient scaling up to 15,360 processor cores for the human and wheat genomes on NERSC’s Edison supercomputer,” says Evangelos Georganas. “This performance improvement sped up the assembly workflow from days to seconds.” Courtesy NERSC.

    Researchers from the Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have made this gain by ‘parallelizing’ the DNA code — sometimes billions of bases long — to harness the processing power of supercomputers, such as the US Department of Energy’s National Energy Research Scientific Computing Center’s (NERSC’s) Edison system. (Parallelizing means splitting up tasks to run on the many nodes of a supercomputer at once.)

    “Using the parallelized version of Meraculous, we can now assemble the entire human genome in about eight minutes,” says Evangelos Georganas, a UC Berkeley graduate student. “With this tool, we estimate that the output from the world’s biomedical sequencing capacity could be assembled using just a portion of the Berkeley-managed NERSC’s Edison supercomputer.”

    Supercomputers: A game changer for assembly

    High-throughput next-generation DNA sequencers allow researchers to look for biological solutions — and for the most part, these machines are very accurate at recording the sequence of DNA bases. Sometimes errors do occur, however. These errors complicate analysis by making it harder to assemble genomes and identify genetic mutations. They can also lead researchers to misinterpret the function of a gene.

    Researchers use a technique called shotgun sequencing to identify these errors. This involves taking numerous copies of a DNA strand, breaking it up into random smaller pieces and then sequencing each piece separately. For a particularly complex genome, this process can generate several terabytes of data.

    To identify data errors quickly and effectively, the Berkeley Lab and UC Berkeley team use ‘Bloom filters’ and massively parallel supercomputers. “Applying Bloom filters has been done before, but what we have done differently is to get Bloom filters to work with distributed memory systems,” says Aydin Buluç, a research scientist in Berkeley Lab’s Computational Research Division (CRD). “This task was not trivial; it required some computing expertise to accomplish.”
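
    The article doesn’t show the implementation, but the idea behind a Bloom filter — a compact, probabilistic “have we seen this k-mer before?” set that can return false positives but never false negatives — can be sketched in a few lines. This is a toy, single-node version; the Berkeley work distributes the filter across nodes.

        import hashlib

        class BloomFilter:
            def __init__(self, n_bits=1 << 20, n_hashes=4):
                self.n_bits = n_bits
                self.n_hashes = n_hashes
                self.bits = bytearray(n_bits // 8)      # one big bit array

            def _positions(self, item):
                for i in range(self.n_hashes):
                    h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                    yield int.from_bytes(h[:8], "big") % self.n_bits

            def add(self, item):
                for p in self._positions(item):
                    self.bits[p // 8] |= 1 << (p % 8)

            def __contains__(self, item):
                # May return a false positive, never a false negative.
                return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

        seen = BloomFilter()
        seen.add("ACGTACGTACGTACGTACGTA")
        print("ACGTACGTACGTACGTACGTA" in seen)   # True
        print("TTTTTTTTTTTTTTTTTTTTT" in seen)   # almost certainly False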

    The team also developed solutions for parallelizing data input and output (I/O). “When you have several terabytes of data, just getting the computer to read your data and output results can be a huge bottleneck,” says Steven Hofmeyr, a research scientist in CRD who developed these solutions. “By allowing the computer to download the data in multiple threads, we were able to speed up the I/O process from hours to minutes.”

    The assembly process

    Once errors are removed, researchers can begin the genome assembly. This process relies on computer programs to join k-mers — short DNA sequences consisting of a fixed number (K) of bases — at overlapping regions, so they form a continuous sequence, or contig. If the genome has previously been sequenced, scientists can use the recorded gene annotations of a reference genome to align the reads. If not, they need to create a whole new catalog of contigs through de novo assembly.
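
    A toy illustration of the k-mer-and-contig idea (not the Meraculous algorithm itself), using made-up reads:

        # Split reads into k-mers and walk unambiguous (k-1)-base overlaps to
        # extend a contig. Real assemblers handle errors, reverse complements,
        # and branching far more carefully.
        from collections import defaultdict

        def kmers(read, k):
            return [read[i:i + k] for i in range(len(read) - k + 1)]

        reads = ["ACGTACGGT", "GTACGGTTA", "CGGTTACCA"]
        k = 5
        graph = defaultdict(set)          # (k-1)-base prefix -> possible next k-mers
        for r in reads:
            for km in kmers(r, k):
                graph[km[:-1]].add(km)

        contig = "ACGTA"                  # seed with a starting k-mer
        while True:
            nxt = graph.get(contig[-(k - 1):], set())
            if len(nxt) != 1:             # stop at branches or dead ends
                break
            contig += next(iter(nxt))[-1]
        print(contig)                      # "ACGTACGGTTACCA" — the three reads merged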

    “If assembling a single genome is like piecing together one novel, then assembling metagenomic data is like rebuilding the Library of Congress,” says Jarrod Chapman. Pictured: Human Chromosomes. Courtesy Jane Ades, National Human Genome Research Institute.

    De novo assembly is memory-intensive, and until recently was resistant to parallelization in distributed memory. Many researchers turned to specialized large-memory nodes, several terabytes in size, to do this work, but even the largest commercially available memory nodes are not big enough to assemble massive genomes. Even with supercomputers, it still took hours, days, or even months to assemble a single genome.

    To make efficient use of massively parallel systems, Georganas created a novel algorithm for de novo assembly that takes advantage of the one-sided communication and Partitioned Global Address Space (PGAS) capabilities of the UPC (Unified Parallel C) programming language. PGAS lets researchers treat the physically separate memories of each supercomputer node as one address space, reducing the time and energy spent swapping information between nodes.
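
    PGAS is a language-level feature of UPC, but the core idea — each k-mer has a single owner location in a global address space, chosen by hashing, that any node can read or write directly — can be mimicked conceptually. This is a sketch only, not the actual UPC implementation:

        # Conceptual sketch of a partitioned global address space for k-mers:
        # a hash of the key decides which node "owns" it, so any node can read
        # or write the entry without coordinating with the others.
        N_NODES = 4
        partitions = [dict() for _ in range(N_NODES)]    # one store per node

        def owner(kmer):
            return hash(kmer) % N_NODES

        def put(kmer, value):
            partitions[owner(kmer)][kmer] = value        # one-sided "remote" write

        def get(kmer):
            return partitions[owner(kmer)].get(kmer)     # one-sided "remote" read

        put("ACGTA", {"count": 3, "extensions": "CG"})
        print(owner("ACGTA"), get("ACGTA"))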

    Tackling the metagenome

    Now that computation is no longer a bottleneck, scientists can try a number of different parameters and run as many analyses as necessary to produce very accurate results. This breakthrough means that Meraculous could also be used to analyze metagenomes — microbial communities recovered directly from environmental samples. This work is important because many microbes exist only in nature and cannot be grown in a laboratory. These organisms may be the key to finding new medicines or viable energy sources.

    “Analyzing metagenomes is a tremendous effort,” says Jarrod Chapman, who developed Meraculous at the US Department of Energy’s Joint Genome Institute (managed by the Berkeley Lab). “If assembling a single genome is like piecing together one novel, then assembling metagenomic data is like rebuilding the Library of Congress. Using Meraculous to effectively do this analysis would be a game changer.”

    –iSGTW is becoming the Science Node. Watch for our new branding and website this September.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    iSGTW is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, iSGTW is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read iSGTW via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 8:31 am on August 14, 2015 Permalink | Reply
    Tags: Big Data, isgtw

    From isgtw: “EarthServer: Big Earth data at your fingertips becomes a reality” 


    international science grid this week

    August 12, 2015
    No Writer Credit

    An image of Europe created using Envisat’s Medium Resolution Imaging Spectrometer [MERIS]. Image courtesy ESA.

    ESA Envisat
    MERIS is on the ESA Envisat spacecraft

    Disciplines like geology, oceanography, and astronomy generate vast quantities of data. Yet without the right tools, scientists either drown in this sea of big Earth data or the data sits in an archive, barely used.

    The vision of the EarthServer project is to offer researchers ‘big Earth data at your fingertips’, so that they can access and manipulate enormous data sets with just a few mouse clicks.

    “The project was the result of a ‘push’ and a ‘pull’,” says project coordinator Peter Baumann, professor of computer science at Jacobs University in Bremen, Germany. “On the demand side there was a need for new concepts to handle the wave of data crashing down on us. On the supply side we had a data cube technology that is well-suited to this domain.” A data cube is a three- (or higher) dimensional array of values, commonly used to describe a time series of image data.

    Data cubes help researchers access and visualize data

    EarthServer built advanced data cubes and custom web portals to make it possible for researchers to extract and visualize earth sciences data as 3D cubes, 2D maps, or 1D diagrams. The British Geological Survey, for example, used EarthServer technology to drill down through different layers of the Earth in 3D.

    “For the user, data cubes hide the unnecessary complexity of the data,” says Baumann. “As a user, I don’t want to see a million files: I want to see a few data cubes.”

    Data in the Earth sciences often takes the form of sensor recordings, images, simulation outputs, and statistical measurements — each often with an associated time dimension. The data items typically form regular or irregular grid values with space/time coordinates. EarthServer makes these arrays available as data cubes.
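
    In array terms, a data cube is simply a multi-dimensional array with coordinate axes. A small numpy sketch of the access patterns mentioned above — 3D cubes, 2D maps, 1D diagrams — with made-up dimensions:

        # A minimal data cube: a 3D array indexed by (time, latitude, longitude).
        import numpy as np

        times, lats, lons = 365, 180, 360
        cube = np.random.rand(times, lats, lons)      # e.g. daily global maps

        # 2D map: one time slice
        day_100_map = cube[100, :, :]

        # 1D diagram: the time series for a single grid cell
        series = cube[:, 45, 170]

        # 3D sub-cube: a season over a regional window
        regional_season = cube[0:90, 120:150, 160:220]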

    Aside from ease-of-use, the data cubes also make it possible to integrate data from different disciplines, and scientists can combine measurement data with data generated from simulations.

    Building on existing technologies

    To handle big Earth data efficiently, EarthServer needed to extend existing technologies and standards. The standard SQL database query language, for example, is oriented towards manipulating alphanumeric records rather than large arrays of gridded data.

    To enable data cubes, the project was built upon rasdaman, a new type of database management system specialized in multi-dimensional gridded data, called rasters or arrays. Rasdaman enables the flexible, fast extraction of data from big Earth data arrays of any size.

    “Essentially, we have married the SQL database language with image processing,” says Baumann. “This is now becoming part of the ISO SQL standard.”

    In addition, the project has strongly influenced the Big Earth Data standards of the Open Geospatial Consortium and INSPIRE, the European Spatial Data Infrastructure.

    EarthServer’s researchers also developed a ‘semantic parallelization’ technology that sub-divides a single database query into multiple sub-queries. These are sent to other database servers for processing.

    This method enables EarthServer to distribute a single incoming query over more than 1,000 cloud nodes and answer queries on hundreds of terabytes of data in less than a second.
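
    Conceptually, the split-and-merge looks something like the following sketch (in-process and sequential here; in EarthServer each sub-query would be dispatched to a different server):

        # Conceptual sketch of 'semantic parallelization': one array query is
        # split into per-tile sub-queries whose partial results are merged.
        import numpy as np

        cube = np.random.rand(1000, 1000)            # stand-in for a huge coverage

        def sub_query(tile):                         # would run on a remote server
            return tile.sum(), tile.size

        tiles = [cube[i:i + 250] for i in range(0, 1000, 250)]
        partials = [sub_query(t) for t in tiles]     # would be dispatched in parallel
        total, count = map(sum, zip(*partials))
        mean = total / count                         # merged answer equals cube.mean()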

    Bigger and better: EarthServer-2

    EarthServer-1, which ran from September 2011 for 36 months and received €4 million (~ $4.4 million) in EU funding, involved a range of multinational partners. Building on the success of the first phase of the project, EarthServer successfully applied for funding from the European Commission to support its next phase, EarthServer-2.

    This kicked off in May 2015 and will focus on the ‘data cube’ paradigm, as well as on handling even higher data volumes. “The plan is to focus on the fusion of data from different domains and to be able to resolve a query on a petabyte within a second,” says Baumann. “That would mean that a user could view the data on screen and manipulate it interactively.” EarthServer-2 is now working on the next frontier, open-source 4D visualization.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    iSGTW is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, iSGTW is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read iSGTW via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 10:27 am on July 29, 2015 Permalink | Reply
    Tags: isgtw

    From isgtw: “Supercomputers listen for extraterrestrial life” 


    international science grid this week

    July 29, 2015
    Lance Farrell

    Last week, NASA’s New Horizons spacecraft thrilled us with images from its close encounter with Pluto.

    NASA New Horizons spacecraft II
    NASA/New Horizons

    New Horizons now heads into the Kuiper belt and to points spaceward. Will it find life?


    Known objects in the Kuiper belt beyond the orbit of Neptune (scale in AU; epoch as of January 2015).

    That’s the question motivating Aline Vidotto, scientific collaborator at the Observatoire de Genève in Switzerland. Her recent study harnesses supercomputers to find out how to tune our radio dials to listen in on other planets.

    Model of an interplanetary medium. Stellar winds stream from the star and interact with the magnetosphere of the hot-Jupiters. Courtesy Vidotto

    Vidotto has been studying interstellar environments for a while now, focusing on the interplanetary atmosphere surrounding so-called hot-Jupiter exoplanets since 2009. Similar in size to our Jupiter, these exoplanets orbit their star up to 20 times as closely as Earth orbits the sun, and are considered ‘hot’ due to the extra irradiation they receive.

    Every star generates a stellar wind, and the characteristics of this wind depend on the star from which it originates. The speed of its rotation, its magnetism, its gravity, or how active it is are among the factors affecting this wind. These variables also modify the effect this wind will have on planets in its path.

    Since the winds of different star systems are likely to be very different from our own, we need computers to help us boldly go where no one has ever gone before. “Observationally, we know very little about the winds and the interplanetary space of other stars,” Vidotto says. “This is why we need models and numerical simulations.”

    Vidotto’s research focuses on planets four to nine times closer to their host star than Mercury is to the sun. She takes observations of the magnetic fields around five stars from astronomers at the Canada-France-Hawaii Telescope (CFHT) in Hawaii and the Bernard-Lyot Telescope in France and feeds them into 3D simulations. For her most recent study, she divided the computational load between the Darwin cluster (part of the DiRAC network) at the University of Cambridge (UK) and the Piz Daint at the Swiss National Supercomputing Center.

    Canada-France-Hawaii Telescope
    CFHT interior
    CFHT

    Bernard Lyot telescope
    Bernard Lyot telescope interior
    Bernard Lyot

    The Darwin cluster consists of 9,728 cores, with a theoretical peak in excess of 202 teraFLOPS. Piz Daint consists of 5,272 compute nodes with 32 GB of RAM per node, and is capable of 7.8 petaFLOPS — that’s more computation in a day than a typical laptop could manage in a millennium.
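
    That comparison roughly checks out, assuming a typical laptop sustains on the order of 20 gigaFLOPS:

        # Back-of-envelope check of the laptop comparison (assumes ~20 GFLOPS
        # sustained for a typical laptop; actual figures vary widely).
        piz_daint_day = 7.8e15 * 86_400           # ~6.7e20 floating-point operations
        laptop_millennium = 20e9 * 3.15e7 * 1000  # ~6.3e20 operations in 1,000 years
        print(piz_daint_day / laptop_millennium)  # ≈ 1.07 — about the same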

    Vidotto’s analysis of the DiRAC simulations reveals a much different interplanetary medium than in our home solar system, with an overall interplanetary magnetic field 100 times larger than ours, and stellar wind pressures at the point of orbit in excess of 10,000 times ours.

    This immense pressure means these planets must have a very strong magnetic shield (magnetosphere) or their atmospheres would be blown away by the stellar wind, as we suspect happened on Mars. A planet’s atmosphere is thought to be intimately related to its habitability.

    A planet’s magnetism can also tell us something about the interior properties of the planet such as its thermal state, composition, and dynamics. But since the actual magnetic fields of these exoplanets have not been observed, Vidotto is pursuing a simple hypothesis: What if they were similar to our own Jupiter?

    A model of an exoplanet magnetosphere interacting with an interstellar wind. Knowing the characteristics of the interplanetary medium and the flux of the exoplanet radio emissions in this medium can help us tune our best telescopes to listen for distant signs of life. Courtesy Vidotto.

    If this were the case, then the magnetosphere around these planets would extend five times the radius of the planet (Earth’s magnetosphere extends 10-15 times). Where it mingles with the onrushing stellar winds, it creates the effect familiar to us as an aurora display. Indeed, Vidotto’s research reveals the auroral power in these exoplanets is more impressive than Jupiter’s. “If we were ever to live on one of these planets, the aurorae would be a fantastic show to watch!” she says.

    Knowing this auroral power enables astronomers to realistically characterize the interplanetary medium around the exoplanets, as well as the auroral ovals through which cosmic and stellar particles can penetrate the exoplanet atmosphere. This helps astronomers correctly estimate the flux of exoplanet radio emissions and how sensitive equipment on Earth would have to be to detect them. In short, knowing how to listen is a big step toward hearing.

    Radio emissions from these hot-Jupiters would present a challenge to our current class of radio telescopes, such as the Low Frequency Array for radio astronomy (LOFAR). However, “there is one radio array that is currently being designed where these radio fluxes could be detected — the Square Kilometre Array (SKA),” Vidotto says. The SKA is set for completion in 2023, and in the DiRAC clusters Vidotto finds some of the few supercomputers in the world capable of testing correlation software solutions.

    Lofar radio telescope

    While there’s much more work ahead of us, Vidotto’s research presents a significant advance in radio astronomy and is helping refine our ability to detect signals from beyond. With her 3D exoplanet simulations, the DiRAC computation power, and the ears of SKA, it may not be long before we’re able to hear radio signals from distant worlds.

    Stay tuned!

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    iSGTW is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, iSGTW is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read iSGTW via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 1:47 pm on July 21, 2015 Permalink | Reply
    Tags: isgtw

    From isgtw: “Simulations reveal a less crowded universe” 


    international science grid this week

    July 15, 2015
    Jan Zverina

    Blue Waters supercomputer

    Simulations conducted on the Blue Waters supercomputer at the National Center for Supercomputing Applications (NCSA) suggest there may be far fewer galaxies in the universe than expected.

    The study, published this week in Astrophysical Journal Letters, shows the first results from the Renaissance Simulations, a suite of extremely high-resolution adaptive mesh refinement calculations of high redshift galaxy formation. Taking advantage of data transferred to SDSC Cloud at the San Diego Supercomputer Center (SDSC), these simulations show hundreds of well-resolved galaxies.

    “Most critically, we show that the ultraviolet luminosity function of our simulated galaxies is consistent with observations of [high-]redshift galaxy populations at the bright end of the luminosity function, but at lower luminosities is essentially flat rather than rising steeply,” says principal investigator and lead author Brian W. O’Shea, an associate professor at Michigan State University.

    This discovery allows researchers to make several novel and verifiable predictions ahead of the October 2018 launch of the James Webb Space Telescope, a new space observatory succeeding the Hubble Space Telescope.

    NASA Webb Telescope
    NASA/Webb

    NASA Hubble Telescope
    NASA/ESA Hubble

    “The Hubble Space Telescope can only see what we might call the tip of the iceberg when it comes to taking inventory of the most distant galaxies,” said SDSC director Michael Norman. “A key question is how many galaxies are too faint to see. By analyzing these new, ultra-detailed simulations, we find that there are 10 to 100 times fewer galaxies than a simple extrapolation would predict.”

    The simulations ran on the National Science Foundation (NSF) funded Blue Waters supercomputer, one of the largest and most powerful academic supercomputers in the world. “These simulations are physically complex and very large — we simulate thousands of galaxies at a time, including their interactions through gravity and radiation, and that poses a tremendous computational challenge,” says O’Shea.

    Blue Waters, based at the University of Illinois, is used to tackle a wide range of challenging problems, from predicting the behavior of complex biological systems to simulating the evolution of the cosmos. The supercomputer has more than 1.5 petabytes of memory — enough to store 300 million images from a digital camera — and can achieve a peak performance level of more than 13 quadrillion calculations per second.

    “The flattening at lower luminosities is a key finding and significant to researchers’ understanding of the reionization of the universe, when the gas in the universe changed from being mostly neutral to mostly ionized,” says John H. Wise, Dunn Family assistant professor of physics at the Georgia Institute of Technology.

    Matter overdensity (top row) and ionized fraction (bottom row) for the regions simulated in the Renaissance Simulations. The red triangles represent locations of galaxies detectable with the Hubble Space Telescope. The James Webb Space Telescope will detect many more distant galaxies, shown by the blue squares and green circles. These first galaxies reionized the universe, shown in the image as blue bubbles around the galaxies. Courtesy Brian W. O’Shea (Michigan State University), John H. Wise (Georgia Tech); Michael Norman and Hao Xu (UC San Diego).

    The term ‘reionized’ is used because the universe was ionized immediately after the fiery big bang. During that time, ordinary matter consisted mostly of hydrogen atoms with positively charged protons stripped of their negatively charged electrons. Eventually, the universe cooled enough for electrons and protons to combine and form neutral hydrogen. These neutral atoms didn’t give off any optical or UV light — and without it, conventional telescopes are of no use in finding traces of how the cosmos evolved during these Dark Ages. The light returned when reionization began.

    Earlier simulations, reported in a previous paper, concluded that the universe was 20 percent ionized about 300 million years after the Big Bang, 50 percent ionized at 550 million years, and fully ionized at 860 million years after its creation.

    “Our work suggests that there are far fewer faint galaxies than one could previously infer,” says O’Shea. “Observations of high redshift galaxies provide poor constraints on the low-luminosity end of the galaxy luminosity function, and thus make it challenging to accurately account for the full budget of ionizing photons during that epoch.”

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    iSGTW is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, iSGTW is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read iSGTW via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 3:39 pm on June 24, 2015 Permalink | Reply
    Tags: isgtw

    From isgtw: “Analyzing a galaxy far, far away for clues to our origins” 


    international science grid this week

    June 24, 2015
    Makeda Easter

    The Earth’s location in the universe. Courtesy Andrew Z. Colvin. CC BY-SA 3.0 or GFDL, via Wikimedia Commons.

    The Andromeda Galaxy (M31) lies more than two million light years away from Earth.

    Andromeda. Author Adam Evans

    In 2011, an international group of astronomers began a four-year program to map and study the millions of stars comprising the galaxy. With the help of the Hubble telescope, Extreme Science and Engineering Discovery Environment (XSEDE), and the Texas Advanced Computing Center (TACC), they not only produced the best Andromeda pictures ever seen, but also put the question of universal star formation to rest.

    To map M31, the Panchromatic Hubble Andromeda Treasury (PHAT) looked to its namesake Hubble Space Telescope (HST). Because the HST orbits the Earth, it can provide information to astronomers that ground-based telescopes cannot. But more than just stunning pictures, each star revealed by the HST holds clues to the history of the galaxy’s formation — and thus our own. For instance, by analyzing a star’s color, researchers can infer its age. From its luminosity, scientists can measure its distance from Earth.
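
    The luminosity-to-distance step uses the standard distance-modulus relation; with hypothetical magnitudes chosen to land near M31’s known distance (this is textbook astronomy, not PHAT’s actual pipeline):

        # Distance modulus: m - M = 5*log10(d) - 5, with d in parsecs.
        # M (intrinsic brightness) and m (apparent brightness) are hypothetical.
        M = -2.5                                  # absolute magnitude, assumed known
        m = 22.0                                  # apparent magnitude, as observed
        d_parsecs = 10 ** ((m - M + 5) / 5)
        print(f"distance ≈ {d_parsecs / 1e6:.2f} million parsecs")   # ≈ 0.79 Mpc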

    PHAT used this information to develop star formation histories for M31, which meant decoding the number of stars of each type (age, mass, chemistry) and how much dust is obscuring their light. Modeling the star formation history of 100 million stars requires powerful computation, so the team turned to the US National Science Foundation (NSF), XSEDE, and TACC.

    “We had to measure over 100 million objects with 100 different parameters for every single one of them,” says Julianne Dalcanton, principal investigator on the PHAT project. “Having XSEDE resources has been absolutely fantastic because we were able to easily run the same process over and over again in parallel.”

    XSEDE enables researchers to interactively share computing resources, data, and expertise. Through XSEDE, the team gained access to the Stampede supercomputer at TACC, which was essential to determining the ages of every star mapped, patterns of star formation, and how the galaxy evolved over time.

    Read more about the PHAT team’s quest to understand infinity here.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    iSGTW is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, iSGTW is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read iSGTW via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 7:33 pm on April 15, 2015 Permalink | Reply
    Tags: isgtw

    From isgtw: “Supercomputing enables researchers in Norway to tackle cancer” 


    international science grid this week

    April 15, 2015
    Yngve Vogt

    Cancer researchers are using the Abel supercomputer at the University of Oslo in Norway to detect which versions of genes are only found in cancer cells. Every form of cancer, even every tumour, has its own distinct variants.

    “This charting may help tailor the treatment to each patient,” says Rolf Skotheim, who is affiliated with the Centre for Cancer Biomedicine and the research group for biomedical informatics at the University of Oslo, as well as the Department of Molecular Oncology at Oslo University Hospital.

    “Charting the versions of the genes that are only found in cancer cells may help tailor the treatment offered to each patient,” says Skotheim. Image courtesy Yngve Vogt.

    His research group is working to identify the genes that cause bowel and prostate cancer, which are both common diseases. There are 4,000 new cases of bowel cancer in Norway every year. Only six out of ten patients survive the first five years. Prostate cancer affects 5,000 Norwegians every year. Nine out of ten survive.

    Comparisons between healthy and diseased cells

    To identify the genes that lead to cancer, Skotheim and his research group are comparing genetic material in tumours with genetic material in healthy cells. To understand this process, a brief introduction to our genetic material is needed:

    Our genetic material consists of just over 20,000 genes. Each gene consists of thousands of base pairs, represented by a specific sequence of the four building blocks, adenine, thymine, guanine, and cytosine, popularly abbreviated to A, T, G, and C. The sequence of these building blocks is the very recipe for the gene. Our whole DNA consists of some six billion base pairs.

    The DNA strand carries the molecular instructions for activity in the cells. In other words, DNA contains the recipe for proteins, which perform the tasks in the cells. DNA, nevertheless, does not actually produce proteins. First, a copy of DNA is made: this transcript is called RNA and it is this molecule that is read when proteins are produced.

    RNA corresponds to only a small part of the DNA — its active constituents. Most of the DNA is inactive: only 1–2% of the DNA strand is active.

    In cancer cells, something goes wrong with the RNA transcription. There is either too much RNA, which means that far too many proteins of a specific type are formed, or the composition of base pairs in the RNA is wrong. The latter is precisely the area being studied by the University of Oslo researchers.

    Wrong combinations

    All genes can be divided into active and inactive parts. A single gene may consist of tens of active stretches of nucleotides (exons). “RNA is a copy of a specific combination of the exons from a specific gene in DNA,” explains Skotheim. There are many possible combinations, and it is precisely this search for all of the possible combinations that is new in cancer research.

    Different cells can combine the nucleotides in a single gene in different ways. A cancer cell can create a combination that should not exist in healthy cells. And as if that didn’t make things complicated enough, sometimes RNA can be made up of stretches of nucleotides from different genes in DNA. These special, complex genes are called fusion genes.

    “We need powerful computers to crunch the enormous amounts of raw data,” says Skotheim. “Even if you spent your whole life on this task, you would not be able to find the location of a single nucleotide.”

    In other words, researchers must look for errors both inside genes and between the different genes. “Fusion genes are usually found in cancer cells, but some of them are also found in healthy cells,” says Skotheim. In patients with prostate cancer, researchers have found some fusion genes that are only created in diseased cells. These fusion genes may then be used as a starting-point in the detection of and fight against cancer.

    The researchers have also found fusion genes in bowel cells, but they were not cancer-specific. “For some reason, these fusion genes can also be found in healthy cells,” adds Skotheim. “This discovery was a let-down.”

    Improving treatment

    There are different RNA errors in the various cancer diseases. The researchers must therefore analyze the RNA errors of each disease.

    Among other things, the researchers are comparing RNA in diseased and healthy tissue from 550 patients with prostate cancer. The patients that make up the study do not receive any direct benefits from the results themselves. However, the research is important in order to be able to help future patients.

    “We want to find the typical defects associated with prostate cancer,” says Skotheim. “This will make it easier to understand what goes wrong with healthy cells, and to understand the mechanisms that develop cancer. Once we have found the cancer-specific molecules, they can be used as biomarkers.” In some cases, the biomarkers can be used to find cancer, determine the level of severity of the cancer and the risk of spreading, and whether the patient should be given a more aggressive treatment.

    Even though the researchers find deviations in the RNA, there is no guarantee that there is appropriate, targeted medicine available. “The point of our research is to figure out more of the big picture,” says Skotheim. “If we identify a fusion gene that is only found in cancer cells, the discovery will be so important in itself that other research groups around the world will want to begin working on this straight away. If a cure is found that counteracts the fusion genes, this may have enormous consequences for the cancer treatment.”

    Laborious work

    Recreating RNA is laborious work. The set of RNA molecules consists of about 100 million bases, divided into a few thousand bases from each gene.

    The laboratory machine reads millions of small nucleotide fragments, each only 100 base pairs long. In order for the researchers to be able to place them in the right location, they must run large statistical analyses. The RNA analysis of a single patient can take a few days.

    All of the nucleotides must be matched with the DNA strand. Unfortunately the researchers do not have the DNA strands of each patient. In order to learn where the base pairs come from in the DNA strand, they must therefore use the reference genome of the human species. “This is not ideal, because there are individual differences,” explains Skotheim. The future potentially lies in fully sequencing the DNA of each patient when conducting medical experiments.

    Supercomputing

    There is no way this research could be carried out using pen and paper. “We need powerful computers to crunch the enormous amounts of raw data. Even if you spent your whole life on this task, you would not be able to find the location of a single nucleotide. This is a matter of millions of nucleotides that must be mapped correctly in the system of coordinates of the genetic material. Once we have managed to find the RNA versions that are only found in cancer cells, we will have made significant progress. However, the work to get that far requires advanced statistical analyses and supercomputing,” says Skotheim.

    The analyses are so demanding that the researchers must use the University of Oslo’s Abel supercomputer, which has a theoretical peak performance of over 250 teraFLOPS. “With the ability to run heavy analyses on such large amounts of data, we have an enormous advantage not available to other cancer researchers,” explains Skotheim. “Many medical researchers would definitely benefit from this possibility. This is why they should spend more time with biostatisticians and informaticians. RNA samples are taken from the patients only once. The types of analyses that can be run are only limited by the imagination.”

    “We need to be smart in order to analyze the raw data.” He continues: “There are enormous amounts of data here that can be interpreted in many different ways. We just got started. There is lots of useful information that we have not seen yet. Asking the right questions is the key. Most cancer researchers are not used to working with enormous amounts of data, and how to best analyze vast data sets. Once researchers have found a possible answer, they must determine whether the answer is chance or if it is a real finding. The solution is to find out whether they get the same answers from independent data sets from other parts of the world.”

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    iSGTW is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, iSGTW is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read iSGTW via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 4:07 am on April 2, 2015 Permalink | Reply
    Tags: isgtw

    From isgtw: “Supporting research with grid computing and more” 


    international science grid this week

    April 1, 2015
    Andrew Purcell

    “In order for researchers to be able to collaborate and share data with one another efficiently, the underlying IT infrastructures need to be in place,” says Gomes. “With the amount of data produced by research collaborations growing rapidly, this support is of paramount importance.”

    Jorge Gomes is the principal investigator of the computing group at the Portuguese Laboratory of Instrumentation and Experimental Particle Physics (LIP) in Lisbon and a member of the European Grid Infrastructure (EGI) executive board. As the technical coordinator of the Portuguese national grid infrastructure (INCD), he is also responsible for Portugal’s contribution to the Worldwide LHC Computing Grid (WLCG).

    iSGTW speaks to Gomes about the importance of supporting researchers through a variety of IT infrastructures ahead of the EGI Conference in Lisbon from 18 to 22 May 2015.

    What’s the main focus of your work at LIP?

    I’ve been doing research in the field of grid computing since 2001. LIP participates in both the ATLAS and CMS experiments on the Large Hadron Collider (LHC) at CERN, which is why we’ve been working on research and development projects for the grid computing infrastructure that supports these experiments.

    CERN ATLAS New
    ATLAS

    CERN CMS New II
    CMS

    CERN LHC Map
    CERN LHC Grand Tunnel
    CERN LHC particles
    LHC

    CERN Control Center
    CERN

    Here in Portugal, we now have a national ‘road map’ for research infrastructures, which includes IT infrastructures. Our work in the context of the Portuguese national grid infrastructure now involves supporting a wide range of research communities, not just high-energy physics. Today, we support research in fields such as astrophysics, life sciences, chemistry, civil engineering, and environmental modeling, among others. For us, it’s very important to support as wide a range of communities as possible.

    So, when you talk about supporting researchers by providing ‘IT infrastructures’, it’s about much more than grid computing, right?

    Yes, today we’re engaged in cloud computing, high-performance computing, and a wide range of data-related services. This larger portfolio of services has evolved to match the needs of the Portuguese research community.

    Cloud computing metaphor: For a user, the network elements representing the provider-rendered services are invisible, as if obscured by a cloud.

    Why is it important to provide IT infrastructures to support research?

    Research is no longer done by isolated individuals; instead, it is increasingly common for it to be carried out by large collaborations, often on an international or even an intercontinental basis. So, in order for researchers to be able to collaborate and share data with one another efficiently, the underlying IT infrastructures need to be in place. With the amount of data produced by research collaborations growing rapidly, this support is of paramount importance.

    Here in Portugal, we have a lot of communities that don’t yet have access to these services, but they really do need them. Researchers don’t want to have to set up their own IT infrastructures, they want to concentrate on doing research in their own specialist field. This is why it’s important for IT specialists to provide them with these underlying services.

    Also, particularly in relatively small countries like Portugal, it’s important that resources scattered across universities and other research institutions can be integrated, in order to extract the maximum possible value.

    When it comes to encouraging researchers to make use of the IT infrastructures you provide, what are the main challenges you face?

    Trust, in particular, is a very important aspect. For researchers to build scientific software on top of IT infrastructures, they need to have confidence that the infrastructures will still be there several years down the line. This is also connected to challenges like ‘vendor lock-in’ and standards in relation to cloud computing infrastructure. We need to have common solutions so that if a particular IT infrastructure provider — either public or private — fails, users can move to other available resources.

    Another challenge is related to the structure of some research communities. The large, complex experimental apparatuses involved in high-energy physics mean that these research communities are very structured and there is often a high degree of collaboration between research groups. In other domains, however, where it is common to have much smaller research groups, this is often not the case, which means it can be much more difficult to develop standard IT solutions and to achieve agreement on a framework for sharing IT resources.

    Why do you believe it is important to provide grid computing infrastructure at a European scale, through EGI, rather than just at a national scale?

    More and more research groups are working internationally, so it’s no longer enough to provide IT infrastructures at a national level. That’s why we also collaborate with our colleagues in Spain to provide IberGrid.

    EGI is of great strategic importance to research in Europe. We’re now exploring a range of exciting opportunities through the European Strategy Forum on Research Infrastructures (ESFRI) to support large flagship European research projects.

    The theme for the upcoming EGI conference is ‘engaging the research community towards an open science commons’. What’s the role of EGI in helping to establish this commons?

    In Europe we still have a fragmented ecosystem of services provided by many entities with interoperability issues. A better level of integration and sharing is needed to take advantage of the growing amounts of scientific data available. EGI proposes an integrated vision that encompasses data, instruments, ICT services, and knowledge to reduce the barriers to scientific collaboration and result sharing.

    EGI is in a strategic position to integrate services at the European level and to enable access to open data, thus promoting knowledge sharing. By gathering key players, next month’s conference will be an excellent opportunity to further develop this vision.

    Finally, what are you most looking forward to about the conference?

    The conference is a great opportunity for users, developers, and resource providers to meet and exchange experiences and ideas at all levels. It’s also an excellent opportunity for researchers to discuss their requirements and to shape the development of future IT infrastructures. I look forward to seeing a diverse range of people at the event!

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    iSGTW is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, iSGTW is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read iSGTW via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 11:43 am on January 24, 2015 Permalink | Reply
    Tags: isgtw

    From isgtw: “Unlocking the secrets of vertebrate evolution” 


    international science grid this week

    January 21, 2015
    Lance Farrell

    Conventional wisdom holds that snakes evolved a particular form and skeleton by losing regions in their spinal column over time. These losses were previously explained by a disruption in Hox genes responsible for patterning regions of the vertebrae.

    Paleobiologists P. David Polly, professor of geological sciences at Indiana University, US, and Jason Head, assistant professor of earth and atmospheric sciences at the University of Nebraska-Lincoln, US, overturned that assumption. Recently published in Nature, their research instead reveals that snake skeletons are just as regionalized as those of limbed vertebrates.

    Using Quarry [being taken out of service Jan 30, 2015 and replaced by Karst], a supercomputer at Indiana University, Polly and Head arrived at a compelling new explanation for why snake skeletons are so different: Vertebrates like mammals, birds, and crocodiles evolved additional skeletal regions independently from ancestors like snakes and lizards.

    Karst
    Karst

    “Our study finds that snakes did not require extensive modification to their regulatory gene systems to evolve their elongate bodies,” Head notes.

    Despite having no limbs and more vertebrae, snake skeletons are just as regionalized as lizards’ skeletons.

    P. David Polly. Photo courtesy Indiana University.

    Polly and Head had to overcome challenges in collection and analysis to arrive at this insight. “If you are sequencing a genome all you really need is a little scrap of tissue, and that’s relatively easy to get,” Polly says. “But if you want to do something like we have done, you not only need an entire skeleton, but also one for a whole lot of species.”

    To arrive at their conclusion, Head and Polly sampled 56 skeletons from collections worldwide. They began by photographing and digitizing the bones, then chose specific landmarks on each spinal segment. Using the digital coordinates of each vertebra, they then applied a technique called geometric morphometrics, a multivariate analysis that uses x and y coordinates to analyze an object’s shape.

    Armed with shape information, the scientists then fit a series of regressions and tracked each vertebra’s gradient over the entire spine. This led to a secondary challenge — with 36,000 landmarks applied to 3,000 digitized vertebrae, the regression analyses required to peer into the snake’s past called for a new analytical tool.

    “The computations required iteratively fitting four or more segmented regression models, each with 10 to 83 parameters, for every regional permutation of up to 230 vertebrae per skeleton. The amount of computational power required is well beyond any desktop system,” Head observes.
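
    The paper’s code isn’t reproduced here, but the flavor of the computation — fitting segmented regressions for every candidate regional boundary and keeping the best-scoring layout — can be sketched with a toy one-breakpoint version on hypothetical data:

        # Toy sketch: find the best single breakpoint in a shape gradient along
        # the spine by fitting two linear regressions and comparing total error.
        # The real analysis fit four or more segments with up to 83 parameters.
        import numpy as np

        position = np.arange(100)                        # vertebra index
        shape = np.where(position < 40, 0.02 * position,
                         0.8 + 0.005 * (position - 40))  # hypothetical gradient
        shape = shape + np.random.default_rng(1).normal(0, 0.02, 100)

        def sse(x, y):
            coeffs = np.polyfit(x, y, 1)                 # fit a straight line
            return float(np.sum((np.polyval(coeffs, x) - y) ** 2))

        best = min(range(2, 98),
                   key=lambda b: sse(position[:b], shape[:b]) +
                                 sse(position[b:], shape[b:]))
        print("estimated regional boundary at vertebra", best)   # ~40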

    Researchers like Polly and Head increasingly find quantitative analyses of data sets this size require the computational resources to match. With 7.2 million different models making up the data for their study, nothing less than a supercomputer would do.

    Jason Head with ball python. Photo courtesy Craig Chandler, University of Nebraska-Lincoln.

    “Our supercomputing environments serve a broad base of users and purposes,” says David Hancock, manager of IU’s high performance systems. “We often support the research done in the hard sciences and math such as Polly’s, but we also see analytics done for business faculty, marketing and modeling for interior design projects, and lighting simulations for theater productions.”

    Analyses of the scale Polly and Head needed would have been unapproachable even a decade ago, and without US National Science Foundation support remain beyond the reach of most institutions. “A lot of the big jobs ran on Quarry,” says Polly. “To run one of these exhaustive models on a single snake took about three and a half days. Ten years ago we could barely have scratched the surface.”

    As high-performance computing resources reshape the future, scientists like Polly and Head have greater abilities to look into the past and unlock the secrets of evolution.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    iSGTW is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, iSGTW is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read iSGTW via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 5:00 pm on January 21, 2015 Permalink | Reply
    Tags: , , isgtw, Simulation Astronomy,   

    From isgtw: “Exploring the universe with supercomputing” 


    international science grid this week

    January 21, 2015
    Andrew Purcell

    The Center for Computational Astrophysics (CfCA) in Japan recently upgraded its ATERUI supercomputer, doubling the machine’s theoretical peak performance to 1.058 petaFLOPS. Eiichiro Kokubo, director of the center, tells iSGTW how supercomputers are changing the way research is conducted in astronomy.

    What’s your research background?

    I investigate the origin of planetary systems. I use many-body simulations to study how planets form and I also previously worked on the development of the Gravity Pipe, or ‘GRAPE’ supercomputer.

    Why is it important to use supercomputers in this work?

    In the standard scenario of planet formation, small solid bodies — known as ‘planetesimals’ — interact with one another and this causes their orbits around the sun to evolve. Collisions between these building blocks lead to the formation of rocky planets like the Earth. To understand this process, you really need to do very-large-scale many-body simulations. This is where high-performance computing comes in: supercomputers act as telescopes for phenomena we wouldn’t otherwise be able to see.
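
    To give a feel for what a many-body simulation involves, here is a minimal direct-summation sketch with a leapfrog integrator. It is an illustrative toy in arbitrary units, not the GRAPE or CfCA production code; the particle count, softening length, and time step are assumptions chosen only to make the script run. The key point is that every particle feels every other particle, so the work per step grows roughly with the square of the number of bodies.

```python
# Minimal direct-summation N-body sketch (illustrative toy, arbitrary units).
import numpy as np

G = 1.0           # gravitational constant in code units (assumption)
SOFTENING = 1e-3  # avoids singular forces during close encounters

def accelerations(pos, mass):
    """Pairwise gravitational accelerations, O(N^2) in the particle count."""
    diff = pos[None, :, :] - pos[:, None, :]               # r_j - r_i
    dist3 = (np.sum(diff**2, axis=-1) + SOFTENING**2) ** 1.5
    np.fill_diagonal(dist3, np.inf)                        # no self-force
    return G * np.sum(mass[None, :, None] * diff / dist3[:, :, None], axis=1)

def leapfrog(pos, vel, mass, dt, n_steps):
    """Kick-drift-kick integration of the equations of motion."""
    acc = accelerations(pos, mass)
    for _ in range(n_steps):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc
    return pos, vel

# Hypothetical cloud of 500 planetesimals.
rng = np.random.default_rng(42)
n = 500
pos = rng.normal(scale=1.0, size=(n, 3))
vel = rng.normal(scale=0.1, size=(n, 3))
mass = np.full(n, 1e-6)
pos, vel = leapfrog(pos, vel, mass, dt=1e-3, n_steps=10)
print(pos.shape, vel.shape)                                # (500, 3) (500, 3)
```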

    The scales of mass, energy, and time are generally huge in astronomy. However, as supercomputers have become ever more powerful, we’ve become able to program the relevant physical processes — motion, fluid dynamics, radiative transfer, etc. — and do meaningful simulation of astronomical phenomena. We can even conduct experiments by changing parameters within our simulations. Simulation is numerical exploration of the universe!

    How has supercomputing changed the way research is carried out?

    ‘Simulation astronomy’ has now become a third major methodological approach within the field, alongside observational and theoretical astronomy. Telescopes rely on electromagnetic radiation, but there are still many things that we cannot see even with today’s largest telescopes. Supercomputers enable us to use complex physical calculations to visualize phenomena that would otherwise remain hidden to us. Their use also gives us the flexibility to simulate phenomena across a vast range of spatial and temporal scales.

    Simulation can be used to simply test hypotheses, but it can also be used to explore new worlds that are beyond our current imagination. Sometimes you get results from a simulation that you really didn’t expect — this is often the first step on the road to making new discoveries and developing new astronomical theories.

    2
    ATERUI has made the leap to become a petaFLOPS-scale supercomputer. Image courtesy NAOJ/Makoto Shizugami (VERA/CfCA, NAOJ).

    In astronomy, there are three main kinds of large-scale simulation: many-body, fluid dynamics, and radiative transfer. These problems can all be parallelized effectively, meaning that massively parallel computers — like the Cray XC30 system we’ve installed — are ideally suited to performing these kinds of simulations.
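
    The reason such problems scale well on massively parallel machines is that the expensive inner loop decomposes naturally: each processor can compute the forces on its own block of particles against all the others. Below is a hedged, single-node sketch of that idea using Python’s standard process pool; it is only meant to show the decomposition, since real codes use MPI, accelerators, or purpose-built hardware, and all names here are hypothetical.

```python
# Sketch of block-wise parallel force evaluation (illustrative only).
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def block_acceleration(args):
    """Accelerations (G = 1) for particles [start:stop] due to every particle."""
    start, stop, pos, mass = args
    diff = pos[None, :, :] - pos[start:stop, None, :]        # r_j - r_i for the block
    dist3 = (np.sum(diff**2, axis=-1) + 1e-6) ** 1.5         # softened distances cubed
    dist3[np.arange(stop - start), np.arange(start, stop)] = np.inf  # no self-force
    return np.sum(mass[None, :, None] * diff / dist3[:, :, None], axis=1)

def parallel_accelerations(pos, mass, n_workers=4):
    """Split the particle list into blocks and evaluate the blocks in parallel."""
    bounds = np.linspace(0, len(pos), n_workers + 1, dtype=int)
    tasks = [(int(lo), int(hi), pos, mass) for lo, hi in zip(bounds[:-1], bounds[1:])]
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        return np.concatenate(list(pool.map(block_acceleration, tasks)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pos, mass = rng.normal(size=(1000, 3)), np.ones(1000)
    acc = parallel_accelerations(pos, mass)
    print(acc.shape)                                         # (1000, 3)
```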

    3
    “Supercomputers act as telescopes for phenomena we wouldn’t otherwise be able to see,” says Kokubo.

    What research problems will ATERUI enable you to tackle?

    There are over 100 users in our community and they are tackling a wide variety of problems. One project, for example, is looking at supernovae: having very high-resolution 3D simulations of these explosions is vital to improving our understanding. Another project is looking at the distribution of galaxies throughout the universe, and there is a whole range of other things being studied using ATERUI too.

    Since we installed ATERUI, it has been used at over 90% of its capacity in terms of the number of CPUs running at any given time. Basically, it’s almost full every single day!

    Don’t forget, we also have the K computer here in Japan. The National Astronomical Observatory of Japan, of which the CfCA is part, is actually one of the consortium members of the K supercomputer project. As such, we have plenty of researchers using that machine as well. High-end supercomputers like K are absolutely great, but it is also important to have mid-range supercomputers dedicated to specific research fields.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    iSGTW is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, iSGTW is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read iSGTW via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     