Tagged: Cloud computing

  • richardmitnick 9:29 am on June 12, 2021 Permalink | Reply
    Tags: "A Tectonic Shift in Analytics and Computing Is Coming", "Destination Earth", "Speech Understanding Research", "tensor processing units", , Cloud computing, , Computing clusters, , GANs: generative adversarial networks, , , , , Seafloor bathymetry, SML: supervised machine learning, UML: Unsupervised Machine Learning   

    From Eos: “A Tectonic Shift in Analytics and Computing Is Coming” 

    From AGU

    From Eos

    4 June 2021
    Gabriele Morra
    morra@louisiana.edu
    Ebru Bozdag
    Matt Knepley
    Ludovic Räss
    Velimir Vesselinov

    Artificial intelligence combined with high-performance computing could trigger a fundamental change in how geoscientists extract knowledge from large volumes of data.

    A Cartesian representation of a global adjoint tomography model, which uses high-performance computing capabilities to simulate seismic wave propagation, is shown here. Blue and red colorations represent regions of high and low seismic velocities, respectively. Credit: David Pugmire, DOE’s Oak Ridge National Laboratory (US).

    More than 50 years ago, a fundamental scientific revolution occurred, sparked by the concurrent emergence of a huge amount of new data on seafloor bathymetry and profound intellectual insights from researchers rethinking conventional wisdom. Data and insight combined to produce the paradigm of plate tectonics. Similarly, in the coming decade, a new revolution in data analytics may rapidly overhaul how we derive knowledge from data in the geosciences. Two interrelated elements will be central in this process: artificial intelligence (AI, including machine learning methods as a subset) and high-performance computing (HPC).

    Already today, geoscientists must understand modern tools of data analytics and the hardware on which they work. Now AI and HPC, along with cloud computing and interactive programming languages, are becoming essential tools for geoscientists. Here we discuss the current state of AI and HPC in Earth science and anticipate future trends that will shape applications of these developing technologies in the field. We also propose that it is time to rethink graduate and professional education to account for and capitalize on these quickly emerging tools.

    Work in Progress

    Great strides in AI capabilities, including speech and facial recognition, have been made over the past decade, but the origins of these capabilities date back much further. In 1971, the Defense Advanced Research Projects Agency (US) substantially funded a project called Speech Understanding Research [Journal of the Acoustical Society of America], and it was generally believed at the time that artificial speech recognition was just around the corner. We know now that this was not the case, as today’s speech and writing recognition capabilities emerged only as a result of both vastly increased computing power and conceptual breakthroughs such as the use of multilayered neural networks, which mimic the biological structure of the brain.

    Recently, AI has gained the ability to create images of artificial faces that humans cannot distinguish from real ones by using generative adversarial networks (GANs). These networks combine two neural networks, one that produces a model and a second one that tries to discriminate the generated model from the real one. Scientists have now started to use GANs to generate artificial geoscientific data sets.
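
    To make the generator-discriminator pairing concrete, here is a minimal sketch in PyTorch (an assumption; the works described above may use different frameworks and architectures). The noisy sine "waveforms" are synthetic stand-ins for real geoscientific signals.

```python
# Minimal GAN sketch: a generator learns to produce signals that a discriminator
# cannot tell apart from "real" ones. Purely illustrative; not the cited models.
import torch
import torch.nn as nn

latent_dim, signal_len = 16, 64

# Generator: maps random noise to an artificial signal.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                  nn.Linear(128, signal_len), nn.Tanh())
# Discriminator: scores how "real" a signal looks.
D = nn.Sequential(nn.Linear(signal_len, 128), nn.ReLU(),
                  nn.Linear(128, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def real_batch(n=32):
    # Stand-in for real data: noisy sine "waveforms" (purely illustrative).
    t = torch.linspace(0, 6.28, signal_len)
    return torch.sin(t) * torch.ones(n, 1) + 0.1 * torch.randn(n, signal_len)

for step in range(1000):
    real = real_batch()
    fake = G(torch.randn(real.size(0), latent_dim))

    # Discriminator tries to separate real from generated signals.
    d_loss = (bce(D(real), torch.ones(real.size(0), 1)) +
              bce(D(fake.detach()), torch.zeros(real.size(0), 1)))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Generator tries to fool the discriminator.
    g_loss = bce(D(fake), torch.ones(real.size(0), 1))
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
```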

    These and other advances are striking, yet AI and many other artificial computing tools are still in their infancy. We cannot predict what AI will be able to do 20–30 years from now, but a survey of existing AI applications recently showed that computing power is the key when targeting practical applications today. The fact that AI is still in its early stages has important implications for HPC in the geosciences. Currently, geoscientific HPC studies have been dominated by large-scale time-dependent numerical simulations that use physical observations to generate models [Morra et al., 2021a*]. In the future, however, we may work in the other direction—Earth, ocean, and atmospheric simulations may feed large AI systems that in turn produce artificial data sets that allow geoscientific investigations, such as Destination Earth, for which collected data are insufficient.

    *all citations are included in References below.

    Data-Centric Geosciences

    Development of AI capabilities is well underway in certain geoscience disciplines. For a decade now [Ma et al., 2019], remote sensing operations have been using convolutional neural networks (CNNs), a kind of neural network that adaptively learns which features to look at in a data set. In seismology (Figure 1), pattern recognition is the most common application of machine learning (ML), and recently, CNNs have been trained to find patterns in seismic data [Kong et al., 2019], leading to discoveries such as previously unrecognized seismic events [Bergen et al., 2019].

    Fig. 1. Example of a workflow used to produce an interactive “visulation” system, in which graphic visualization and computer simulation occur simultaneously, for analysis of seismic data. Credit: Ben Kadlec.

    New AI applications and technologies are also emerging; these involve, for example, the self-ordering of seismic waveforms to detect structural anomalies in the deep mantle [Kim et al., 2020]. Recently, deep generative models, which are based on neural networks, have shown impressive capabilities in modeling complex natural signals, with the most promising applications in autoencoders and GANs (e.g., for generating images from data).

    CNNs are a form of supervised machine learning (SML), meaning that before they are applied for their intended use, they are first trained to find prespecified patterns in labeled data sets and to check their accuracy against an answer key. Training a neural network using SML requires large, well-labeled data sets as well as massive computing power. Massive computing power, in turn, requires massive amounts of electricity, such that the energy demand of modern AI models is doubling every 3.4 months and causing a large and growing carbon footprint.
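
    As an illustration of this supervised workflow (labeled examples, training, then a check against a held-out answer key), here is a minimal sketch of a 1-D convolutional classifier in PyTorch; the spike-versus-noise windows are synthetic stand-ins for a labeled seismic catalog, not data from the studies cited above.

```python
# Minimal supervised-learning sketch: train a small 1-D CNN on labeled windows,
# then check accuracy on held-out data. Synthetic data, purely illustrative.
import torch
import torch.nn as nn

def make_batch(n=64, length=128):
    """Synthetic labeled data: label 1 = window containing a spike, 0 = pure noise."""
    x = 0.1 * torch.randn(n, 1, length)
    y = (torch.rand(n) < 0.5).float()
    for i in range(n):
        if y[i] == 1:
            pos = torch.randint(10, length - 10, (1,)).item()
            x[i, 0, pos:pos + 5] += 1.0          # inject a simple "arrival"
    return x, y

model = nn.Sequential(                            # small 1-D CNN classifier
    nn.Conv1d(1, 8, kernel_size=7, padding=3), nn.ReLU(),
    nn.MaxPool1d(4),
    nn.Conv1d(8, 16, kernel_size=7, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(16, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(500):                           # training: fit to labeled windows
    x, y = make_batch()
    loss = loss_fn(model(x).squeeze(1), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Check accuracy against a held-out "answer key".
x_test, y_test = make_batch(256)
pred = (model(x_test).squeeze(1) > 0).float()
print("accuracy:", (pred == y_test).float().mean().item())
```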

    In the future, the trend in geoscientific applications of AI might shift from using bigger CNNs to using more scalable algorithms that can improve performance with less training data and fewer computing resources. Alternative strategies will likely involve less energy-intensive neural networks, such as spiking neural networks, which reduce data inputs by analyzing discrete events rather than continuous data streams.

    Unsupervised ML (UML), in which an algorithm identifies patterns on its own rather than searching for a user-specified pattern, is another alternative to data-hungry SML. One type of UML identifies unique features in a data set to allow users to discover anomalies of interest (e.g., evidence of hidden geothermal resources in seismic data) and to distinguish trends of interest (e.g., rapidly versus slowly declining production from oil and gas wells based on production rate transients) [Vesselinov et al., 2019].
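
    A minimal sketch of that unsupervised idea, using non-negative matrix factorization from scikit-learn as a simpler stand-in for the non-negative tensor factorization of Vesselinov et al.; the "well production" matrix here is synthetic and purely illustrative.

```python
# Unsupervised feature extraction: the algorithm recovers latent decline patterns
# from unlabeled production curves, and wells dominated by the fast-declining
# pattern can then be flagged. Synthetic data; illustrative only.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)

# Two hidden behaviours: fast decline and slow decline.
fast = np.exp(-8 * t)
slow = np.exp(-1 * t)

# 50 wells, each an unlabeled mixture of the two behaviours plus noise.
weights = rng.random((50, 2))
data = weights @ np.vstack([fast, slow]) + 0.01 * rng.random((50, 200))

# No labels are supplied; the factorization identifies the patterns on its own.
model = NMF(n_components=2, init="nndsvd", max_iter=1000, random_state=0)
W = model.fit_transform(data)      # per-well loadings on each latent behaviour
H = model.components_              # the recovered decline patterns themselves

decline_ratio = H[:, -1] / H[:, 0]         # late-time / early-time amplitude
fast_idx = int(np.argmin(decline_ratio))   # component that declines fastest
flagged = np.argsort(W[:, fast_idx])[-5:]  # wells loading most on that component
print("wells most dominated by fast decline:", flagged)
```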

    AI is also starting to improve the efficiency of geophysical sensors. Data storage limitations require instruments such as seismic stations, acoustic sensors, infrared cameras, and remote sensors to record and save data sets that are much smaller than the total amount of data they measure. Some sensors use AI to detect when “interesting” data are recorded, and these data are selectively stored. Sensor-based AI algorithms also help minimize the energy consumption of, and prolong the life of, sensors located in remote regions, which are difficult to service and often powered by a single solar panel. These techniques include quantized CNNs (using 8-bit variables) running on minimal hardware, such as a Raspberry Pi [Wilkes et al., 2017].
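
    The selective-recording idea can be illustrated with a classic short-term-average/long-term-average (STA/LTA) trigger in plain NumPy; real deployments such as those cited above run quantized CNNs on the sensor hardware itself, so this is only a sketch of the principle.

```python
# Keep only the "interesting" windows: a simple STA/LTA trigger flags samples
# where short-term energy jumps relative to the long-term background.
import numpy as np

rng = np.random.default_rng(1)
signal = 0.05 * rng.standard_normal(20_000)
signal[12_000:12_200] += np.sin(np.linspace(0, 40 * np.pi, 200))  # injected "event"

def sta_lta(x, sta_len=50, lta_len=1000):
    """Ratio of short-term to long-term average energy at each sample."""
    energy = x ** 2
    sta = np.convolve(energy, np.ones(sta_len) / sta_len, mode="same")
    lta = np.convolve(energy, np.ones(lta_len) / lta_len, mode="same")
    return sta / np.maximum(lta, 1e-12)

ratio = sta_lta(signal)
trigger = ratio > 5.0                       # "interesting" samples

# Only the triggered samples (plus, in practice, a window around them) would be
# stored, shrinking storage and transmission needs on a remote sensor.
kept = int(trigger.sum())
print(f"kept {kept} of {signal.size} samples ({100 * kept / signal.size:.2f}%)")
```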

    Advances in Computing Architectures

    Powerful, efficient algorithms and software represent only one part of the data revolution; the hardware and networks that we use to process and store data have evolved significantly as well.

    Since about 2004, when the increase in frequencies at which processors operate stalled at about 3 gigahertz (the end of Moore’s law), computing power has been augmented by increasing the number of cores per CPU and by the parallel work of cores in multiple CPUs, as in computing clusters.
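
    A toy sketch of that core-level parallelism, using only the Python standard library to spread independent "model runs" across all available CPU cores (the worker function is a stand-in, not a real simulation).

```python
# Map the same worker function across all CPU cores with a process pool.
import os
from multiprocessing import Pool

def simulate(seed: int) -> float:
    """Toy stand-in for one independent model run."""
    x = seed
    for _ in range(100_000):
        x = (x * 1103515245 + 12345) % 2**31
    return x / 2**31

if __name__ == "__main__":
    with Pool(processes=os.cpu_count()) as pool:
        results = pool.map(simulate, range(32))   # 32 runs spread over all cores
    print(f"{len(results)} runs on {os.cpu_count()} cores")
```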

    Accelerators such as graphics processing units (GPUs), once used mostly for video games, are now routinely used for AI applications and are at the heart of all major ML facilities (as well as the DOE’s Exascale Computing Project (US), a part of the National Strategic Computing Initiative – NSF (US)). For example, Summit and Sierra, the two fastest supercomputers in the United States, are based on a hierarchical CPU-GPU architecture.

    Meanwhile, emerging tensor processing units, which were developed specifically for matrix-based operations, excel at the most demanding tasks of most neural network algorithms. In the future, computers will likely become increasingly heterogeneous, with a single system combining several types of processors, including specialized ML coprocessors (e.g., Cerebras) and quantum computing processors.

    Computational systems that are physically distributed across remote locations and used on demand, usually called cloud computing, are also becoming more common, although these systems impose limitations on the code that can be run on them. For example, cloud infrastructures, in contrast to centralized HPC clusters and supercomputers, are not designed for performing large-scale parallel simulations. Cloud infrastructures face limitations on high-throughput interconnectivity, and the synchronization needed to help multiple computing nodes coordinate tasks is substantially more difficult to achieve for physically remote clusters. Although several cloud-based computing providers are now investing in high-throughput interconnectivity, the problem of synchronization will likely remain for the foreseeable future.

    Boosting 3D Simulations

    Artificial intelligence has proven invaluable in discovering and analyzing patterns in large, real-world data sets. It could also become a source of realistic artificial data sets, generated through models and simulations. Artificial data sets enable geophysicists to examine problems that are unwieldy or intractable using real-world data—because these data may be too costly or technically demanding to obtain—and to explore what-if scenarios or interconnected physical phenomena in isolation. For example, simulations could generate artificial data to help study seismic wave propagation; large-scale geodynamics; or flows of water, oil, and carbon dioxide through rock formations to assist in energy extraction and storage.

    HPC and cloud computing will help produce and run 3D models, not only assisting in improved visualization of natural processes but also allowing for investigation of processes that can’t be adequately studied with 2D modeling. In geodynamics, for example, using 2D modeling makes it difficult to calculate 3D phenomena like toroidal flow and vorticity because flow patterns are radically different in 3D. Meanwhile, phenomena like crustal porosity waves [Geophysical Research Letters] (waves of high porosity in rocks; Figure 2) and corridors of fast-moving ice in glaciers require extremely high spatial and temporal resolutions in 3D to capture [Räss et al., 2020].

    Fig. 2. A 3D modeling run with 16 billion degrees of freedom simulates flow focusing in porous media and identifies a pulsed behavior phenomenon called porosity waves. Credit: Räss et al. [2018], CC BY 4.0.

    Adding an additional dimension to a model can require a significant increase in the amount of data processed. For example, in exploration seismology, going from a 2D to a 3D simulation involves a transition from requiring three-dimensional data (i.e., source, receiver, time) to five-dimensional data (source x, source y, receiver x, receiver y, and time) [e.g., Witte et al., 2020]. AI can help with this transition. At the global scale, for example, the assimilation of 3D simulations in iterative full-waveform inversions for seismic imaging was performed recently with limited real-world data sets, employing AI techniques to maximize the amount of information extracted from seismic traces while maintaining the high quality of the data [Lei et al., 2020].
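
    A back-of-the-envelope sketch of that jump in data volume, with assumed (illustrative) survey dimensions rather than figures from the cited work:

```python
# Rough data-volume comparison between a 2-D and a 3-D seismic survey.
n_src, n_rec, n_t = 1_000, 1_000, 4_000    # sources, receivers, time samples
bytes_per_sample = 4                       # float32

# 2-D survey: (source, receiver, time)
size_2d = n_src * n_rec * n_t * bytes_per_sample

# 3-D survey: (source x, source y, receiver x, receiver y, time)
n_sx = n_sy = n_rx = n_ry = 100            # 100 x 100 grids of sources/receivers
size_3d = n_sx * n_sy * n_rx * n_ry * n_t * bytes_per_sample

print(f"2-D survey: {size_2d / 1e12:.2f} TB")   # ~0.02 TB
print(f"3-D survey: {size_3d / 1e12:.2f} TB")   # ~1600 TB
```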

    Emerging Methods and Enhancing Education

    As far as we’ve come in developing AI for uses in geoscientific research, there is plenty of room for growth in the algorithms and computing infrastructure already mentioned, as well as in other developing technologies. For example, interactive programming, in which the programmer develops new code while a program is active, and language-agnostic programming environments that can run code in a variety of languages are young techniques that will facilitate introducing computing to geoscientists.

    Programming languages such as Python and Julia, which are now being taught to Earth science students, will accompany the transition to these new methods and will be used in interactive environments such as the Jupyter Notebook. Julia, which compiles to efficient machine code, was recently shown to perform well for machine learning algorithms, particularly in implementations that use differentiable programming, which reduces computational resource and energy requirements.

    Quantum computing, which uses the quantum states of atoms rather than streams of electrons to transmit data, is another promising development that is still in its infancy but that may lead to the next major scientific revolution. It is forecast that by the end of this decade, quantum computers will be applied in solving many scientific problems, including those related to wave propagation, crustal stresses, atmospheric simulations, and other topics in the geosciences. With competition from China in developing quantum technologies and AI, quantum computing and quantum information applications may become darlings of major funding opportunities, offering the means for ambitious geophysicists to pursue fundamental research.

    Taking advantage of these new capabilities will, of course, require geoscientists who know how to use them. Today, many geoscientists face enormous pressure to requalify themselves for a rapidly changing job market and to keep pace with the growing complexity of computational technologies. Academia, meanwhile, faces the demanding task of designing innovative training to help students and others adapt to market conditions, although finding professionals who can teach these courses is challenging because they are in high demand in the private sector. However, such teaching opportunities could provide a point of entry for young scientists specializing in computer science or part-time positions for professionals retired from industry or national labs [Morra et al., 2021b].

    The coming decade will see a rapid revolution in data analytics that will significantly affect the processing and flow of information in the geosciences. Artificial intelligence and high-performance computing are the two central elements shaping this new landscape. Students and professionals in the geosciences will need new forms of education enabling them to rapidly learn the modern tools of data analytics and predictive modeling. If done well, the concurrence of these new tools and a workforce primed to capitalize on them could lead to new paradigm-shifting insights that, much as the plate tectonic revolution did, help us address major geoscientific questions in the future.

    Acknowledgments:

    The listed authors thank Peter Gerstoft, Scripps Institution of Oceanography (US), University of California, San Diego; Henry M. Tufo, University of Colorado-Boulder (US); and David A. Yuen, Columbia University (US) and Ocean University of China [中國海洋大學](CN), Qingdao, who contributed equally to the writing of this article.

    References:

    Bergen, K. J., et al. (2019), Machine learning for data-driven discovery in solid Earth geoscience, Science, 363(6433), eaau0323, https://doi.org/10.1126/science.aau0323.

    Kim, D., et al. (2020), Sequencing seismograms: A panoptic view of scattering in the core-mantle boundary region, Science, 368(6496), 1,223–1,228, https://doi.org/10.1126/science.aba8972.

    Kong, Q., et al. (2019), Machine learning in seismology: Turning data into insights, Seismol. Res. Lett., 90(1), 3–14, https://doi.org/10.1785/0220180259.

    Lei, W., et al. (2020), Global adjoint tomography—Model GLAD-M25, Geophys. J. Int., 223(1), 1–21, https://doi.org/10.1093/gji/ggaa253.

    Ma, L., et al. (2019), Deep learning in remote sensing applications: A meta-analysis and review, ISPRS J. Photogramm. Remote Sens., 152, 166–177, https://doi.org/10.1016/j.isprsjprs.2019.04.015.

    Morra, G., et al. (2021a), Fresh outlook on numerical methods for geodynamics. Part 1: Introduction and modeling, in Encyclopedia of Geology, 2nd ed., edited by D. Alderton and S. A. Elias, pp. 826–840, Academic, Cambridge, Mass., https://doi.org/10.1016/B978-0-08-102908-4.00110-7.

    Morra, G., et al. (2021b), Fresh outlook on numerical methods for geodynamics. Part 2: Big data, HPC, education, in Encyclopedia of Geology, 2nd ed., edited by D. Alderton and S. A. Elias, pp. 841–855, Academic, Cambridge, Mass., https://doi.org/10.1016/B978-0-08-102908-4.00111-9.

    Räss, L., N. S. C. Simon, and Y. Y. Podladchikov (2018), Spontaneous formation of fluid escape pipes from subsurface reservoirs, Sci. Rep., 8, 11116, https://doi.org/10.1038/s41598-018-29485-5.

    Räss, L., et al. (2020), Modelling thermomechanical ice deformation using an implicit pseudo-transient method (FastICE v1.0) based on graphical processing units (GPUs), Geosci. Model Dev., 13, 955–976, https://doi.org/10.5194/gmd-13-955-2020.

    Vesselinov, V. V., et al. (2019), Unsupervised machine learning based on non-negative tensor factorization for analyzing reactive-mixing, J. Comput. Phys., 395, 85–104, https://doi.org/10.1016/j.jcp.2019.05.039.

    Wilkes, T. C., et al. (2017), A low-cost smartphone sensor-based UV camera for volcanic SO2 emission measurements, Remote Sens., 9(1), 27, https://doi.org/10.3390/rs9010027.

    Witte, P. A., et al. (2020), An event-driven approach to serverless seismic imaging in the cloud, IEEE Trans. Parallel Distrib. Syst., 31, 2,032–2,049, https://doi.org/10.1109/TPDS.2020.2982626.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Eos is the leading source for trustworthy news and perspectives about the Earth and space sciences and their impact. Its namesake is Eos, the Greek goddess of the dawn, who represents the light shed on understanding our planet and its environment in space by the Earth and space sciences.

     
  • richardmitnick 1:18 pm on August 5, 2019 Permalink | Reply
    Tags: "Fermilab’s HEPCloud goes live", , , Cloud computing, , , ,   

    From Fermi National Accelerator Lab: “Fermilab’s HEPCloud goes live” 

    FNAL Art Image by Angela Gonzales

    From Fermi National Accelerator Lab, an enduring source of strength for the US contribution to scientific research worldwide.

    August 5, 2019
    Marcia Teckenbrock

    To meet the evolving needs of high-energy physics experiments, the underlying computing infrastructure must also evolve. Say hi to HEPCloud, the new, flexible way of meeting the peak computing demands of high-energy physics experiments using supercomputers, commercial services and other resources.

    Five years ago, Fermilab scientific computing experts began addressing the computing resource requirements for research occurring today and in the next decade. Back then, in 2014, some of Fermilab’s neutrino programs were just starting up. Looking further into the future, plans were under way for two big projects. One was Fermilab’s participation in the future High-Luminosity Large Hadron Collider at the European laboratory CERN.

    The other was the expansion of the Fermilab-hosted neutrino program, including the international Deep Underground Neutrino Experiment. All of these programs would be accompanied by unprecedented data demands.

    To meet these demands, the experts had to change the way they did business.

    HEPCloud, the flagship project pioneered by Fermilab, changes the computing landscape because it employs an elastic computing model. Tested successfully over the last couple of years, it officially went into production as a service for Fermilab researchers this spring.

    Scientists on Fermilab’s NOvA experiment were able to execute around 2 million hardware threads at the Office of Science’s National Energy Research Scientific Computing Center [the NERSC Cray Cori II supercomputer at LBNL, named after Gerty Cori, the first American woman to win a Nobel Prize in science]. And scientists on the CMS experiment have been running workflows using HEPCloud at NERSC as a pilot project. Photo: Roy Kaltschmidt, Lawrence Berkeley National Laboratory

    Experiments currently have some fixed computing capacity that meets, but doesn’t overshoot, their everyday needs. For times of peak demand, HEPCloud enables elasticity, allowing experiments to rent computing resources from other sources, such as supercomputers and commercial clouds, and managing those resources to satisfy peak demand. The prior method was to purchase local resources that, on a day-to-day basis, overshoot the experiments’ needs. In this new way, HEPCloud reduces the costs of providing computing capacity.
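
    The elastic idea can be caricatured in a few lines of Python. This is purely illustrative and not HEPCloud’s actual decision logic; all names and numbers are hypothetical.

```python
# Illustrative elastic-provisioning rule: run on the fixed local pool when
# possible, and "rent" outside capacity only when queued work exceeds it.
LOCAL_SLOTS = 10_000          # fixed, locally owned capacity (made-up number)

def provision(queued_jobs: int, local_slots: int = LOCAL_SLOTS) -> dict:
    """Decide how many jobs run locally and how many need rented capacity."""
    local = min(queued_jobs, local_slots)
    rented = max(queued_jobs - local_slots, 0)   # burst to cloud/HPC only at peaks
    return {"local": local, "rented": rented}

for demand in (4_000, 9_000, 150_000):           # valley, normal day, analysis peak
    plan = provision(demand)
    print(f"demand={demand:>7}: run {plan['local']} locally, rent {plan['rented']}")
```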

    “Traditionally, we would buy enough computers for peak capacity and put them in our local data center to cover our needs,” said Fermilab scientist Panagiotis Spentzouris, former HEPCloud project sponsor and a driving force behind HEPCloud. “However, the needs of experiments are not steady. They have peaks and valleys, so you want an elastic facility.”

    In addition, HEPCloud optimizes resource usage across all types, whether these resources are on site at Fermilab, on a grid such as Open Science Grid, in a cloud such as Amazon or Google, or at supercomputing centers like those run by the DOE Office of Science Advanced Scientific Computing Research program (ASCR). And it provides a uniform interface for scientists to easily access these resources without needing expert knowledge about where and how best to run their jobs.

    The idea to create a virtual facility to extend Fermilab’s computing resources began in 2014, when Spentzouris and Fermilab scientist Lothar Bauerdick began exploring ways to best provide resources for experiments at CERN’s Large Hadron Collider. The idea was to provide those resources based on the overall experiment needs rather than a certain amount of horsepower. After many planning sessions with computing experts from the CMS experiment at the LHC and beyond, and after a long period of hammering out the idea, a scientific facility called “One Facility” was born. DOE Associate Director of Science for High Energy Physics Jim Siegrist coined the name “HEPCloud” — a computing cloud for high-energy physics — during a general discussion about a solution for LHC computing demands. But interest beyond high-energy physics was also significant. DOE Associate Director of Science for Advanced Scientific Computing Research Barbara Helland was interested in HEPCloud for its relevancy to other Office of Science computing needs.

    The CMS detector at CERN collects data from particle collisions at the Large Hadron Collider. Now that HEPCloud is in production, CMS scientists will be able to run all of their physics workflows on the expanded resources made available through HEPCloud. Photo: CERN

    The project was a collaborative one. In addition to many individuals at Fermilab, Miron Livny at the University of Wisconsin-Madison contributed to the design, enabling HEPCloud to use the workload management system known as Condor (now HTCondor), which is used for all of the lab’s current grid activities.

    Since its inception, HEPCloud has achieved several milestones as it moved through the several development phases leading up to production. The project team first demonstrated the use of cloud computing on a significant scale in February 2016, when the CMS experiment used HEPCloud to achieve about 60,000 cores on the Amazon cloud, AWS. In November 2016, CMS again used HEPCloud to run 160,000 cores using Google Cloud Services, doubling the total size of the LHC’s computing worldwide. Most recently, in May 2018, NOvA scientists were able to execute around 2 million hardware threads at a supercomputer at the Office of Science’s National Energy Research Scientific Computing Center (NERSC), increasing both the scale and the amount of resources provided. During these activities, the experiments were executing and benefiting from real physics workflows. NOvA was even able to report significant scientific results at the Neutrino 2018 conference in Germany, one of the most attended conferences in neutrino physics.

    CMS has been running workflows using HEPCloud at NERSC as a pilot project. Now that HEPCloud is in production, CMS scientists will be able to run all of their physics workflows on the expanded resources made available through HEPCloud.

    Next, HEPCloud project members will work to expand the reach of HEPCloud even further, enabling experiments to use the leadership-class supercomputing facilities run by ASCR at Argonne National Laboratory and Oak Ridge National Laboratory.

    Fermilab experts are working to ensure that, eventually, all Fermilab experiments are configured to use these extended computing resources.

    This work is supported by the DOE Office of Science.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition


    Fermi National Accelerator Laboratory (Fermilab), located just outside Batavia, Illinois, near Chicago, is a US Department of Energy national laboratory specializing in high-energy particle physics. Fermilab is America’s premier laboratory for particle physics and accelerator research, funded by the U.S. Department of Energy. Thousands of scientists from universities and laboratories around the world collaborate at Fermilab on experiments at the frontiers of discovery.

     
  • richardmitnick 8:40 am on June 28, 2018 Permalink | Reply
    Tags: Cloud computing, Cloudy with a chance of disruption, Stanford Institute of Economic Policy Research (SIEPR)

    From Stanford University: “Cloudy with a chance of disruption” 

    From Stanford University


    Jun 21 2018
    May Wong

    Artificial intelligence may be getting the lion’s share of attention as the technological disruptor du jour. But another — albeit less sensational — advance could prove to be just as much of a game-changer: cloud computing.

    The growing availability of convenient, seemingly endless cloud-based digital storage and services is “democratizing computing,” says Stanford economist Nicholas Bloom. New research he conducted with Stanford doctoral candidate Nicolas Pierri shows unprecedented rates of adoption for cloud-based services, spreading computing capabilities to hundreds of thousands of firms across the nation.

    Most strikingly, their findings show that smaller, younger firms have been the quickest to take advantage of cloud computing — signaling a potential boon to entrepreneurship.

    “The popularity of cloud computing has exploded during the last half-decade,” says Bloom, a senior fellow at the Stanford Institute of Economic Policy Research (SIEPR).

    “This is not just a technology used by hipster startups in New York and San Francisco — it’s being adopted all across the country.”

    Cloud computing provides businesses with remote access to a pool of digital resources via the internet. The management of emails was one of the earliest such offerings, but now cloud services encompass all kinds of data storage and include IT infrastructure that provides the computer servers and related IT management to support behind-the-scenes business operations. Cloud providers also offer business service packages, such as data management, analytics and networking capabilities.

    Essentially, cloud computing gives companies of any size a quick way to outsource some or all of their IT, avoiding the fixed investment costs associated with IT staff, computing hardware and data server centers. And cloud costs vary according to usage, so companies can nimbly scale their product or service up or down. Expanding — even globally — can be easier, and more secure, with cloud computing.
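
    A small worked example of that trade-off, with entirely made-up prices and workloads, just to show the arithmetic of owning peak capacity versus paying per use:

```python
# Illustrative cost comparison: buy fixed capacity sized for peak demand, or pay
# only for the server-hours actually used. All numbers are assumptions.
PEAK_SERVERS = 100            # capacity needed only during peaks
SERVER_COST_PER_YEAR = 3_000  # assumed all-in cost of owning one server
CLOUD_COST_PER_SERVER_HOUR = 0.50

# Assumed usage: peak load 5% of the year, 20 servers' worth otherwise.
hours_per_year = 8_760
peak_hours = int(0.05 * hours_per_year)
server_hours_used = PEAK_SERVERS * peak_hours + 20 * (hours_per_year - peak_hours)

owned = PEAK_SERVERS * SERVER_COST_PER_YEAR                  # sized for the peak
cloud = server_hours_used * CLOUD_COST_PER_SERVER_HOUR       # pay per use

print(f"own peak capacity:   ${owned:,.0f}/year")
print(f"pay-as-you-go cloud: ${cloud:,.0f}/year")
```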

    For instance, cloud computing helps one small dairy analytics company store huge amounts of data on cow movement to track pregnancy and milk production potential. The network feature of cloud computing also helps another company analyze patient data and played a key role in expanding its services from Seattle to cities across the United States and as far away as Singapore and India.

    The increase in use of cloud computing comes amid an apparent U.S. slowdown in new business creation — a worrisome trend as economists are also finding that superstar mega firms have grown ever more powerful in market share.

    The democratizing force of cloud computing can benefit the Davids fighting Goliaths, the researchers say. And technology has been known to disrupt incumbents when they least expect it.

    “This is exactly the type of technology we need to both promote growth and hopefully address inequality by helping smaller startup firms,” Bloom says.

    By cutting the fixed costs of computing, now even the smallest firm can satisfy large and unexpected computing needs, he explains.

    It’s getting more cloudy

    Their research, believed to be the first comprehensive analysis of cloud computing adoption in the United States, tapped a massive dataset of over 1 million U.S. firms.

    Cloud adoption rates have more than doubled every year, rising from 0.3 percent in 2010 — the first year of tracking firm usage of the technology — to 7 percent in 2016.

    The steep rise has spanned across every broad industry group, including construction and manufacturing, the study found. And the technology has expanded broadly across U.S. counties, though urban and more educated areas were the earliest and heaviest adopters.

    Pierri, whose research interests are productivity and innovation, then discovered age and size patterns that harbor potentially significant implications.

    The smallest firms across all industries or firm types appear to be the quickest to utilize cloud computing. Firms with fewer than 25 employees had the highest adoption rates on average — with 10 to 15 percent of them using cloud services. Middle-sized firms with about 100 employees had the lowest adoption rates, while large firms of 500 employees or more had rates of 5 to 10 percent.


    Young firms are also embracing cloud computing faster than old ones — an indication that entrepreneurial companies are the pioneers of adoption.

    In contrast, looking back to the 1980s, two other major technologies — personal computers (PCs) and e-commerce — experienced a much slower uptake by small, young firms. Instead, those technologies had a “more classic” pattern of adoption, where larger, older firms tended to be the first adopters, the researchers say.

    Will it rain benefits?

    Evidently, cloud computing stands out as an unusual technology that appeals to small, young firms. Its ability to deliver high-powered computing without fixed overhead costs is probably a main reason for this, the researchers say.

    But how that greater operational agility — which economists have found to be valuable for companies in the face of uncertainty or fast-evolving competition — ultimately fits into broader economic forecasts is unclear.

    Research that measures the specific impact of cloud computing — from its potential to boost innovation, to give smaller firms a greater chance of survival against superstar firms, or to help reverse the nation’s decline in productivity — has yet to surface. These questions will be compelling research challenges in coming years as the technology proliferates and more data gets collected, Pierri says.

    “We don’t have the numbers on impact on firm growth, or on firm survival, so the data doesn’t allow us to put the last piece of the puzzle together yet,” Pierri says. “But this technology helps firms compete, and now we have a very clear sign that we have a force that goes in the other direction. This is very positive.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition


    Leland and Jane Stanford founded the University to “promote the public welfare by exercising an influence on behalf of humanity and civilization.” Stanford opened its doors in 1891, and more than a century later, it remains dedicated to finding solutions to the great challenges of the day and to preparing our students for leadership in today’s complex world. Stanford is an American private research university located in Stanford, California, on an 8,180-acre (3,310 ha) campus near Palo Alto. Since 1952, more than 54 Stanford faculty, staff, and alumni have won the Nobel Prize, including 19 current faculty members.


     
  • richardmitnick 1:48 pm on January 23, 2013 Permalink | Reply
    Tags: , Cloud computing, ,   

    From isgtw: “The future for science in Europe is bright – and full of clouds!” 

    January 23, 2013
    Andrew Purcell

    “Last week, the Helix Nebula consortium held an event at the European Space Agency’s ESRIN facility in Frascati, Italy, to review the success of the project’s proof-of-concept phase. Helix Nebula aims to pave the way for the development and exploitation of a European wide cloud computing infrastructure. While this is initially based on the needs of IT-intense scientific research organisations in Europe, Helix Nebula intends to also serve governmental organisations and industry and, as such, will also reflect the needs of these stakeholders.

    ESA’s ESRIN facility in Frascati, Italy, where the event was held. Image courtesy ESA.

    ‘Helix Nebula is a partnership that was born out of a vision,’ says Maryline Lengert, a senior advisor in the IT department of the European Space Agency (ESA), a founding partner of the initiative. ‘We want to operate as an ecosystem. Today, the market is fragmented, but we want to bring it together, and by doing so we will benefit from the stability of diversity.’


    ESA is working in collaboration with the French and German national space agencies, as well as the National Research Council in Italy, to create an Earth observation platform focusing on earthquake and volcano research. However, the maps created through this project can take over 300 hours of sequential computation time to complete, explains ESA’s Sveinung Loekken. ‘We want to put the processing of the maps onto the cloud, rather than on somebody’s workstation, which obviously struggles to handle it,’ says Loekken. ‘We want to give people access to large data processing capabilities. This is the raison d’être of the scheme.’

    This project is one of three flagship projects undertaken during Helix Nebula’s two-year pilot phase. Ramon Medrano Llamas presented findings from CERN’s flagship project, which has seen the organization gain access to more computing power to process data from the international ATLAS experiment at its Large Hadron Collider accelerator. This has given CERN the ability to dynamically acquire additional resources when needed. ‘The proof-of-concept deployment has been very successful,’ concludes Llamas. ‘Processing in the cloud clearly works.’ Over the longer term, it is also hoped that use of commercial cloud resources could become a useful addition to very large data centres owned and managed by the scientific community.”

    See the full article here.

    isgtw is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, iSGTW is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read iSGTW via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”


    ScienceSprings is powered by MAINGEAR computers

     