Tagged: AI

  • richardmitnick 10:10 am on July 30, 2019 Permalink | Reply
    Tags: "Human intelligence is the key to the Artificial Intelligence age", AI, AI is the collection of interrelated technologies such as natural language processing; speech recognition; computer vision; machine learning and automated reasoning., Encouraging Australians to embrace emerging technology, Giving machines the ability to perform tasks and solve problems otherwise requiring human cognition.,   

    From University of New South Wales: “Human intelligence is the key to the Artificial Intelligence age” 


    From University of New South Wales

    30 Jul 2019

    Louise Templeton
    Corporate Communications
    02-9385 0857
    LOUISE.TEMPLETON@UNSW.EDU.AU

    Artificial Intelligence (AI) can enhance Australia’s wellbeing, lift the economy, improve environmental sustainability and create a more inclusive and fair society.

    A new report highlights how the nation would benefit from AI. No image credit found.

    A report from the Australian Council of Learned Academies (ACOLA), titled: The Effective and Ethical Development of Artificial Intelligence – An Opportunity to Improve our Wellbeing, encourages Australians to embrace emerging technology.

    The panel, co-chaired by UNSW Sydney Professor Toby Walsh, urges Australians to reflect on what AI-enabled future the nation wants, as the future impact of AI on our society will be ultimately determined by decisions taken today.

    AI is the collection of interrelated technologies, such as natural language processing, speech recognition, computer vision, machine learning and automated reasoning, that gives machines the ability to perform tasks and solve problems that would otherwise require human cognition.

    “With careful planning, AI offers great opportunities for Australia, provided we ensure that the use of the technology does not compromise our human values. As a nation, we should look to set the global example for the responsible adoption of AI,” Professor Walsh said.

    Launching the report, Australia’s Chief Scientist Dr Alan Finkel emphasized that nations had choices.

    “This report was commissioned by the National Science and Technology Council, to develop an intellectual context for our human society to turn to in deciding what living well in this new era will mean,” Dr Finkel said.

    “What kind of society do we want to be? That is the crucial question for all Australians, and for governments as our elected representatives.”

    The findings recognize the importance of having a national strategy, a community awareness campaign, safe and accessible digital infrastructure, a responsive regulatory system, and a diverse and highly skilled workforce.

    “By bringing together Australia’s leading experts from the sciences, technology and engineering, humanities, arts and social sciences, this ACOLA report comprehensively examines the key issues arising from the development and implementation of AI technologies, and importantly places the wellbeing of society at the centre of any development,” Professor Hugh Bradlow, Chair of the ACOLA Board, said.

    ACOLA’s report is the fourth in the Horizon Scanning series, each scoping the human implications of fast-evolving technologies in the decade ahead.
    The project was supported by the Australian Research Council; the Department of Industry, Innovation and Science; and the Department of Prime Minister and Cabinet.

    ACOLA’s expert working group:
    Professor Toby Walsh FAA (co-chair), Professor Neil Levy FAHA (co-chair), Professor Genevieve Bell FTSE, Professor Anthony Elliot FASSA, Professor Fiona Wood AM FAHMS, Professor James Maclaurin, Professor Iven Mareels FTSE.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    UNSW Campus

    Welcome to UNSW Australia (The University of New South Wales), one of Australia’s leading research and teaching universities. At UNSW, we take pride in the broad range and high quality of our teaching programs. Our teaching gains strength and currency from our research activities, strong industry links and our international nature; UNSW has a strong regional and global engagement.

    In developing new ideas and promoting lasting knowledge we are creating an academic environment where outstanding students and scholars from around the world can be inspired to excel in their programs of study and research. Partnerships with both local and global communities allow UNSW to share knowledge, debate and research outcomes. UNSW’s public events include concert performances, open days and public forums on issues such as the environment, healthcare and global politics. We encourage you to explore the UNSW website so you can find out more about what we do.

     
  • richardmitnick 8:17 am on July 26, 2019 Permalink | Reply
    Tags: AI

    From National Geographic: “How artificial intelligence can tackle climate change” 


    From National Geographic

    July 18, 2019
    Jackie Snow

    Steam and smoke rise from the cooling towers and chimneys of a power plant. Artificial intelligence is being used to prove the case that plants that burn carbon-based fuels aren’t profitable. natgeo.com

    The biggest challenge on the planet might benefit from machine learning to help with solutions. Here are just a few.

    Climate change is the biggest challenge facing the planet. It will need every solution possible, including technology like artificial intelligence (AI).

    Seeing a chance to help the cause, some of the biggest names in AI and machine learning—a discipline within the field—recently published a paper called Tackling Climate Change with Machine Learning. The paper, which was discussed at a workshop during a major AI conference in June, was a “call to arms” to bring researchers together, said David Rolnick, a University of Pennsylvania postdoctoral fellow and one of the authors.

    “It’s surprising how many problems machine learning can meaningfully contribute to,” says Rolnick, who also helped organize the June workshop.

    The paper offers up 13 areas where machine learning can be deployed, including energy production, CO2 removal, education, solar geoengineering, and finance. Within these fields, the possibilities include more energy-efficient buildings, creating new low-carbon materials, better monitoring of deforestation, and greener transportation. However, despite the potential, Rolnick points out that this is early days and AI can’t solve everything.

    “AI is not a silver bullet,” he says.

    And though it might not be a perfect solution, it is bringing new insights into the problem. Here are three ways machine learning can help combat climate change.

    Better climate predictions

    This push builds on the work already done by climate informatics, a discipline created in 2011 that sits at the intersection of data science and climate science. Climate informatics covers a range of topics: improving the prediction of extreme events such as hurricanes; paleoclimatology, such as reconstructing past climate conditions from data sources like ice cores; climate downscaling, which uses large-scale models to predict weather at a hyper-local level; and the socio-economic impacts of weather and climate.

    AI can also unlock new insights from the massive amounts of complex climate simulations generated by the field of climate modeling, which has come a long way since the first system was created at Princeton in the 1960s. Dozens of models have since come into existence, all representing the atmosphere, oceans, land and cryosphere (ice). But even with agreement on basic scientific assumptions, Claire Monteleoni, a computer science professor at the University of Colorado, Boulder and a co-founder of climate informatics, points out that while the models generally agree in the short term, differences emerge when it comes to long-term forecasts.

    “There’s a lot of uncertainty,” Monteleoni said. “They don’t even agree on how precipitation will change in the future.”

    One project Monteleoni worked on uses machine learning algorithms to combine the predictions of the approximately 30 climate models used by the Intergovernmental Panel on Climate Change. Better predictions can help officials make informed climate policy, allow governments to prepare for change, and potentially uncover areas that could reverse some effects of climate change.
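
    How such a combination can work is easy to sketch: treat each climate model as an “expert” and let models that track observations well accumulate weight. The snippet below is a generic multiplicative-weights illustration, not Monteleoni’s published algorithm; every name and number in it is invented for the demonstration.

```python
# Minimal sketch: blend an ensemble of climate-model predictions with
# multiplicative-weights updates, so models that match observations
# gain influence over time. Illustrative only.
import numpy as np

def combine_ensemble(predictions, observations, eta=0.5):
    """predictions: (n_models, n_steps); observations: (n_steps,)."""
    n_models, n_steps = predictions.shape
    weights = np.full(n_models, 1.0 / n_models)    # start with equal trust
    combined = np.empty(n_steps)
    for t in range(n_steps):
        combined[t] = weights @ predictions[:, t]  # weighted forecast
        losses = (predictions[:, t] - observations[t]) ** 2
        weights *= np.exp(-eta * losses)           # penalize recent misses
        weights /= weights.sum()                   # renormalize
    return combined

# Toy usage: 30 noisy "models" tracking a warming trend.
rng = np.random.default_rng(0)
truth = np.linspace(14.0, 15.0, 100)               # global mean temp, °C
models = truth + rng.normal(0.0, 0.3, size=(30, 100))
print(combine_ensemble(models, truth)[-5:])
```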

    Showing the effects of extreme weather

    Some homeowners have already experienced the effects of a changing environment. For others, it might seem less tangible. To make it more vivid for more people, researchers from the Montreal Institute for Learning Algorithms (MILA), Microsoft, and ConscientAI Labs used generative adversarial networks (GANs), a type of AI, to simulate what homes are likely to look like after being damaged by rising sea levels and more intense storms.

    “Our goal is not to convince people climate change is real, it’s to get people who do believe it is real to do more about that,” said Victor Schmidt, a co-author of the paper and Ph.D. candidate at MILA.

    So far, MILA researchers have met with Montreal city officials and NGOs eager to use the tool. Future plans include releasing an app to show individuals what their neighborhoods and homes might look like in the future with different climate change outcomes. But the app will need more data, and Schmidt said they eventually want to let people upload photos of floods and forest fires to improve the algorithm.

    “We want to empower these communities to help,” he said.

    Measuring where carbon is coming from

    Carbon Tracker is an independent financial think tank working toward the UN goal of preventing new coal plants from being built by 2020. By monitoring coal plant emissions with satellite imagery, Carbon Tracker can use the data it gathers to convince the finance industry that coal plants aren’t profitable.

    A grant from Google is expanding the nonprofit’s satellite imagery efforts to include gas-powered plants’ emissions and get a better sense of where air pollution is coming from. While there are continuous monitoring systems near power plants that can measure CO2 emissions more directly, they do not have global reach.

    “This can be used worldwide in places that aren’t monitoring,” said Durand D’souza, a data scientist at Carbon Tracker. “And we don’t have to ask permission.”

    AI can automate the analysis of images of power plants to get regular updates on emissions. It also introduces new ways to measure a plant’s impact, by crunching numbers on nearby infrastructure and electricity use. That’s handy for gas-powered plants, which don’t have the easy-to-measure plumes that coal-powered plants produce.
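
    In outline, this kind of indirect estimation is a supervised-regression problem: train on plants whose output is known, then predict for the rest. The sketch below uses synthetic data and invented feature names purely to show the shape of such a pipeline; it is not Carbon Tracker’s actual model.

```python
# Hedged sketch: regress a plant's emissions on satellite-derived and
# infrastructure features. All features and data here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([
    rng.uniform(0, 1, n),      # cooling-activity score from imagery (assumed)
    rng.uniform(100, 900, n),  # nameplate capacity, MW (public records)
    rng.uniform(0, 1, n),      # regional electricity-demand index (assumed)
])
# Synthetic "true" emissions in tCO2/day, for the demonstration only.
y = 0.9 * X[:, 0] * X[:, 1] + 50 * X[:, 2] + rng.normal(0, 20, n)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:400], y[:400])          # train on plants with "known" emissions
print("held-out R^2:", round(model.score(X[400:], y[400:]), 3))
```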

    Carbon Tracker will now crunch emissions for 4,000 to 5,000 power plants, getting much more information than is currently available, and make it public. In the future, if a carbon tax passes, Carbon Tracker’s remote sensing could help put a price on emissions and pinpoint those responsible for them.

    “Machine learning is going to help a lot in this field,” D’souza said.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The National Geographic Society has been inspiring people to care about the planet since 1888. It is one of the largest nonprofit scientific and educational institutions in the world. Its interests include geography, archaeology and natural science, and the promotion of environmental and historical conservation.

     
  • richardmitnick 7:52 am on July 15, 2019 Permalink | Reply
    Tags: AI, MTU-Michigan Technological University

    From Michigan Technological University: “AI Searches for New Nanomaterials” 


    From Michigan Technological University

    By Jenny Woodman
    Computing And Data Science
    Science And Engineering

    The researchers:

    Yoke Khin Yap
    ykyap@mtu.edu
    906-487-2900

    Susanta Ghosh
    susantag@mtu.edu
    906-487-2689

    Michigan Technological University

    There’s a method to my development of new nanomaterials.

    Marilyn Monroe may have convinced previous generations that diamonds are a girl’s best friend, but in the future people may not sing the praises of sparkly carbon-based gemstones; they might be singing about new nanomaterials related to graphene and nanotubes. Thanks to a multidisciplinary collaboration using artificial intelligence (AI), physicist Yoke Khin Yap at Michigan Technological University and a team of researchers hope to develop new nanomaterials that are smaller and much stronger than diamonds.

    About Those Nanotubes…

    Nanotubes and graphene are a class of low-dimensional materials that can be composed of carbon or some combination of carbon, boron or nitrogen (B-C-N nanomaterials). The molecular bonds that join the atoms to form these nanomaterials are remarkably strong and they have wide-ranging applications, from water purification to biomedical research, from solar cells to semiconductors for computing and electronics.

    DFT, AI, and Other Helpful Acronyms

    The work happens at a nanoscale — and Yap’s team needs an atomic window. Thanks to an instrumentation grant from the National Science Foundation (NSF), Michigan Tech purchased a scanning transmission electron microscope (STEM) in 2018. According to Yap, the system “allows us to image nanomaterials at the atomic resolution, and at the same time we can touch them; we can probe them; we can characterize them in situ.”

    360-Degree View of the STEM. Michigan Technological University

    But getting a material to the microscope is a long road. What if new materials could be invented before they’re seen?

    Yap’s work was inspired by Density Functional Theory (DFT) developed in the 1970s. DFT is widely used in both academia and industry to study materials and predict behaviors at an atomic level, including the structure and electronic properties. In the early days of the theory, there were limitations; DFT was not terribly accurate. According to Yap, DFT told experimentalists important information about the properties of a new structure, but offered little insight about how to make those theoretical materials a reality. He adds that DFT predictions are better suited to the nanoscale, involving 100 atoms or fewer. Therefore, predicting new nanomaterials is the sweet spot of DFT.

    But AI makes the process even more interesting and sophisticated.

    Computer scientists plug in data from current published research on new materials and then compare that information using a popular type of vector-space modeling called Word Embedding. The researchers are looking for places where keywords, such as carbon nanotubes and graphene, might overlap with properties such as band gap or mobility, said Yap.
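
    The overlap test amounts to measuring distances in the embedding space: if a material’s vector sits close to a property’s vector, the literature tends to discuss them together. Below is a toy cosine-similarity check with made-up three-dimensional vectors; a real system would use embeddings trained on the published materials literature.

```python
# Toy word-embedding proximity check. The vectors are invented;
# real embeddings would come from a model trained on research papers.
import numpy as np

emb = {
    "carbon_nanotube": np.array([0.9, 0.2, 0.1]),
    "graphene":        np.array([0.8, 0.3, 0.0]),
    "band_gap":        np.array([0.7, 0.4, 0.2]),
    "mobility":        np.array([0.1, 0.9, 0.8]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means the terms point the same way in the space.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for material in ("carbon_nanotube", "graphene"):
    for prop in ("band_gap", "mobility"):
        print(f"{material} ~ {prop}: {cosine(emb[material], emb[prop]):.2f}")
```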

    “They use a computer to dig into all the theoretical predictions that are being published by physicists and chemists,” Yap said. “This brings together all the kinds of theory out there and we find there is a subset that potentially more people agree upon.”

    From there, the researchers take those data-mined results and feed them into a convolutional neural network (CNN), which is a type of machine deep-learning network wherein the computer applies logic to make predictions. By applying layer upon layer of filters, the CNN reveals even more patterns in the data.
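
    A convolutional network in this sense is just stacked filter layers feeding a small prediction head. The minimal PyTorch sketch below shows that structure; the sizes are arbitrary, and it is not the team’s actual model.

```python
# Minimal CNN: two filter layers, then a classifier head.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 16x16 -> 8x8
        )
        self.head = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):
        # Each filter layer exposes higher-level patterns in the input.
        return self.head(self.features(x).flatten(1))

logits = TinyCNN()(torch.randn(4, 1, 32, 32))  # batch of 4 toy inputs
print(logits.shape)                            # torch.Size([4, 2])
```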

    Susanta Ghosh is an assistant professor of mechanical engineering at Michigan Tech and works on new materials with Yap. He said, “The biggest challenge in the Materials by Design paradigm is the high-dimensionality of the material design space due to the vast amount of possible combinations or conditions that lead to different materials.”

    In other words, the design process is complicated by a staggeringly high number of dimensions.

    “Data-driven modeling integrated with experiments or simulations is showing tremendous promise to overcome this challenge,” Ghosh added. “Data-driven modeling, such as machine learning, is opening new possibilities for creating structure-property relations across diverse material length and time scales and enabling optimization in the microstructural space for material design.”

    The process is lengthy — and like DFT it’s limited to identifying what materials are possible. Yap said further research adding another layer of DFT modeling, focused on the dynamics of chemical reactions, may reinforce the CNN’s predictions and offer insights about how to actually fabricate the materials. He anticipates seeing new materials in the next 10 to 15 years.

    “It is complicated and it will become a very interdisciplinary collaboration between theorists, computer scientists, experimental physicists and chemists to make the new nanomaterials,” Yap said.

    This multidisciplinary approach advances existing knowledge and theory about the next generation of materials for computing, medicine, engineering and more. With AI and atomic imaging, possibilities won’t lose their shape and nanomaterials — not diamonds — are a physicist’s best friend.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Michigan Tech Campus
    Michigan Technological University (http://www.mtu.edu) is a leading public research university developing new technologies and preparing students to create the future for a prosperous and sustainable world. Michigan Tech offers more than 130 undergraduate and graduate degree programs in engineering; forest resources; computing; technology; business; economics; natural, physical and environmental sciences; arts; humanities; and social sciences.
    The College of Sciences and Arts (CSA) fills one of the most important roles on the Michigan Tech campus. We play a part in the education of every student who comes through our doors. We take pride in offering essential foundational courses in the natural sciences and mathematics, as well as the social sciences and humanities—courses that underpin every major on campus. With twelve departments, 28 majors, 30-or-so specializations, and more than 50 minors, CSA has carefully developed programs to suit many interests and skill sets. From sound design and audio technology to actuarial science, applied cognitive science and human factors to rhetoric and technical communication, the college offers many unique programs.

     
  • richardmitnick 1:06 pm on June 30, 2019 Permalink | Reply
    Tags: AI

    From COSMOS Magazine: “Thanks to AI, we know we can teleport qubits in the real world” 


    From COSMOS Magazine

    26 June 2019
    Gabriella Bernardi

    Deep learning shows its worth in the world of quantum computing.

    We’re coming to terms with quantum computing, (qu)bit by (qu)bit.
    MEHAU KULYK/GETTY IMAGES

    Italian researchers have shown that it is possible to teleport a quantum bit (or qubit) in what might be called a real-world situation.

    And they did it by letting artificial intelligence do much of the thinking.

    The phenomenon of qubit transfer is not new, but this work, which was led by Enrico Prati of the Institute of Photonics and Nanotechnologies in Milan, is the first to do it in a situation where the system deviates from ideal conditions.

    Moreover, it is the first time that a class of machine-learning algorithms known as deep reinforcement learning has been applied to a quantum computing problem.

    The findings are published in a paper in the journal Communications Physics.

    One of the basic problems in quantum computing is finding a fast and reliable method to move the qubit – the basic piece of quantum information – in the machine. This piece of information is coded by a single electron that has to be moved between two positions without passing through any of the space in between.

    In the so-called “adiabatic”, or thermodynamic, quantum computing approach, this can be achieved by applying a specific sequence of laser pulses to a chain of an odd number of quantum dots – identical sites in which the electron can be placed.

    It is a purely quantum process, and a solution to the problem was invented by Nikolay Vitanov of the Helsinki Institute of Physics in 1999. Because it is so distant from common-sense intuition, this solution is called a “counterintuitive” sequence.

    However, the method applies only in ideal conditions, when the electron state suffers no disturbances or perturbations.

    Thus, Prati and colleagues Riccardo Porotti and Dario Tamaschelli of the University of Milan and Marcello Restelli of the Milan Polytechnic, took a different approach.

    “We decided to test the deep learning’s artificial intelligence, which has already been much talked about for having defeated the world champion at the game Go, and for more serious applications such as the recognition of breast cancer, applying it to the field of quantum computers,” Prati says.

    Deep learning techniques are based on artificial neural networks arranged in different layers, each of which calculates the values for the next one so that the information is processed more and more completely.

    Usually, a set of known answers to the problem is used to “train” the network, but when these are not known, another technique called “reinforcement learning” can be used.

    In this approach two neural networks are used: an “actor” has the task of finding new solutions, and a “critic” must assess the quality of these solutions. Provided the researchers can supply a reliable way to judge the results, these two networks can examine the problem independently.
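
    Structurally, that loop can be sketched as follows: the actor samples an action, the critic scores the outcome, and the gap between the reward and the critic’s estimate drives both updates. This toy PyTorch example is trained on a trivial one-step task, not on the qubit-transfer problem from the paper.

```python
# Toy actor-critic: the actor proposes actions, the critic judges them.
import torch
import torch.nn as nn

actor = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 2))   # action logits
critic = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 1))  # state value
opt = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=1e-2)

for step in range(200):
    state = torch.randn(4)
    dist = torch.distributions.Categorical(logits=actor(state))
    action = dist.sample()
    # Invented one-step reward: act "right" when the first input is positive.
    reward = 1.0 if action.item() == int(state[0] > 0) else 0.0
    value = critic(state).squeeze()
    advantage = reward - value.detach()      # how much better than expected?
    loss = -dist.log_prob(action) * advantage + (reward - value) ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()
print("toy actor-critic training finished")
```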

    The researchers then set up this artificial intelligence method, assigning it the task of discovering on its own how to control the qubit.

    “So, we let artificial intelligence find its own solution, without giving it preconceptions or examples,” Prati says. “It found another solution that is faster than the original one, and furthermore it adapts when there are disturbances.”

    In other words, he adds, artificial intelligence “has understood the phenomenon and generalised the result better than us”.

    “It is as if artificial intelligence was able to discover by itself how to teleport qubits regardless of the disturbance in place, even in cases where we do not already have any solution,” he explains.

    “With this work we have shown that the design and control of quantum computers can benefit from the use of artificial intelligence.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 12:32 pm on May 27, 2019 Permalink | Reply
    Tags: AI, DLR's German Remote Sensing Data Center (DFD), the Leibniz Computer Centre (LRZ)

    From DLR German Aerospace Center: “Terra_Byte – Top computing power for researching global change” 


    From DLR German Aerospace Center

    Contacts

    Falk Dambowsky
    German Aerospace Center (DLR)
    Media Relations
    Tel.: +49 2203 601-3959

    Prof. Dr Stefan Dech
    German Aerospace Center (DLR)
    Earth Observation Center (EOC) – German Remote Sensing Data Center
    Tel.: +49 8153 28-2885
    Fax: +49 8153 28-3444

    Dr. rer. nat. Vanessa Keuck
    German Aerospace Center (DLR)
    Programme Strategy Space Research and Technology
    Tel.: +49 228 601-5555

    Dr Ludger Palm
    Leibniz Computer Centre (LRZ)
    Tel.: +49 89 35831-8792

    Germany’s SuperMUC-NG supercomputer goes live. DatacenterDynamics

    One of Europe’s largest supercomputing centres – the Leibniz Computer Centre (LRZ) of the Bavarian Academy of Sciences – and Europe’s largest space research institution – the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt; DLR) – will work together to evaluate, on a daily basis, the vast quantities of data on the state of our planet acquired by Earth observation satellites, alongside other global data sources such as social networks.

    “The collaboration between DLR and the LRZ marks a milestone in the development of future-oriented research within Bavaria, a hub for science! This project illustrates the wealth of resources that the Munich research landscape has to offer in this area,” said Bavaria’s Minister of Science and Arts/Culture, Bernd Sibler, at the signing of the cooperation agreement between the partner institutions on 27 May 2019 in Garching. With this cooperation, DLR and the LRZ are pooling their vast expertise in the fields of satellite-based Earth observation and supercomputing.

    “To understand the processes of global change and their development we must be able to evaluate the data from our satellites as effectively as possible,” stressed Hansjörg Dittus, DLR Executive Board Member for Space Research and Technology. “In future, the cooperation between DLR and the LRZ will make it possible to analyse vast quantities of data using the latest methods independently and highly efficiently, to aid in our understanding of global change and its consequences. Examples of this are increasing urbanisation, the expansion of agricultural land use across the globe at the expense of natural ecosystems and the rapid changes occurring in the Earth’s polar regions and in the atmosphere, which will have an undisputed impact on humankind. We will contribute our innovations and technology from space research, as well as our own sensor data to the analysis.”

    Dieter Kranzlmüller, Director of the LRZ, says, “The collaboration between these two leading research institutions brings together two partners that complement each other perfectly and contribute their relevant expertise, resources and research topics. The Leibniz Computing Centre has proven experience as an innovative provider of IT services and a high-performance computing centre. It is also a reliable and capable partner for Bavarian universities and will, in future, cooperate with DLR and its institutes in Oberpfaffenhofen.”

    Huge volumes of Earth observation data

    Every day, Earth observation satellites generate vast quantities of data at such high resolution that conventional evaluation methods have long been pushed to their limits. “Only the combination of the online availability of a wide range of historical and current data stocks with cutting-edge supercomputing systems will make it possible for our researchers to derive high-resolution global information that will enable us to make statements about the development and evolution of Earth. Artificial intelligence methods are playing an increasingly important role in fully automated analysis. This enables us to identify phenomena and developments in ways that would be difficult to detect using conventional methods,” says Stefan Dech, Director of the DLR German Remote Sensing Data Center. “This cooperation is key for the DLR institutes in Oberpfaffenhofen involved in research into satellite-based Earth observation. We can now carry out a range of global methodological and geoscientific analyses, which until now have been restricted to example cases by the sheer quantity of data and limited computing power. The technological data concept jointly developed by DLR and the LRZ is particularly important, as it will link the LRZ up with DLR’s German Satellite Data Archive in Oberpfaffenhofen, and, in addition to making global data stocks available online, will link historical data from our archive and DLR’s own data,” continues Dech.

    A challenge for data analysis

    To cite one example, the volume of data from the European Earth observation programme Copernicus has already exceeded 10 petabytes.


    ESA Sentinels (Copernicus)

    One petabyte is equivalent to the content of around 223,000 DVDs – which would weigh approximately 3.5 tonnes. By 2024, the Sentinel satellites of the Copernicus programme will have produced over 40 petabytes of data. These will be supplemented by even more petabytes worth of data from national Earth observation missions, such as DLR’s TerraSAR-X and TanDEM-X radar satellites, and US Landsat data.
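
    The DVD arithmetic is easy to check, assuming a 4.7 GB single-layer disc weighing roughly 16 grams; the exact count depends on the per-disc figures used, but it lands close to the numbers quoted.

```python
# Sanity check of the petabyte-to-DVD comparison (assumed disc specs).
PB = 10**15                             # one petabyte, decimal convention
dvds = PB / (4.7 * 10**9)               # discs needed to hold it
print(f"{dvds:,.0f} DVDs")              # ~212,766 (article: ~223,000)
print(f"{dvds * 16 / 1e6:.1f} tonnes")  # at 16 g/disc: ~3.4 t (article: ~3.5 t)
```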

    DLR TerraSAR-X Satellite

    DLR TanDEM-X satellite

    NASA LandSat 8

    However, it is not only the large amounts of data from the satellite missions that are currently presenting scientists with challenges, but also data on global change that are published on social networks. While these are valuable sources, challenges arise because these data are extremely disparate, their accuracy is uncertain and they are only available for a limited period of time.

    DLR researchers are thus increasingly using artificial intelligence (AI) and machine learning methods to identify trends in global change and analyses of natural disasters and environmental contexts in global and regional time series spanning several decades. But these methods require that the necessary data be available online, on high-performance data analytics platforms (HPDAs). The technical objective of this collaboration is to set up such a platform, providing researchers with access to all of the necessary Earth observation data via DLR’s German Satellite Data Archive (D-SDA) in Oberpfaffenhofen and data distribution points of various providers of freely available satellite data.
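
    As a flavour of what identifying trends in multi-decade time series involves, the sketch below fits a linear trend plus a seasonal cycle to a synthetic monthly series and reads off the slope. Real DLR analyses are far richer; every number here is invented.

```python
# Trend extraction from a noisy, seasonal time series (synthetic data).
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1990, 2020, 1 / 12)           # monthly samples
series = (0.03 * (years - 1990)                 # slow long-term trend
          + 0.2 * np.sin(2 * np.pi * years)     # seasonal cycle
          + rng.normal(0, 0.05, years.size))    # sensor noise

# Least-squares fit of trend + one seasonal harmonic + offset.
A = np.column_stack([years - 1990,
                     np.sin(2 * np.pi * years), np.cos(2 * np.pi * years),
                     np.ones_like(years)])
coef, *_ = np.linalg.lstsq(A, series, rcond=None)
print(f"estimated trend: {coef[0]:.3f} units/year")  # close to the true 0.030
```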

    DLR’s German Remote Sensing Data Center (DFD) will coordinate the activities of the participating DLR institutes. In addition to the DFD, the Remote Sensing Technology Institute, the Institute for Atmospheric Physics and the Microwaves and Radar Institute in Oberpfaffenhofen are involved in the project. The Institute of Data Science in Jena and the Simulation and Software Technology Facility in Cologne are also involved in the implementation of the technology.

    Cooperation on global change

    As part of the collaboration, DLR will address issues relating to environmental development and global change, methodological and algorithmic process development in physical modelling and artificial intelligence, the management of long-term archives and the processing of large data volumes.

    The LRZ focuses on the research and implementation of operational, scalable, secure and reliable IT services and technologies, the optimisation of processes and procedures, supercomputing and cloud computing, as well as the use of artificial intelligence and Big Data methods. The LRZ’s existing IT systems (including the SuperMUC-NG supercomputer) and its experience with energy-efficient supercomputing will also prove useful.

    The plan is to make around 40 petabytes available online for thousands of computing cores. DLR and the LRZ are arranging joint investment in the project, with the first stage of expansion planned for late 2020. The new HPDA platform will be integrated into the LRZ’s existing infrastructure in Garching, near Munich. Most of the data on the platform will also be freely and openly available to scientists from Bavarian universities and higher education institutions.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    DLR Center

    DLR is the national aeronautics and space research centre of the Federal Republic of Germany. Its extensive research and development work in aeronautics, space, energy, transport and security is integrated into national and international cooperative ventures. In addition to its own research, as Germany’s space agency, DLR has been given responsibility by the federal government for the planning and implementation of the German space programme. DLR is also the umbrella organisation for the nation’s largest project management agency.

    DLR has approximately 8000 employees at 16 locations in Germany: Cologne (headquarters), Augsburg, Berlin, Bonn, Braunschweig, Bremen, Goettingen, Hamburg, Juelich, Lampoldshausen, Neustrelitz, Oberpfaffenhofen, Stade, Stuttgart, Trauen, and Weilheim. DLR also has offices in Brussels, Paris, Tokyo and Washington D.C.

     
  • richardmitnick 11:04 am on May 11, 2019 Permalink | Reply
    Tags: AI, BlueData software platform, HPE Apollo systems

    From insideHPC: “HPE BlueData Appliance Accelerates AI and Data-Driven Innovation” 

    From insideHPC

    Today HPE announced a powerful new appliance for AI and data-driven innovation based on the HPE BlueData software platform and HPE Apollo systems.


    “To stay a step ahead of the competition, enterprise organizations in every industry are embarking on AI-enabled and data-driven digital transformation initiatives,” said Milan Shetti, GM, HPE Storage. “Having BlueData as part of HPE’s portfolio extends our best-in-class solutions for these customers, enabling us to provide differentiated hybrid IT solutions for AI, machine learning, and advanced analytics.”

    The announcement is an important milestone in HPE’s ongoing strategy for the AI market. Acquired by HPE in 2018, BlueData is also now available as a standalone HPE software solution. Customers can continue to run BlueData software on any infrastructure, leveraging the portability of containers across on-premises, hybrid cloud, and multi-cloud environments.

    Enterprise adoption of AI is accelerating rapidly. The number of enterprises implementing AI grew 270% in the past four years, according to Gartner’s recent 2019 CIO Survey. However, organizations face many challenges in executing a successful AI strategy, including deployment complexity for distributed AI and analytics as well as the shortage of skilled data scientists and AI/ML developers.

    The container-based BlueData software platform provides customers with simple, one-click automated deployment for their AI and analytics tools of choice – and ensures enterprise-grade security, scalability and performance. Data science teams and AI/ML developers can focus on what they do best, with greater productivity and efficiency. These teams can rapidly build models and develop data pipelines to drive business innovation – without worrying about the underlying infrastructure.

    Customers can leverage the following capabilities and services from HPE and BlueData to accelerate their AI initiatives:

    BlueData software subscriptions can be ordered together with HPE Apollo server and storage infrastructure. New bundled solutions with BlueData and HPE infrastructure will be made available in a variety of different configurations.
    New HPE Pointnext services offerings are now available for BlueData to streamline the software implementation and ongoing support operations, ensuring success and faster time-to-value for AI and data-driven initiatives.
    A new HPE reference configuration is now available, providing guidance and best practices for deploying BlueData software on the HPE Elastic Platform for Analytics (EPA). This reference configuration will provide customers with a clear template for leveraging modular building blocks from HPE for compute, storage and networking.
    HPE plans to deliver new BlueData-based reference architectures for a variety of different AI/ML and analytics use cases and ecosystem tools, including Apache Kafka, Apache Spark, Cloudera, H2O, and TensorFlow. These reference architectures include in-depth testing and validation from HPE engineering to determine the workload-optimized configuration for high performance and greater efficiency.
    The BlueData team is developing new innovations for the use of containers in large-scale AI and analytics deployments, with ongoing contributions to the Kubernetes open source community.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    insideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

     
  • richardmitnick 10:14 am on May 1, 2019 Permalink | Reply
    Tags: AI, MIT's Sertac Karaman and Vivienne Sze developed the new chip, New chips

    From M.I.T. Technology Review: “This chip was demoed at Jeff Bezos’s secretive tech conference. It could be key to the future of AI.” 

    From M.I.T. Technology Review

    Photographs by Tony Luong

    May 1, 2019
    Will Knight

    Artificial Intelligence

    On a dazzling morning in Palm Springs, California, recently, Vivienne Sze took to a small stage to deliver perhaps the most nerve-racking presentation of her career.

    MIT’s Sertac Karaman and Vivienne Sze developed the new chip

    She knew the subject matter inside-out. She was to tell the audience about the chips, being developed in her lab at MIT, that promise to bring powerful artificial intelligence to a multitude of devices where power is limited, beyond the reach of the vast data centers where most AI computations take place. However, the event—and the audience—gave Sze pause.

    The setting was MARS, an elite, invite-only conference where robots stroll (or fly) through a luxury resort, mingling with famous scientists and sci-fi authors. Just a few researchers are invited to give technical talks, and the sessions are meant to be both awe-inspiring and enlightening. The crowd, meanwhile, consisted of about 100 of the world’s most important researchers, CEOs, and entrepreneurs. MARS is hosted by none other than Amazon’s founder and chairman, Jeff Bezos, who sat in the front row.

    “It was, I guess you’d say, a pretty high-caliber audience,” Sze recalls with a laugh.

    Other MARS speakers would introduce a karate-chopping robot, drones that flap like large, eerily silent insects, and even optimistic blueprints for Martian colonies. Sze’s chips might seem more modest; to the naked eye, they’re indistinguishable from the chips you’d find inside any electronic device. But they are arguably a lot more important than anything else on show at the event.

    New capabilities

    Newly designed chips, like the ones being developed in Sze’s lab, may be crucial to future progress in AI—including stuff like the drones and robots found at MARS. Until now, AI software has largely run on graphical chips, but new hardware could make AI algorithms more powerful, which would unlock new applications. New AI chips could make warehouse robots more common or let smartphones create photo-realistic augmented-reality scenery.

    Sze’s chips are both extremely efficient and flexible in their design, something that is crucial for a field that’s evolving incredibly quickly.

    The microchips are designed to squeeze more out of the “deep learning” AI algorithms that have already turned the world upside down. And in the process, they may inspire those algorithms themselves to evolve. “We need new hardware because Moore’s law has slowed down,” Sze says, referring to the axiom coined by Intel cofounder Gordon Moore that predicted that the number of transistors on a chip will double roughly every 18 months—leading to a commensurate performance boost in computer power.


    This law is now running into the physical limits that come with engineering components at an atomic scale, and that is spurring new interest in alternative architectures and approaches to computing.

    The high stakes that come with investing in next-generation AI chips, and maintaining America’s dominance in chipmaking overall, aren’t lost on the US government. Sze’s microchips are being developed with funding from a Defense Advanced Research Projects Agency (DARPA) program meant to help develop new AI chip designs (see The out-there AI ideas designed to keep the US ahead of China).

    But innovation in chipmaking has been spurred mostly by the emergence of deep learning, a very powerful way for machines to learn to perform useful tasks. Instead of giving a computer a set of rules to follow, a machine basically programs itself. Training data is fed into a large, simulated artificial neural network, which is then tweaked so that it produces the desired result. With enough training, a deep-learning system can find subtle and abstract patterns in data. The technique is applied to an ever-growing array of practical tasks, from face recognition on smartphones to predicting disease from medical images.
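
    That tweaking is gradient descent on a loss function. The single-neuron sketch below shows the whole idea in miniature: feed in examples, compare the outputs with the desired labels, and nudge the weights until they agree.

```python
# A one-neuron "network" trained by gradient descent (plain numpy).
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(256, 3))                            # training examples
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)   # desired results
w = np.zeros(3)                                          # weights to be tweaked

for epoch in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))       # network output (logistic neuron)
    w -= 0.1 * X.T @ (p - y) / len(y)    # nudge weights down the loss gradient

p = 1 / (1 + np.exp(-(X @ w)))
print("training accuracy:", ((p > 0.5) == (y > 0.5)).mean())
```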

    The new chip race

    Deep learning is not so reliant on Moore’s law. Neural nets run many mathematical computations in parallel, so they run far more effectively on the specialized video game graphics chips that perform parallel computations for rendering 3D imagery. But microchips designed specifically for the computations that underpin deep learning should be even more powerful.

    The potential for new chip architectures to improve AI has stirred up a level of entrepreneurial activity that the chip industry hasn’t seen in decades (see The race to power AI’s silicon brains and China has never had a real chip industry. AI may change that).


    Big tech companies hoping to harness and commercialize AI, including Google, Microsoft, and (yes) Amazon, are all working on their own deep learning chips. Many smaller companies are developing new chips, too. “It’s impossible to keep track of all the companies jumping into the AI-chip space,” says Mike Delmer, a microchip analyst at the Linley Group, an analyst firm. “I’m not joking that we learn about a new one nearly every week.”

    The real opportunity, says Sze, isn’t building the most powerful deep-learning chips possible. Power efficiency is important because AI also needs to run beyond the reach of large data centers, relying only on the power available on the device itself. This is known as operating on the “edge.”

    “AI will be everywhere—and figuring out ways to make things more energy efficient will be extremely important,” says Naveen Rao, vice president of the Artificial Intelligence group at Intel.

    For example, Sze’s hardware is more efficient partly because it physically reduces the bottleneck between where data is stored and where it’s analyzed, but also because it uses clever schemes for reusing data. Before joining MIT, Sze pioneered this approach for improving the efficiency of video compression while at Texas Instruments.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     
  • richardmitnick 3:22 pm on March 13, 2019 Permalink | Reply
    Tags: "Quantum computing should supercharge this machine-learning technique", AI, Certain machine-learning tasks could be revolutionized by more powerful quantum computers., , ,   

    From M.I.T. Technology Review: “Quantum computing should supercharge this machine-learning technique” 

    From M.I.T. Technology Review

    March 13, 2019
    Will Knight

    The machine-learning experiment was performed using this IBM Q quantum computer.

    Certain machine-learning tasks could be revolutionized by more powerful quantum computers.

    Quantum computing and artificial intelligence are both hyped ridiculously. But it seems the two may indeed combine to open up new possibilities.

    In a research paper published today in the journal Nature, researchers from IBM and MIT show how an IBM quantum computer can accelerate a specific type of machine-learning task called feature matching. The team says that future quantum computers should allow machine learning to hit new levels of complexity.

    As first imagined decades ago, quantum computers were seen as a different way to compute information. In principle, by exploiting the strange, probabilistic nature of physics at the quantum, or atomic, scale, these machines should be able to perform certain kinds of calculations at speeds far beyond those possible with any conventional computer (see “What is a quantum computer?”). There is a huge amount of excitement about their potential at the moment, as they are finally on the cusp of reaching a point where they will be practical.

    At the same time, because we don’t yet have large quantum computers, it isn’t entirely clear how they will outperform ordinary supercomputers—or, in other words, what they will actually do (see “Quantum computers are finally here. What will we do with them?”).

    Feature matching is a technique that converts data into a mathematical representation that lends itself to machine-learning analysis. The resulting machine learning depends on the efficiency and quality of this process. Using a quantum computer, it should be possible to perform this on a scale that was hitherto impossible.
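
    A classical analogue makes the idea concrete: map the raw data through a feature map into a richer space, then run a simple classifier there. In the quantum version, the mapping would be performed by a quantum circuit whose state space is exponentially large; the sketch below is purely classical and assumes nothing about IBM’s implementation.

```python
# Classical feature mapping: lift 2-D points into a space where a linear
# classifier can separate XOR-like labels. Sketch only.
import numpy as np
from sklearn.svm import SVC

def feature_map(X):
    x1, x2 = X[:, 0], X[:, 1]
    # Hand-built features; a quantum feature map would be a circuit instead.
    return np.column_stack([x1, x2, x1 * x2, x1**2 + x2**2])

rng = np.random.default_rng(7)
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)   # not linearly separable in raw space

clf = SVC(kernel="linear").fit(feature_map(X[:150]), y[:150])
print("held-out accuracy:", clf.score(feature_map(X[150:]), y[150:]))
```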

    The MIT-IBM researchers performed their simple calculation using a two-qubit quantum computer. Because the machine is so small, it doesn’t prove that bigger quantum computers will have a fundamental advantage over conventional ones, but it suggests that would be the case. The largest quantum computers available today have around 50 qubits, although not all of them can be used for computation because of the need to correct for errors that creep in as a result of the fragile nature of these quantum bits.

    “We are still far off from achieving quantum advantage for machine learning,” the IBM researchers, led by Jay Gambetta, write in a blog post. “Yet the feature-mapping methods we’re advancing could soon be able to classify far more complex data sets than anything a classical computer could handle. What we’ve shown is a promising path forward.”

    “We’re at a stage where we don’t have applications next month or next year, but we are in a very good position to explore the possibilities,” says Xiaodi Wu, an assistant professor at the University of Maryland’s Joint Center for Quantum Information and Computer Science. Wu says he expects practical applications to be discovered within a year or two.

    Quantum computing and AI are hot right now. Just a few weeks ago, Xanadu, a quantum computing startup based in Toronto, came up with an almost identical approach to that of the MIT-IBM researchers, which the company posted online. Maria Schuld, a machine-learning researcher at Xanadu, says the recent work may be the start of a flurry of research papers that combine the buzzwords “quantum” and “AI.”

    “There is a huge potential,” she says.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     
  • richardmitnick 2:06 pm on March 12, 2019 Permalink | Reply
    Tags: "Nine companies are steering the future of artificial intelligence", AI,   

    From Science News: “Nine companies are steering the future of artificial intelligence” 

    From Science News

    March 12, 2019
    Maria Temming

    UNDUE INFLUENCE The Big Nine explores how a handful of tech corporations involved in developing artificial intelligence, including Apple, whose headquarters is shown above, play an outsized role in determining the future of society. Uladzik Kryhin/Shutterstock

    The Big Nine
    Amy Webb
    PublicAffairs, $27

    The book highlights warning signs of what happens when we increasingly rely on technology created by corporations that prioritize commercial and political interests over the public. These red flags include mismanagement of users’ personal data (SN Online: 4/15/18) in the United States and a state-sanctioned “social credit” system that monitors people’s behavior in China. Webb generally holds the Big Nine accountable but occasionally pivots to defend the companies, which she believes are led by people with good intentions.

    Readers who aren’t as convinced of the Big Nine’s noble intentions may at least agree with Webb that great power begets great responsibility. The second half of the book details three possible futures through 2069, ranging from a best-case scenario where the Big Nine commit to making user interests the No. 1 priority to a worst-case scenario where the Big Nine continue business as usual.

    Webb’s assessments are based on analyses of patent filings, policy briefings, interviews and other sources. She paints vivid pictures of how AI could benefit the average person, via precision medicine or smarter dating apps, for example, though she primarily focuses on people in the United States. Her forecasts are provocative and unsettlingly plausible.

    Webb closes with a somewhat perfunctory call to action, including predictable steps like reading the terms of service of the G-MAFIA (Webb’s acronym for the six American members of the Big Nine: Google, Microsoft, Amazon, Facebook, IBM and Apple). Unfortunately, The Big Nine may leave many readers feeling less like empowered citizens and more like extras in a film where tech giants and world leaders play the protagonists. But for anyone who wants a preview of how a few tech firms could reshape society in relatively short order, Webb’s account is an accessible, intriguing read.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

     
    • Skyscapes for the Soul 5:13 pm on March 12, 2019 Permalink | Reply

      A couple years ago I met a guy who is the CEO of H2O.ai – which he said was an AI company. I’m surprised they weren’t on the list.


  • richardmitnick 11:43 am on November 5, 2018 Permalink | Reply
    Tags: AI, Toby Walsh Scientia Professor of Artificial Intelligence at UNSW Sydney considers 2062 the year that artificial intelligence will match human intelligence although a fundamental shift has already occu

    From University of New South Wales: “AI will match human intelligence by 2062 claims UNSW expert” 


    From University of New South Wales

    05 Nov 2018
    Lori Youmshajekian

    Scientia Professor Toby Walsh tells the Festival of Dangerous Ideas that Artificial Intelligence is less than 50 years away from matching humans.

    Toby Walsh, Scientia Professor of Artificial Intelligence at UNSW Sydney, delivers a talk at the Festival of Dangerous Ideas. Photo by Yaya Stempler

    The idea that Artificial Intelligence will learn unique human traits like adaptability, creativity and emotional intelligence is something that many in society consider to be an unlikely or distant possibility.

    But Toby Walsh, Scientia Professor of Artificial Intelligence at UNSW Sydney, has put a date on this looming reality. He considers 2062 the year that artificial intelligence will match human intelligence, although a fundamental shift has already occurred in the world as we know it.

    Speaking at the Festival of Dangerous Ideas, Walsh argued that we are already experiencing the risks of artificial intelligence that seem to be so far in the future.

    “Even without machines that are very smart, I’m starting to get a little bit nervous about where it’s going and the important choices we should be making”, Walsh said.

    The key challenge will be to avoid the apocalyptic rhetoric of AI and to determine how to move forward in the new age of information.

    Weapons of mass persuasion

    Privacy concerns about the collection of personal data are nothing new. Citing the Cambridge Analytica scandal, Walsh argues that we should be more sceptical about how data is misused by tech companies.

    “A lot of the debate has focused on how personal information was stolen from people, and we should be rightly outraged by that,” Walsh said.

    “But there is another side to the story that I’m surprised hasn’t gotten as much attention from the media, which is that the information was used very actively to manipulate how people were going to vote.”

    Information is the currency of today’s tech giants, and there is a growing fear that many people are in denial about, or even complicit in, just how much data is collected about them on a daily basis. According to Walsh, breaches of data privacy will occur more often and are becoming increasingly normalised.

    “Many of us have smartwatches that are monitoring our vital signs; our blood pressure, our heartbeat, and if you look at the terms of service, you don’t own that data,” Walsh explained.

    “We’re giving up our analogue privacy, the most personal things about us. Just think what you could do as an advertiser if you could tell how people really respond to your adverts.

    “You can lie about your digital preferences, but you can’t lie about your heartbeat.”

    The ethics of killer robots

    Untangling the ethics of machine accountability will be the second fundamental shift in the world as we know it, according to Walsh.

    “Fully autonomous machines will radically change the nature of warfare and will be the third revolution in warfare,” Walsh said.

    But using autonomous machines as weapons of war poses an ethical dilemma – can you hold a machine accountable for death?

    “Machines have no moral compass, they are not sentient, they don’t suffer pain and they can’t be punished,” Walsh added.

    “This takes us into interesting new legal territory of who should be held responsible, and there is no simple answer.”

    Artificial Intelligence is developed by learning from examples – therefore the key driver of its behaviour is the environment that it is exposed to, more so than the programmer.

    Walsh believes the challenge is creating machines that are aligned with human values, something that is already a problem on platforms driven by Artificial Intelligence.

    “Facebook is an example of the alignment problem, it is optimised for your attention, not for creating political debate or for making society a better place,” Walsh said.

    But it’s not all doom and gloom, according to Walsh. Artificial Intelligence isn’t necessarily heading towards an apocalyptic scenario.

    “The future is not fixed. There is this idea that technology is going to shape our future and that we are going to have to deal with it, but this is the wrong picture to think of because society gets to push back and change the technology,” he said.

    Instead of being proponents of technological determinism, Walsh argued that we need to push for societal determinism, ensuring that we build trustworthy systems with distinct lines of accountability.

    Toby Walsh’s new book, 2062: The World that AI Made, is now available.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    UNSW Campus

    Welcome to UNSW Australia (The University of New South Wales), one of Australia’s leading research and teaching universities. At UNSW, we take pride in the broad range and high quality of our teaching programs. Our teaching gains strength and currency from our research activities, strong industry links and our international nature; UNSW has a strong regional and global engagement.

    In developing new ideas and promoting lasting knowledge we are creating an academic environment where outstanding students and scholars from around the world can be inspired to excel in their programs of study and research. Partnerships with both local and global communities allow UNSW to share knowledge, debate and research outcomes. UNSW’s public events include concert performances, open days and public forums on issues such as the environment, healthcare and global politics. We encourage you to explore the UNSW website so you can find out more about what we do.

     