Tagged: Computer technology

  • richardmitnick 10:51 am on March 10, 2020
    Tags: "Introducing the light-operated hard drives of tomorrow", Computer technology

    From École Polytechnique Fédérale de Lausanne: “Introducing the light-operated hard drives of tomorrow” 

    From École Polytechnique Fédérale de Lausanne

    Tooba Neda Safi

    What do you get when you place a thin film of the perovskite material used in solar cells on top of a magnetic substrate? More efficient hard drive technology. EPFL physicist László Forró and his team pave the way for the future of data storage.

    “The key was to get the technology to work at room temperature,” explains László Forró, EPFL physicist. “We had already known that it was possible to rewrite magnetic spin using light, but you’d have to cool the apparatus to –180 degrees Celsius.”

    Forró, along with his colleagues Bálint Náfrádi and Endre Horváth, succeeded in tuning a ferromagnet at room temperature with visible light, a proof of concept that lays the foundations for a new generation of hard drives that will be physically smaller, faster, and cheaper than today’s commercial hard drives, while requiring less energy. The results are published in PNAS.

    A hard drive is a computer’s data-storage device, capable of holding large amounts of data on a magnetized surface.

    Nowadays, the demand for high-capacity hard drives is greater than ever. Computer users handle large files such as databases, images and video, all of which require large amounts of storage and must be saved and processed as quickly as possible.

    The EPFL scientists used a halide perovskite/oxide perovskite heterostructure in their new method for reversible, light-induced tuning of ferromagnetism at room temperature. Halide perovskites are a novel class of light-absorbing materials.

    As reported in the publication,

    “The rise of digitalization led to an exponential increase in demand for data storage. Mass-storage is resolved by hard-disk drives, HDDs, due to their relatively long lifespan and low price. HDDs use magnetic domains, which are rotated to store and retrieve information. However, an increase in capacity and speed is continuously demanded. We report a method to facilitate the writing of magnetic bits optically. We use a sandwich of a highly light sensitive (MAPbI3) and a ferromagnetic material (LSMO), where illumination of MAPbI3 drives charge carriers into LSMO and decreases its magnetism. This is a viable alternative of the long-sought-after heat-assisted magnetic recording (HAMR) technology, which would heat up the disk material during the writing process.”
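    The mechanism in the passage above, where illuminating the MAPbI3 layer lowers the magnetism of the LSMO underneath so that a bit can be written, can be caricatured in a few lines of code. This is a toy sketch, not the authors' physical model; all the threshold values and names are invented for illustration.

    ```python
    # Toy model of light-assisted magnetic writing (illustrative only).
    # Illumination injects carriers into the ferromagnet, lowering the field
    # needed to flip it, so a weak global field flips only the lit bits.

    DARK_COERCIVITY = 1.0   # arbitrary units: field needed to flip an unlit bit
    LIT_COERCIVITY = 0.3    # reduced threshold while the bit is illuminated
    WRITE_FIELD = 0.5       # weak global field, between the two thresholds

    def write_bits(bits, illuminated, field=WRITE_FIELD, target=1):
        """Flip to `target` only the bits whose coercivity the field exceeds."""
        out = list(bits)
        for i in range(len(out)):
            coercivity = LIT_COERCIVITY if i in illuminated else DARK_COERCIVITY
            if field >= coercivity:
                out[i] = target
        return out

    disk = [0, 0, 0, 0]
    disk = write_bits(disk, illuminated={1, 3})  # light addresses bits 1 and 3
    print(disk)  # only the illuminated bits flip -> [0, 1, 0, 1]
    ```

    The point of the sketch is the addressing scheme: the light, not the magnetic field, selects which bits are written, which is why no disk-heating step (as in HAMR) is needed.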

    The method is still experimental, but it could be used to build the next generation of memory-storage systems, with higher capacities and lower energy demands, and it provides a foundation for the development of a new generation of magneto-optical hard drives. Forró concludes: “We are now looking for investors who would be interested in carrying on the patent application, and for industrial partners to implement this original idea and proof of principle into a product.”

    See the full article here.


    Please help promote STEM in your local schools.

    STEM Education Coalition


    EPFL is Europe’s most cosmopolitan technical university. It receives students, professors and staff from over 120 nationalities. With both a Swiss and international calling, it is therefore guided by a constant wish to open up; its missions of teaching, research and partnership impact various circles: universities and engineering schools, developing and emerging countries, secondary schools and gymnasiums, industry and economy, political circles and the general public.

  • richardmitnick 10:25 am on February 24, 2020
    Tags: "Helix of an Elusive Rare Earth Metal Could Help Push Moore's Law to The Next Level", Computer technology, Rare earth metal Tellurium, The tellurium helix slip neatly inside a nanotube of boron nitride., Transistors

    From Purdue University via Science Alert: “Helix of an Elusive Rare Earth Metal Could Help Push Moore’s Law to The Next Level” 

    From Purdue University



    Science Alert

    23 FEB 2020

    Tellurium helix (Qin et al., Nature Electronics, 2020)

    To cram ever more computing power into your pocket, engineers need to come up with increasingly ingenious ways to add transistors to an already crowded space.

    Unfortunately there’s a limit to how small you can make a wire. But a twisted form of rare earth metal just might have what it takes to push the boundaries a little further.

    A team of researchers funded by the US Army has discovered a way to turn twisted nanowires of one of the rarest of rare earth metals, tellurium, into a material with just the right properties to make an ideal transistor at just a couple of nanometres across.

    “This tellurium material is really unique,” says Peide Ye, an electrical engineer from Purdue University.

    “It builds a functional transistor with the potential to be the smallest in the world.”

    Transistors are the workhorse of anything that computes information, using tiny changes in charge to prevent or allow larger currents to flow.

    Typically made of semiconducting materials, they can be thought of as traffic intersections for electrons. A small voltage change in one place opens the gate for current to flow, serving as both a switch and an amplifier.

    Combinations of open and closed switches are the physical units representing the binary language underpinning logic in computer operations. As such, the more you have in one spot, the more operations you can run.

    Ever since the first chunky transistor was prototyped a little more than 70 years ago, a variety of methods and novel materials have led to regular downsizing of the transistor.

    In fact, the shrinking was so regular that co-founder of the computer giant Intel, Gordon Moore, famously noted in 1965 that it would follow a trend of transistors doubling in density every two years.
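    Moore's observation is just compound doubling, which a one-line sketch makes concrete (the function and its inputs here are illustrative, not figures from the article):

    ```python
    # Back-of-envelope Moore's-law arithmetic: if density doubles every
    # `doubling_period` years, after `years` years it has grown by a factor
    # of 2 ** (years / doubling_period).
    def density(start_density, years, doubling_period=2):
        return start_density * 2 ** (years / doubling_period)

    # Twenty years of doubling every two years is ten doublings: a 1024x gain.
    print(density(1, 20))  # 1024.0
    ```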

    Today, that trend has slowed considerably. For one thing, more transistors in one spot means more heat building up.

    But there are also only so many ways you can shave atoms from a material and still have it function as a transistor. Which is where tellurium comes in.

    Though not exactly a common element in Earth’s crust, it’s a semi-metal in high demand, finding a place in a variety of alloys to improve their hardness and corrosion resistance.

    It also has the properties of a semiconductor, carrying a current under some circumstances and acting as a resistor under others.

    Curious about its characteristics at the nanoscale, engineers grew one-dimensional chains of the element and took a close look at them under an electron microscope. Surprisingly, the super-thin ‘wire’ wasn’t exactly a neat line of atoms.

    “Silicon atoms look straight, but these tellurium atoms are like a snake. This is a very original kind of structure,” says Ye.

    On closer inspection they worked out that the chain was made of pairs of tellurium atoms bonded strongly together, stacking into a crystal that is pulled into a helix by weaker van der Waals forces.

    Building any kind of electronics from a crinkly nanowire is just asking for trouble, so to give the material some structure the researchers went on the hunt for something to encapsulate it in.

    The solution, they found, was a nanotube of boron nitride. Not only did the tellurium helix slip neatly inside, the tube acted as an insulator, ticking all the boxes that would make it suit life as a transistor.

    Most importantly, the whole semiconducting wire was a mere 2 nanometres across, putting it in the same league as the 1 nanometre record set a few years ago.

    Time will tell if the team can squeeze it down further with fewer chains, or even if it will function as expected in a circuit.

    If it works as hoped, it could contribute to the next generation of miniaturised electronics, potentially halving the size of current cutting edge microchips.

    “Next, the researchers will optimise the device to further improve its performance, and demonstrate a highly efficient functional electronic circuit using these tiny transistors, potentially through collaboration with ARL researchers,” says Joe Qiu, program manager for the Army Research Office.

    Even if the concept pans out, there’s a variety of other challenges for shrinking technology to overcome before we’ll find it in our pockets.

    While tellurium isn’t currently considered a scarce resource in spite of its relative rarity, it could come into high demand for future electronics such as solar cells.

    This research was published in Nature Electronics.

    See the full article here.



    Purdue University is a public research university in West Lafayette, Indiana, and the flagship campus of the Purdue University system. The university was founded in 1869 after Lafayette businessman John Purdue donated land and money to establish a college of science, technology, and agriculture in his name. The first classes were held on September 16, 1874, with six instructors and 39 students.

    The main campus in West Lafayette offers more than 200 majors for undergraduates, over 69 master’s and doctoral programs, and professional degrees in pharmacy and veterinary medicine. In addition, Purdue has 18 intercollegiate sports teams and more than 900 student organizations. Purdue is a member of the Big Ten Conference and enrolls the second largest student body of any university in Indiana, as well as the fourth largest foreign student population of any university in the United States.

  • richardmitnick 8:57 am on December 29, 2019
    Tags: "A chip made with carbon nanotubes not silicon marks a computing milestone", Computer technology

    From Science News: “A chip made with carbon nanotubes, not silicon, marks a computing milestone” 

    From Science News

    August 28, 2019
    Maria Temming

    The sun may be setting on silicon. Now, computer chips made with carbon nanotubes (one pictured) are the up-and-comers.
    G. Hills et al/Nature 2019

    “Silicon Valley” may soon be a misnomer.

    Inside a new microprocessor, the transistors — tiny electronic switches that collectively perform computations — are made with carbon nanotubes, rather than silicon. By devising techniques to overcome the nanoscale defects that often undermine individual nanotube transistors (SN: 7/19/17), researchers have created the first computer chip that uses thousands of these switches to run programs.

    The prototype, described in the Aug. 29 Nature, is not yet as speedy or as small as commercial silicon devices. But carbon nanotube computer chips may ultimately give rise to a new generation of faster, more energy-efficient electronics.

    This is “a very important milestone in the development of this technology,” says Qing Cao, a materials scientist at the University of Illinois at Urbana-Champaign not involved in the work.

    The heart of every transistor is a semiconductor component, traditionally made of silicon, which can act either like an electrical conductor or an insulator. A transistor’s “on” and “off” states, where current is flowing through the semiconductor or not, encode the 1s and 0s of computer data (SN: 4/2/13). By building leaner, meaner silicon transistors, “we used to get exponential gains in computing every single year,” says Max Shulaker, an electrical engineer at MIT. But “now performance gains have started to level off,” he says. Silicon transistors can’t get much smaller and more efficient than they already are.
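    The switching behaviour described above is what makes logic possible: treat each transistor as an ideal voltage-controlled switch and you can build any gate. The sketch below is an idealized illustration (the names and the series-switch reading are mine, not from the article):

    ```python
    # Idealized transistor-as-switch logic: a NAND gate's output is pulled low
    # only when both of its series switches conduct (both inputs are 1).
    def nand(a: int, b: int) -> int:
        return 0 if (a and b) else 1

    # NAND is functionally complete: every other gate can be composed from it.
    def not_(a):    return nand(a, a)
    def and_(a, b): return not_(nand(a, b))
    def or_(a, b):  return nand(not_(a), not_(b))

    # Truth table of NAND over all input pairs:
    print([nand(a, b) for a in (0, 1) for b in (0, 1)])  # [1, 1, 1, 0]
    ```

    This is why transistor count matters so directly: more switches on a chip means more gates, and therefore more operations per clock.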

    Because carbon nanotubes are almost atomically thin and ferry electricity so well, they make better semiconductors than silicon. In principle, carbon nanotube processors could run three times faster while consuming about one-third of the energy of their silicon predecessors, Shulaker says. But until now, carbon nanotubes have proved too finicky to construct complex computing systems.

    One issue is that, when a network of carbon nanotubes is deposited onto a computer chip wafer, the tubes tend to bunch together in lumps that prevent the transistor from working. It’s “like trying to build a brick patio, with a giant boulder in the middle of it,” Shulaker says. His team solved that problem by spreading nanotubes on a chip, then using vibrations to gently shake unwanted bundles off the layer of nanotubes.

    A new kind of computer chip (array of chips on the wafer pictured above) contains thousands of transistors made with carbon nanotubes, rather than silicon. Although the current prototypes can’t compete with silicon chips for size or speed yet, carbon nanotube-based computing promises to usher in a new era of even faster, more energy-efficient electronics. G. Hills et al/Nature 2019

    Another problem the team faced is that each batch of semiconducting carbon nanotubes contains about 0.01 percent metallic nanotubes. Since metallic nanotubes can’t properly flip between conductive and insulating states, these tubes can muddle a transistor’s readout.

    In search of a work-around, Shulaker and colleagues analyzed how badly metallic nanotubes affected different transistor configurations, which perform different kinds of operations on bits of data (SN: 10/9/15). The researchers found that defective nanotubes affected the function of some transistor configurations more than others — similar to the way a missing letter can make some words illegible, but leave others mostly readable. So Shulaker and colleagues carefully designed the circuitry of their microprocessor to avoid transistor configurations that were most confused by metallic nanotube glitches.
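    The design idea in the paragraph above, keeping only the gate configurations least disturbed by the occasional metallic tube, can be illustrated with a small Monte Carlo estimate. This is not the paper's actual method; the defect rate is deliberately exaggerated here (the real rate is about 0.01 percent) so the simulation runs quickly, and all parameters are invented:

    ```python
    import random

    # Illustrative sketch: estimate how often a gate configuration still works
    # when each of its nanotubes is metallic with some probability, comparing a
    # design that tolerates no defects against one that tolerates a few.

    METALLIC_FRACTION = 0.01  # exaggerated for a fast simulation

    def works(n_tubes, tolerated, trials=5000, rng=random.Random(0)):
        """Estimated P(configuration works) tolerating `tolerated` bad tubes."""
        ok = 0
        for _ in range(trials):
            defects = sum(rng.random() < METALLIC_FRACTION
                          for _ in range(n_tubes))
            ok += defects <= tolerated
        return ok / trials

    fragile = works(n_tubes=100, tolerated=0)  # fails on any metallic tube
    robust = works(n_tubes=100, tolerated=2)   # shrugs off a couple of them
    print(fragile < robust)  # True: the tolerant configuration fails less often
    ```

    A circuit built only from the "robust" kind of configuration keeps working even though the raw material is imperfect, which is the spirit of the team's circuit-design trick.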

    “One of the biggest things that impressed me about this paper was the cleverness of that circuit design,” says Michael Arnold, a materials scientist at the University of Wisconsin–Madison not involved in the work.

    With over 14,000 carbon nanotube transistors, the resulting microprocessor executed a simple program to write the message, “Hello, world!” — the first program that many newbie computer programmers learn to write.
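    That message is the traditional first program in any language. In Python it is a single line (the nanotube chip, of course, ran it compiled to its own instruction set, not as Python):

    ```python
    # The programmer's classic first program.
    msg = "Hello, world!"
    print(msg)
    ```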

    The newly minted carbon nanotube microprocessor isn’t yet ready to unseat silicon chips as the mainstay of modern electronics. Each one is about a micrometer across, compared with current silicon transistors that are tens of nanometers across. And each carbon nanotube transistor in this prototype can flip on and off about a million times each second, whereas silicon transistors can flicker billions of times per second. That puts these nanotube transistors on par with silicon components produced in the 1980s.

    Shrinking the nanotube transistors would help electricity zip through them with less resistance, allowing the devices to switch on and off more quickly, Arnold says. And aligning the nanotubes in parallel, rather than using a randomly oriented mesh, could also increase the electric current through the transistors to boost processing speed.

    See the full article here.



  • richardmitnick 8:49 am on October 14, 2019
    Tags: "Wrangling big data into real-time actionable intelligence", Actionable intelligence is the next level of data analysis where analysis is put into use for near-real-time decision-making., Computer technology, Developing the science to gather insights from data in nearly real time., Every day there’s about 2.5 quintillion (or 2.5 billion billion) bytes of data generated, Hortonworks Data Platform, We need to know what we want before we build something that gets us what we want., We’re trying to make data discoverable accessible and usable.

    From Sandia Lab: “Wrangling big data into real-time, actionable intelligence” 

    From Sandia Lab

    October 14, 2019
    Kristen Meub

    Social media, cameras, sensors and more generate huge amounts of data that can overwhelm analysts sifting through it all for meaningful, actionable information to provide decision-makers such as political leaders and field commanders responding to security threats.

    Sandia National Laboratories computer scientists Tian Ma, left, and Rudy Garcia, led a project to deliver actionable information from streaming data in nearly real time. (Photo by Randy Montoya)

    Sandia National Laboratories researchers are working to lessen that burden by developing the science to gather insights from data in nearly real time.

    “The amount of data produced by sensors and social media is booming — every day there’s about 2.5 quintillion (or 2.5 billion billion) bytes of data generated,” said Tian Ma, a Sandia computer scientist and project co-lead. “About 90% of all data has been generated in the last two years — there’s more data than we have people to analyze. Intelligence communities are basically overwhelmed, and the problem is that you end up with a lot of data sitting on disks that could get overlooked.”

    Sandia researchers worked with students at the University of Illinois Urbana-Champaign, an Academic Alliance partner, to develop analytical and decision-making algorithms for streaming data sources and integrated them into a nearly real-time distributed data processing framework using big data tools and computing resources at Sandia. The framework takes disparate data from multiple sources and generates usable information that can be acted on in nearly real time.

    To test the framework, the researchers and the students used Chicago traffic data such as images, integrated sensors, tweets and streaming text to successfully measure traffic congestion and suggest faster driving routes around it for a Chicago commuter. The research team selected the Chicago traffic example because the input data has characteristics similar to those typically observed for national security purposes, said Rudy Garcia, a Sandia computer scientist and project co-lead.

    Drowning in data

    “We create data without even thinking about it,” said Laura Patrizi, a Sandia computer scientist and research team member, during a talk at the 2019 United States Geospatial Intelligence Foundation’s GEOINT Symposium. “When we walk around with our phone in our pocket or tweet about horrible traffic, our phone is tracking our location and can attach a geolocation to our tweet.”

    To harness this data avalanche, analysts typically use big data tools and machine learning algorithms to find and highlight significant information, but the process runs on recorded data, Ma said.

    “We wanted to see what can be analyzed with real-time data from multiple data sources, not what can be learned from mining historical data,” Ma said. “Actionable intelligence is the next level of data analysis where analysis is put into use for near-real-time decision-making. Success on this research will have a strong impact to many time-critical national security applications.”

    Building a data processing framework

    The team stacked distributed technologies into a series of data processing pipelines that ingest, curate and index the data. The scientists wrangling the data specified how the pipelines should acquire and clean the data.
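    The ingest, curate, index staging described above can be sketched as three chained steps in a single process. The real system was distributed (Apache Storm topologies feeding Apache Solr, as described below); every field name and record here is invented for illustration:

    ```python
    # Minimal single-process sketch of an ingest -> curate -> index pipeline.

    def ingest(raw_records):
        """Acquire raw records from a source (here, just an iterable)."""
        yield from raw_records

    def curate(records):
        """Normalize records to a common schema; drop malformed ones."""
        for rec in records:
            if "text" in rec and "timestamp" in rec:
                yield {"text": rec["text"].strip().lower(),
                       "timestamp": rec["timestamp"],
                       "source": rec.get("source", "unknown")}

    def index(records):
        """Index curated records by source so events are easy to discover."""
        idx = {}
        for rec in records:
            idx.setdefault(rec["source"], []).append(rec)
        return idx

    raw = [
        {"text": " Heavy traffic on I-90 ", "timestamp": 1, "source": "twitter"},
        {"text": "sensor speed 12 mph", "timestamp": 2, "source": "road_sensor"},
        {"bad": "no usable fields"},  # malformed: dropped during curation
    ]
    indexed = index(curate(ingest(raw)))
    print(sorted(indexed))  # ['road_sensor', 'twitter']
    ```

    Curation does the schema work Garcia describes next: only after each source's records are mapped to one common shape can events from different sources be discovered together.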

    Sandia National Laboratories is turning big data into actionable intelligence. (Illustration by Michael Vittitow)

    “Each type of data we ingest has its own data schema and format,” Garcia said. “In order for the data to be useful, it has to be curated first so it can be easily discovered for an event.”

    Hortonworks Data Platform, running on Sandia’s computers, was used as the software infrastructure for the data processing and analytic pipelines. Within Hortonworks, the team developed and integrated Apache Storm topologies for each data pipeline. The curated data was then stored in Apache Solr, an enterprise search engine and database. PyTorch and Lucidworks’ Banana were used for vehicle object detection and data visualization.

    Finding the right data

    “Bringing in large amounts of data is difficult, but it’s even more challenging to find the information you’re really looking for,” Garcia said. “For example, during the project we would see tweets that say something like ‘Air traffic control has kept us on the ground for the last hour at Midway.’ Traffic is in the tweet, but it’s not relevant to freeway traffic.”

    To determine the level of traffic congestion on a Chicago freeway, ideally the tool could use a variety of data types, including a traffic camera showing flow in both directions, geolocated tweets about accidents, road sensors measuring average speed, satellite imagery of the areas and traffic signs estimating current travel times between mileposts, said Forest Danford, a Sandia computer scientist and research team member.

    “However, we also get plenty of bad data like a web camera image that’s hard to read, and it is rare that we end up with many different data types that are very tightly co-located in time and space,” Danford said. “We needed a mechanism to learn on the 90 million-plus events (related to Chicago traffic) we’ve observed to be able to make decisions based on incomplete or imperfect information.”

    The team added a traffic congestion classifier by training neural networks (computer systems modeled on the human brain) on features extracted from labeled images, tweets and other events that corresponded to the data in time and space. The trained classifier was able to generate predictions on traffic congestion based on operational data at any given time point and location, Danford said.
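    The key property of such a classifier, tolerating incomplete observations while fusing several sources, can be shown with a toy linear scorer. The real system used neural networks trained in PyTorch; the hand-set weights and feature names below are invented purely to illustrate the fusion idea:

    ```python
    # Toy stand-in for a multi-source congestion classifier: a weighted score
    # over whatever features happen to be observed, renormalized so that
    # missing sources (a dead camera, no tweets) don't bias the result.

    WEIGHTS = {"camera_density": 0.6, "tweet_mentions": 0.2,
               "avg_speed_deficit": 0.2}

    def congestion_score(features):
        """Score in [0, 1]; absent sources simply contribute nothing."""
        total, weight_seen = 0.0, 0.0
        for name, w in WEIGHTS.items():
            if name in features:  # tolerate incomplete observations
                total += w * features[name]
                weight_seen += w
        return total / weight_seen if weight_seen else 0.0

    rush_hour = {"camera_density": 0.9, "avg_speed_deficit": 0.8}  # no tweets
    late_night = {"camera_density": 0.1, "tweet_mentions": 0.0,
                  "avg_speed_deficit": 0.05}
    print(congestion_score(rush_hour) > 0.5 > congestion_score(late_night))
    ```

    Renormalizing over the observed weights is one simple answer to Danford's problem of making decisions from imperfect, sparsely co-located data; a trained network learns a far richer version of the same fusion.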

    Professors Minh Do and Ramavarapu Sreenivas and their students at UIUC worked on real-time object and image recognition with web-camera imaging and developed robust route-planning processes based on the various data sources.

    “Developing cogent science for actionable intelligence requires us to grapple with information-based dynamics,” Sreenivas said. “The holy grail here is to solve the specification problem. We need to know what we want before we build something that gets us what we want. This is a lot harder than it looks, and this project is the first step in understanding exactly what we would like to have.”

    Moving forward, the Sandia team is transferring the architecture, analytics and lessons learned in Chicago to other government projects and will continue to investigate analytic tools, make improvements to the Labs’ object recognition model and work to generate meaningful, actionable intelligence.

    “We’re trying to make data discoverable, accessible and usable,” Garcia said. “And if we can do that through these big data architectures, then I think we’re helping.”

    See the full article here.




    Sandia National Laboratories is a multiprogram laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy’s National Nuclear Security Administration. With main facilities in Albuquerque, N.M., and Livermore, Calif., Sandia has major R&D responsibilities in national security, energy and environmental technologies, and economic competitiveness.

  • richardmitnick 9:20 am on October 7, 2019
    Tags: A commercially viable way to manufacture integrated silicon III-V chips with high-performance III-V devices inserted into their design., Computer technology, SMART-Singapore-MIT Alliance for Research and Technology

    From MIT News: “SMART develops a way to commercially manufacture integrated silicon III-V chips” 

    MIT News

    From MIT News

    October 3, 2019
    Singapore-MIT Alliance for Research and Technology

    A LEES researcher reviews a 200 mm Silicon III-V wafer. Photo: SMART

    LEES innovative silicon III-V manufacturing process. Image: SMART

    New method from MIT’s research enterprise in Singapore paves the way for improved optoelectronic and 5G devices.

    The Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, has announced the successful development of a commercially viable way to manufacture integrated silicon III-V chips with high-performance III-V devices inserted into their design.

    In most devices today, silicon-based CMOS chips are used for computing, but they are not efficient for illumination and communications, resulting in low efficiency and heat generation. This is why current 5G mobile devices on the market get very hot upon use and can shut down after a short time.

    This is where III-V semiconductors are valuable. III-V chips are made with compounds including elements in the third and fifth columns of the periodic table, such as gallium nitride (GaN) and indium gallium arsenide (InGaAs). Due to their unique properties, they are exceptionally well-suited for optoelectronics (such as LEDs) and communications (such as 5G wireless), boosting efficiency substantially.

    “By integrating III-V into silicon, we can build upon existing manufacturing capabilities and low-cost volume production techniques of silicon and include the unique optical and electronic functionality of III-V technology,” says Eugene Fitzgerald, CEO and director of SMART and the Merton C. Flemings-SMA Professor of Materials Science and Engineering at MIT. “The new chips will be at the heart of future product innovation and power the next generation of communications devices, wearables, and displays.”

    Kenneth Lee, senior scientific director of the SMART Low Energy Electronic Systems (LEES) research program, adds: “Integrating III-V semiconductor devices with silicon in a commercially viable way is one of the most difficult challenges faced by the semiconductor industry, even though such integrated circuits have been desired for decades. Current methods are expensive and inefficient, which is delaying the availability of the chips the industry needs. With our new process, we can leverage existing capabilities to manufacture these new integrated silicon III-V chips cost-effectively and accelerate the development and adoption of new technologies that will power economies.”

    The new technology developed by SMART builds two layers of silicon and III-V devices on separate substrates and integrates them vertically together within a micron, which is 1/50th the diameter of a human hair. The process can use existing 200 mm manufacturing tools, which will allow semiconductor manufacturers in Singapore and around the world to make new use of their current equipment. Today, the cost of investing in a new manufacturing technology is in the range of tens of billions of dollars; the new integrated circuit platform is highly cost-effective, and will result in much lower-cost novel circuits and electronic systems.

    SMART is focusing on creating new chips for pixelated illumination/display and 5G markets, which has a combined potential market of over $100 billion. Other markets that SMART’s new integrated silicon III-V chips will disrupt include wearable mini-displays, virtual reality applications, and other imaging technologies.

    The patent portfolio has been exclusively licensed by New Silicon Corporation (NSC), a Singapore-based spinoff from SMART. NSC is the first fabless silicon integrated circuit company with proprietary materials, processes, devices, and design for monolithic integrated silicon III-V circuits.

    SMART’s new integrated silicon III-V chips will be available next year and are expected in products by 2021.

    SMART’s LEES Interdisciplinary Research Group is creating new integrated circuit technologies that result in increased functionality, lower power consumption, and higher performance for electronic systems. These integrated circuits of the future will impact applications in wireless communications, power electronics, LED lighting, and displays. LEES has a vertically-integrated research team possessing expertise in materials, devices, and circuits, comprising multiple individuals with professional experience within the semiconductor industry. This ensures that the research is targeted to meet the needs of the semiconductor industry both within Singapore and globally.

    See the full article here.



    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.


  • richardmitnick 12:15 pm on January 30, 2019
    Tags: Computer technology, Intel Xeon W-3175X processor released

    From insideHPC: “Intel steps up with 28 core Xeon Processor for High-End Workstations” 

    From insideHPC

    January 30, 2019

    Intel Corporation has announced the release of the Intel Xeon W-3175X processor in January 2019. The Intel Xeon W-3175X is a 28-core workstation powerhouse built for select, highly-threaded and computing-intensive applications such as architectural and industrial design and professional content creation. (Credit: Tim Herman/Intel Corporation)

    Today Intel announced that their new Intel Xeon W-3175X processor is now available. This unlocked 28-core workstation powerhouse is built for select, highly-threaded and computing-intensive applications such as architectural and industrial design and professional content creation.

    “Built for handling heavily threaded applications and tasks, the Intel Xeon W-3175X processor delivers uncompromising single- and all-core world-class performance for the most advanced professional creators and their demanding workloads. With the most cores and threads, CPU PCIe lanes, and memory capacity of any Intel desktop processor, the Intel Xeon W-3175X processor has the features that matter for massive mega-tasking projects such as film editing and 3D rendering.”

    Other key features and capabilities:

    Intel Mesh Architecture, which delivers low latency and high data bandwidth between CPU cores, cache, memory and I/O while increasing the number of cores per processor – a critical need for the demanding, highly-threaded workloads of creators and experts.
    Intel Extreme Tuning Utility, a precision toolset that helps experienced overclockers optimize their experience with unlocked processors.
    Intel Extreme Memory Profile, which simplifies the overclocking experience by removing the guesswork of memory overclocking.
    Intel AVX-512 ratio offset and memory controller trim voltage control, which allow overclocking frequencies to be optimized regardless of SSE or AVX workloads, and memory overclocking to be maximized.
    Intel Turbo Boost Technology 2.0, which delivers frequencies up to 4.3 GHz.
    Up to 68 platform PCIe lanes, 38.5 MB Intel Smart Cache, and six-channel DDR4 memory support with up to 512 GB at 2666 MHz, with ECC and standard RAS support, to power peripherals and high-speed tools.
    Intel C621 chipset-based systems designed to support the Intel Xeon W-3175X processor, allowing professional content creators to achieve a new level of performance.
    Asetek 690LX-PN all-in-one liquid cooler, a custom solution sold separately by Asetek, which helps ensure the processor runs smoothly at both stock settings and while overclocking.

    The Intel Xeon W-3175X processor is available from system integrators that develop purpose-built desktop workstations.

    Intel Xeon W-3175X Specifications:

    Base Clock Speed (GHz): 3.1
    Intel Turbo Boost Technology 2.0 Maximum Single Core Turbo Frequency (GHz): 4.3
    Cores/Threads: 28/56
    TDP: 255W
    Intel Smart Cache: 38.5 MB
    Unlocked: Yes
    Platform PCIE Lanes: Up to 68
    Memory Support: Six Channels, DDR4-2666
    Standard RAS Support: Yes
    ECC Support: Yes
    RCP Pricing (USD 1K): $2,999
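
    As a sanity check on the specification table, a short Python sketch of the peak theoretical memory bandwidth implied by six channels of DDR4-2666 (sustained bandwidth in practice is considerably lower):

    ```python
    # Peak theoretical DDR4 bandwidth: transfers/s x bus width (8 bytes) x channels.
    # Figures come from the spec table above; real sustained bandwidth is lower.
    transfers_per_sec = 2666e6   # DDR4-2666: 2666 MT/s per channel
    bytes_per_transfer = 8       # each channel is 64 bits wide
    channels = 6

    peak_bw_gb_s = transfers_per_sec * bytes_per_transfer * channels / 1e9
    print(f"Peak theoretical bandwidth: {peak_bw_gb_s:.1f} GB/s")  # ~128 GB/s
    ```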

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

  • richardmitnick 11:04 am on January 2, 2019 Permalink | Reply
    Tags: Computer technology, Physicists record “lifetime” of graphene qubits

    From MIT News: “Physicists record ‘lifetime’ of graphene qubits” 

    MIT News
    MIT Widget

    From MIT News

    December 31, 2018
    Rob Matheson

    Researchers from MIT and elsewhere have recorded the “temporal coherence” of a graphene qubit — how long it maintains a special state that lets it represent two logical states simultaneously — marking a critical step forward for practical quantum computing. Stock image

    First measurement of its kind could provide stepping stone to practical quantum computing.

    Researchers from MIT and elsewhere have recorded, for the first time, the “temporal coherence” of a graphene qubit — meaning how long it can maintain a special state that allows it to represent two logical states simultaneously. The demonstration, which used a new kind of graphene-based qubit, represents a critical step forward for practical quantum computing, the researchers say.

    Superconducting quantum bits (simply, qubits) are artificial atoms that use various methods to produce bits of quantum information, the fundamental component of quantum computers. Similar to traditional binary circuits in computers, qubits can maintain one of two states corresponding to the classic binary bits, a 0 or 1. But these qubits can also be a superposition of both states simultaneously, which could allow quantum computers to solve complex problems that are practically impossible for traditional computers.
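
    The superposition described above can be illustrated with a minimal two-amplitude state vector; this sketch is generic textbook quantum mechanics, not code tied to the graphene device:

    ```python
    import math

    # A qubit state |psi> = alpha|0> + beta|1>; measurement yields 0 with
    # probability |alpha|^2 and 1 with probability |beta|^2.
    alpha = 1 / math.sqrt(2)
    beta = 1 / math.sqrt(2)

    p0, p1 = abs(alpha) ** 2, abs(beta) ** 2
    assert math.isclose(p0 + p1, 1.0)  # probabilities of a valid state sum to 1
    print(f"P(0) = {p0:.2f}, P(1) = {p1:.2f}")  # equal superposition: 0.50 each
    ```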

    The amount of time that these qubits stay in this superposition state is referred to as their “coherence time.” The longer the coherence time, the greater the ability for the qubit to compute complex problems.

    Recently, researchers have been incorporating graphene-based materials into superconducting quantum computing devices, which promise faster, more efficient computing, among other perks. Until now, however, there had been no recorded coherence for these advanced qubits, so there was no way of knowing whether they are feasible for practical quantum computing.

    In a paper published today in Nature Nanotechnology, the researchers demonstrate, for the first time, a coherent qubit made from graphene and exotic materials. These materials enable the qubit to change states through voltage, much like transistors in today’s traditional computer chips — and unlike most other types of superconducting qubits. Moreover, the researchers put a number to that coherence, clocking it at 55 nanoseconds, before the qubit returns to its ground state.
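
    To get a feel for what a 55-nanosecond coherence time means, a common simplification treats coherence as decaying exponentially with a time constant T2. The decay model here is an illustrative assumption; only the 55 ns figure comes from the article:

    ```python
    import math

    T2_NS = 55.0  # reported coherence time of the graphene qubit, in nanoseconds

    def coherence_remaining(t_ns: float) -> float:
        """Fraction of coherence left after t_ns, in a simple exponential model."""
        return math.exp(-t_ns / T2_NS)

    # After one coherence time, roughly 37% of the coherence remains (e^-1);
    # quantum gates must complete well within this window.
    for t in (10, 55, 110):
        print(f"t = {t:>3} ns -> {coherence_remaining(t):.2f}")
    ```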

    The work combined expertise from co-authors William D. Oliver, a physics professor of the practice and Lincoln Laboratory Fellow whose work focuses on quantum computing systems, and Pablo Jarillo-Herrero, the Cecil and Ida Green Professor of Physics at MIT who researches innovations in graphene.

    “Our motivation is to use the unique properties of graphene to improve the performance of superconducting qubits,” says first author Joel I-Jan Wang, a postdoc in Oliver’s group in the Research Laboratory of Electronics (RLE) at MIT. “In this work, we show for the first time that a superconducting qubit made from graphene is temporally quantum coherent, a key requisite for building more sophisticated quantum circuits. Ours is the first device to show a measurable coherence time — a primary metric of a qubit — that’s long enough for humans to control.”

    There are 14 other co-authors, including Daniel Rodan-Legrain, a graduate student in Jarillo-Herrero’s group who contributed equally to the work with Wang; MIT researchers from RLE, the Department of Physics, the Department of Electrical Engineering and Computer Science, and Lincoln Laboratory; and researchers from the Laboratory of Irradiated Solids at the École Polytechnique and the Advanced Materials Laboratory of the National Institute for Materials Science.

    A pristine graphene sandwich

    Superconducting qubits rely on a structure known as a “Josephson junction,” where an insulator (usually an oxide) is sandwiched between two superconducting materials (usually aluminum). In traditional tunable qubit designs, a current loop creates a small magnetic field that causes electrons to hop back and forth between the superconducting materials, causing the qubit to switch states.

    But this flowing current consumes a lot of energy and causes other issues. Recently, a few research groups have replaced the insulator with graphene, an atom-thick layer of carbon that’s inexpensive to mass produce and has unique properties that might enable faster, more efficient computation.

    To fabricate their qubit, the researchers turned to a class of materials called van der Waals materials — atomically thin materials that can be stacked like Legos on top of one another, with little to no resistance or damage. These materials can be stacked in specific ways to create various electronic systems. Despite their near-flawless surface quality, only a few research groups have ever applied van der Waals materials to quantum circuits, and none have previously been shown to exhibit temporal coherence.

    For their Josephson junction, the researchers sandwiched a sheet of graphene between two layers of a van der Waals insulator called hexagonal boron nitride (hBN). Importantly, graphene takes on the superconductivity of the superconducting materials it touches. The selected van der Waals materials can be made to usher electrons around using voltage, instead of the traditional current-based magnetic field. Therefore, so can the graphene — and so can the entire qubit.

    When voltage gets applied to the qubit, electrons bounce back and forth between two superconducting leads connected by graphene, changing the qubit from ground (0) to excited or superposition state (1). The bottom hBN layer serves as a substrate to host the graphene. The top hBN layer encapsulates the graphene, protecting it from any contamination. Because the materials are so pristine, the traveling electrons never interact with defects. This represents the ideal “ballistic transport” for qubits, where a majority of electrons move from one superconducting lead to another without scattering with impurities, making a quick, precise change of states.

    How voltage helps

    The work can help tackle the qubit “scaling problem,” Wang says. Currently, only about 1,000 qubits can fit on a single chip. Having qubits controlled by voltage will be especially important as millions of qubits start being crammed on a single chip. “Without voltage control, you’ll also need thousands or millions of current loops too, and that takes up a lot of space and leads to energy dissipation,” he says.

    Additionally, voltage control means greater efficiency and a more localized, precise targeting of individual qubits on a chip, without “cross talk.” That happens when a little bit of the magnetic field created by the current interferes with a qubit it’s not targeting, causing computation problems.

    For now, the researchers’ qubit has a brief lifetime. For reference, conventional superconducting qubits that hold promise for practical application have documented coherence times of a few tens of microseconds, a few hundred times longer than that of the researchers’ qubit.
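
    That comparison is easy to check numerically; the 20-microsecond value below is an assumed representative point within “a few tens of microseconds,” not a figure from the article:

    ```python
    # Compare a representative conventional superconducting qubit coherence time
    # against the 55 ns reported for the graphene qubit.
    conventional_t2_s = 20e-6  # assumed midpoint of "a few tens of microseconds"
    graphene_t2_s = 55e-9      # reported in the article

    ratio = conventional_t2_s / graphene_t2_s
    print(f"Conventional qubits hold coherence ~{ratio:.0f}x longer")  # ~364x
    ```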

    But the researchers are already addressing several issues that cause this short lifetime, most of which require structural modifications. They’re also using their new coherence-probing method to further investigate how electrons move ballistically around the qubits, with aims of extending the coherence of qubits in general.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

  • richardmitnick 1:07 pm on December 3, 2018 Permalink | Reply
    Tags: Computer technology, MESO devices, Multiferroics

    From UC Berkeley: “New quantum materials could take computing devices beyond the semiconductor era” 

    UC Berkeley

    From UC Berkeley

    December 3, 2018
    Robert Sanders

    MESO devices, based on magnetoelectric and spin-orbit materials, could someday replace the ubiquitous semiconductor transistor, today represented by CMOS. MESO uses up-and-down magnetic spins in a multiferroic material to store binary information and conduct logic operations. (Intel graphic)

    Researchers from Intel Corp. and UC Berkeley are looking beyond current transistor technology and preparing the way for a new type of memory and logic circuit that could someday be in every computer on the planet.

    In a paper appearing online Dec. 3 in advance of publication in the journal Nature, the researchers propose a way to turn relatively new types of materials, multiferroics and topological materials, into logic and memory devices that will be 10 to 100 times more energy-efficient than foreseeable improvements to current microprocessors, which are based on CMOS (complementary metal–oxide–semiconductor).

    The magneto-electric spin-orbit, or MESO, devices will also pack five times more logic operations into the same space as CMOS, continuing the trend toward more computations per unit area, a central tenet of Moore’s Law.

    The new devices will boost technologies that require intense computing power with low energy use, specifically highly automated, self-driving cars and drones, both of which require ever increasing numbers of computer operations per second.

    “As CMOS develops into its maturity, we will basically have very powerful technology options that see us through. In some ways, this could continue computing improvements for another whole generation of people,” said lead author Sasikanth Manipatruni, who leads hardware development for the MESO project at Intel’s Components Research group in Hillsboro, Oregon. MESO was invented by Intel scientists, and Manipatruni designed the first MESO device.

    Transistor technology, invented 70 years ago, is used today in everything from cellphones and appliances to cars and supercomputers. Transistors shuffle electrons around inside a semiconductor and store them as binary bits 0 and 1.

    Single crystals of the multiferroic material bismuth-iron-oxide. The bismuth atoms (blue) form a cubic lattice with oxygen atoms (yellow) at each face of the cube and an iron atom (gray) near the center. The somewhat off-center iron interacts with the oxygen to form an electric dipole (P), which is coupled to the magnetic spins of the atoms (M) so that flipping the dipole with an electric field (E) also flips the magnetic moment. The collective magnetic spins of the atoms in the material encode the binary bits 0 and 1, and allow for information storage and logic operations.

    In the new MESO devices, the binary bits are the up-and-down magnetic spin states in a multiferroic, a material first created in 2001 by Ramamoorthy Ramesh, a UC Berkeley professor of materials science and engineering and of physics and a senior author of the paper.

    “The discovery was that there are materials where you can apply a voltage and change the magnetic order of the multiferroic,” said Ramesh, who is also a faculty scientist at Lawrence Berkeley National Laboratory. “But to me, ‘What would we do with these multiferroics?’ was always a big question. MESO bridges that gap and provides one pathway for computing to evolve.”

    In the Nature paper, the researchers report that they have reduced the voltage needed for multiferroic magneto-electric switching from 3 volts to 500 millivolts, and predict that it should be possible to reduce this to 100 millivolts: one-fifth to one-tenth that required by CMOS transistors in use today. Lower voltage means lower energy use: the total energy to switch a bit from 1 to 0 would be one-tenth to one-thirtieth of the energy required by CMOS.
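
    The link between lower voltage and lower energy follows from capacitive switching, where energy scales roughly as the square of the voltage (E ≈ CV²/2). This sketch uses an arbitrary capacitance, so only the ratios are meaningful, and it compares the multiferroic switching voltages quoted above rather than CMOS directly:

    ```python
    # Switching energy for a fixed capacitance: E = C * V^2 / 2.
    # C is arbitrary here; only the energy ratios between voltages matter.
    C = 1.0  # arbitrary units

    def switch_energy(v_volts: float) -> float:
        return 0.5 * C * v_volts ** 2

    e_3v = switch_energy(3.0)     # earlier multiferroic switching voltage
    e_500mv = switch_energy(0.5)  # demonstrated in the Nature paper
    e_100mv = switch_energy(0.1)  # predicted target

    print(f"500 mV vs 3 V: {e_3v / e_500mv:.0f}x less energy")  # 36x
    print(f"100 mV vs 3 V: {e_3v / e_100mv:.0f}x less energy")  # 900x
    ```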

    “A number of critical techniques need to be developed to allow these new types of computing devices and architectures,” said Manipatruni, who combined the functions of magneto-electrics and spin-orbit materials to propose MESO. “We are trying to trigger a wave of innovation in industry and academia on what the next transistor-like option should look like.”

    Internet of things and AI

    The need for more energy-efficient computers is urgent. The Department of Energy projects that, with the computer chip industry expected to expand to several trillion dollars in the next few decades, energy use by computers could skyrocket from 3 percent of all U.S. energy consumption today to 20 percent, nearly as much as today’s transportation sector. Without more energy-efficient transistors, the incorporation of computers into everything – the so-called internet of things – would be hampered. And without new science and technology, Ramesh said, America’s lead in making computer chips could be upstaged by semiconductor manufacturers in other countries.

    “Because of machine learning, artificial intelligence and IOT, the future home, the future car, the future manufacturing capability is going to look very different,” said Ramesh, who until recently was the associate director for Energy Technologies at Berkeley Lab. “If we use existing technologies and make no more discoveries, the energy consumption is going to be large. We need new science-based breakthroughs.”

    Paper co-author Ian Young, a UC Berkeley Ph.D., started a group at Intel eight years ago, along with Manipatruni and Dmitri Nikonov, to investigate alternatives to transistors, and five years ago they began focusing on multiferroics and spin-orbit materials, so-called “topological” materials with unique quantum properties.

    “Our analysis brought us to this type of material, magneto-electrics, and all roads led to Ramesh,” said Manipatruni.

    Multiferroics and spin-orbit materials

    Multiferroics are materials whose atoms exhibit more than one “collective state.” In ferromagnets, for example, the magnetic moments of all the iron atoms in the material are aligned to generate a permanent magnet. In ferroelectric materials, on the other hand, the positive and negative charges of atoms are offset, creating electric dipoles that align throughout the material and create a permanent electric moment.

    MESO is based on a multiferroic material consisting of bismuth, iron and oxygen (BiFeO₃) that is both magnetic and ferroelectric. Its key advantage, Ramesh said, is that these two states – magnetic and ferroelectric – are linked or coupled, so that changing one affects the other. By manipulating the electric field, you can change the magnetic state, which is critical to MESO.

    The key breakthrough came with the rapid development of topological materials with spin-orbit effect, which allow for the state of the multiferroic to be read out efficiently. In MESO devices, an electric field alters or flips the dipole electric field throughout the material, which alters or flips the electron spins that generate the magnetic field. This capability comes from spin-orbit coupling, a quantum effect in materials, which produces a current determined by electron spin direction.

    In another paper that appeared earlier this month in Science Advances, UC Berkeley and Intel experimentally demonstrated voltage-controlled magnetic switching using the magneto-electric material bismuth-iron-oxide (BiFeO₃), a key requirement for MESO.

    “We are looking for revolutionary and not evolutionary approaches for computing in the beyond-CMOS era,” Young said. “MESO is built around low-voltage interconnects and low-voltage magneto-electrics, and brings innovation in quantum materials to computing.”

    Other co-authors of the Nature paper are Chia-Ching Lin, Tanay Gosavi and Huichu Liu of Intel and Bhagwati Prasad, Yen-Lin Huang and Everton Bonturim of UC Berkeley. The work was supported by Intel.


    Beyond CMOS computing with spin and polarization Nature Physics

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded in the wake of the gold rush by leaders of the newly established 31st state, the University of California’s flagship campus at Berkeley has become one of the preeminent universities in the world. Its early guiding lights, charged with providing education (both “practical” and “classical”) for the state’s people, gradually established a distinguished faculty (with 22 Nobel laureates to date), a stellar research library, and more than 350 academic programs.

    UC Berkeley Seal

  • richardmitnick 12:33 pm on December 3, 2018 Permalink | Reply
    Tags: a major Lab research initiative called “Beyond Moore’s Law”, Computer technology, Ramamoorthy Ramesh

    From Lawrence Berkeley National Lab: “Berkeley Lab Takes a Quantum Leap in Microelectronics” Ramamoorthy Ramesh 

    Berkeley Logo

    From Lawrence Berkeley National Lab

    December 3, 2018
    Julie Chao
    (510) 486-6491

    (Courtesy Ramamoorthy Ramesh)
    A Q&A with Ramamoorthy Ramesh on the need for next-generation computer chips

    Ramamoorthy Ramesh, a Lawrence Berkeley National Laboratory (Berkeley Lab) scientist in the Materials Sciences Division, leads a major Lab research initiative called “Beyond Moore’s Law,” which aims to develop next-generation microelectronics and computing architectures.

    Moore’s Law – which holds that the number of transistors on a chip will double about every two years and has held true in the industry for the last four decades – is coming to an inevitable end as physical limitations are reached. Major innovations will be required to sustain advances in computing. Working with industry leaders, Berkeley Lab’s approach spans fundamental materials discovery, materials physics, device development, algorithms, and systems architecture.
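
    The doubling rule stated above can be expressed as simple compound growth; a minimal sketch of the arithmetic over the four decades mentioned:

    ```python
    # Moore's Law as compound growth: N(t) = N0 * 2**(years / doubling_period).
    def transistor_growth(n0: float, years: float, doubling_period: float = 2.0) -> float:
        """Transistor count after `years`, doubling every `doubling_period` years."""
        return n0 * 2 ** (years / doubling_period)

    # Forty years of doubling every two years is 20 doublings:
    growth = transistor_growth(1.0, 40)
    print(f"40 years at Moore's-Law pace: {growth:,.0f}x")  # 1,048,576x
    ```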

    In collaboration with scientists at Intel Corp., Ramesh proposes a new memory in logic device for replacing or augmenting conventional transistors. The work is detailed in a new Nature paper described in this UC Berkeley news release [blog post will follow]. Here Ramesh discusses the need for a quantum leap in microelectronics and how Berkeley Lab plans to play a role.

    Q. Why is the end of Moore’s Law such an urgent problem?

    If we look around, at the macro level there are two big global phenomena happening in electronics. One is the Internet of Things. It basically means every building, every car, every manufacturing capability is going to be fully accessorized with microelectronics. So, they’re all going to be interconnected. While the exact size of this market (in terms of number of units and their dollar value) is being debated, there is agreement that it is growing rapidly.

    The second big revolution is artificial intelligence/machine learning. This field is in its nascent stages and will find applications in diverse technology spaces. However, these applications are currently limited by the memory wall and the limitations imposed by the efficiency of computing. Thus, we will need more powerful chips that consume much lower energy. Driven by these emerging applications, there is the potential for the microelectronics market to grow exponentially.

    Semiconductors have been progressively shrinking and becoming faster, but they are consuming more and more power. If we don’t do anything to curb their energy consumption, the total energy consumption of microelectronics will jump from 4 percent to about 20 percent of primary energy. As a point of reference, today transportation consumes 24 percent of U.S. energy, manufacturing another 24 percent, and buildings 38 percent; that’s almost 90 percent. This could become almost like transportation. So, we said, that’s a big number. We need to go to a totally new technology and reduce energy consumption by several orders of magnitude.

    Q. So energy consumption is the main driver for the need for semiconductor innovation?

    No, there are two other factors. One is national security. Microelectronics and computing systems are a critical part of our national security infrastructure. And the other is global competitiveness. China has been investing hundreds of billions of dollars into making these fabs. Previously only U.S. companies made them. For two years, the fastest computer in the world was built in China. So this is a strategic issue for the U.S.

    Q. What is Berkeley Lab doing to address the problem?

    Berkeley Lab is pursuing a “co-design” framework using exemplar demonstration pathways. In our co-design framework, the four key components are: (1) computational materials discovery and device-scale modeling (led by Kristin Persson and Lin-wang Wang), (2) materials synthesis and materials physics (led by Peter Fischer), (3) scale-up of synthesis pathways (led by Patrick Naulleau), and (4) circuit architecture and algorithms (led by John Shalf). These components are all working together to identify the key elements of an “attojoule” (10⁻¹⁸ joules) logic-in-memory switch, where attojoule refers to the energy consumption per logic operation.

    One key outcome of the Berkeley Lab co-design framework is to understand the fundamental scientific issues that will impact the attojoule device, which will be about six orders of magnitude lower in energy compared to today’s state-of-the-art CMOS transistors, which work at around 50 picojoules (10⁻¹² joules) per logic operation.

    This paper presents the key elements of a pathway by which such an attojoule switch can be designed and fabricated using magnetoelectric multiferroics and more broadly, using quantum materials. There are still scientific as well as technological challenges.

    Berkeley Lab’s capabilities and facilities are well suited to tackle these challenges. We have nanoscience and x-ray facilities such as the Molecular Foundry and Advanced Light Source, big scientific instruments, which will be critical and allow us to rapidly explore new materials and understand their electronic, magnetic, and chemical properties.

    Another is the Materials Project, which enables discovery of new materials using a computational approach. Plus there is our ongoing work on deep UV lithography, which is carried out under the aegis of the Center for X-Ray Optics. This provides us with a perfect framework to address how we can do device processing at large scales.

    All of this will be done in collaboration with faculty and students at UC Berkeley and our partners in industry, as this paper illustrated.

    Q. What is the timeline?

    It will take a decade. There’s still a lot of work to be done. Your computer today operates at 3 volts. This device in the Nature paper proposes something at 100 millivolts. We need to understand the physics a lot better. That’s why a place like Berkeley Lab is so important.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Bringing Science Solutions to the World

    In the world of science, Lawrence Berkeley National Laboratory (Berkeley Lab) is synonymous with “excellence.” Thirteen Nobel prizes are associated with Berkeley Lab. Seventy Lab scientists are members of the National Academy of Sciences (NAS), one of the highest honors for a scientist in the United States. Thirteen of our scientists have won the National Medal of Science, our nation’s highest award for lifetime achievement in fields of scientific research. Eighteen of our engineers have been elected to the National Academy of Engineering, and three of our scientists have been elected into the Institute of Medicine. In addition, Berkeley Lab has trained thousands of university science and engineering students who are advancing technological innovations across the nation and around the world.

    Berkeley Lab is a member of the national laboratory system supported by the U.S. Department of Energy through its Office of Science. It is managed by the University of California (UC) and is charged with conducting unclassified research across a wide range of scientific disciplines. Located on a 202-acre site in the hills above the UC Berkeley campus that offers spectacular views of the San Francisco Bay, Berkeley Lab employs approximately 3,232 scientists, engineers and support staff. The Lab’s total costs for FY 2014 were $785 million. A recent study estimates the Laboratory’s overall economic impact through direct, indirect and induced spending on the nine counties that make up the San Francisco Bay Area to be nearly $700 million annually. The Lab was also responsible for creating 5,600 jobs locally and 12,000 nationally. The overall economic impact on the national economy is estimated at $1.6 billion a year. Technologies developed at Berkeley Lab have generated billions of dollars in revenues, and thousands of jobs. Savings as a result of Berkeley Lab developments in lighting and windows, and other energy-efficient technologies, have also been in the billions of dollars.

    Berkeley Lab was founded in 1931 by Ernest Orlando Lawrence, a UC Berkeley physicist who won the 1939 Nobel Prize in physics for his invention of the cyclotron, a circular particle accelerator that opened the door to high-energy physics. It was Lawrence’s belief that scientific research is best done through teams of individuals with different fields of expertise, working together. His teamwork concept is a Berkeley Lab legacy that continues today.

    A U.S. Department of Energy National Laboratory Operated by the University of California.

    University of California Seal

    DOE Seal

  • richardmitnick 4:52 pm on June 17, 2016 Permalink | Reply
    Tags: Computer technology, World’s First 1000-Processor Chip

    From UC Davis: “World’s First 1,000-Processor Chip” 

    UC Davis bloc

    UC Davis

    June 17, 2016
    Andy Fell

    This microchip with 1,000 processor cores was designed by graduate students in the UC Davis Department of Electrical and Computer Engineering. The chip is thought to be the fastest designed in a university lab. No image credit.

    A microchip containing 1,000 independent programmable processors has been designed by a team at the University of California, Davis, Department of Electrical and Computer Engineering. The energy-efficient “KiloCore” chip has a maximum computation rate of 1.78 trillion instructions per second and contains 621 million transistors. The KiloCore was presented at the 2016 Symposium on VLSI Technology and Circuits in Honolulu on June 16.

    “To the best of our knowledge, it is the world’s first 1,000-processor chip and it is the highest clock-rate processor ever designed in a university,” said Bevan Baas, professor of electrical and computer engineering, who led the team that designed the chip architecture. While other multiple-processor chips have been created, none exceed about 300 processors, according to an analysis by Baas’ team. Most were created for research purposes and few are sold commercially. The KiloCore chip was fabricated by IBM using their 32 nm CMOS technology.

    Each processor core can run its own small program independently of the others, which is a fundamentally more flexible approach than so-called Single-Instruction-Multiple-Data approaches utilized by processors such as GPUs; the idea is to break an application up into many small pieces, each of which can run in parallel on different processors, enabling high throughput with lower energy use, Baas said.
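
    That decomposition — each core running its own small program on its own piece of the data rather than all cores executing in lock step — can be sketched with ordinary Python. This is only an analogy on a conventional machine (the helper names are invented for illustration; Python threads merely model the decomposition, whereas KiloCore gives each piece a physical core):

    ```python
    from concurrent.futures import ThreadPoolExecutor

    # Each worker runs its own small program on its own piece of the input,
    # MIMD-style; here every piece happens to use the same function, but each
    # worker could just as well run different code.
    def small_program(chunk):
        return sum(x * x for x in chunk)

    def run_decomposed(data, n_workers=4):
        # Break the application into many small pieces, one per worker.
        chunks = [data[i::n_workers] for i in range(n_workers)]
        with ThreadPoolExecutor(max_workers=n_workers) as pool:
            return sum(pool.map(small_program, chunks))

    data = list(range(1000))
    assert run_decomposed(data) == small_program(data)  # same answer as serial
    print(run_decomposed(data))  # 332833500
    ```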

    Because each processor is independently clocked, it can shut itself down to further save energy when not needed, said graduate student Brent Bohnenstiehl, who developed the principal architecture. Cores operate at an average maximum clock frequency of 1.78 GHz, and they transfer data directly to each other rather than using a pooled memory area that can become a bottleneck for data.

    The chip is the most energy-efficient “many-core” processor ever reported, Baas said. For example, the 1,000 processors can execute 115 billion instructions per second while dissipating only 0.7 Watts, low enough to be powered by a single AA battery. The KiloCore chip executes instructions more than 100 times more efficiently than a modern laptop processor.
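
    Those throughput and power figures imply an energy per instruction that is easy to verify with quick arithmetic:

    ```python
    # Energy per instruction = power / instruction rate, using the figures above.
    power_watts = 0.7             # dissipation at the quoted operating point
    instructions_per_sec = 115e9  # 115 billion instructions per second

    joules_per_instruction = power_watts / instructions_per_sec
    print(f"{joules_per_instruction * 1e12:.1f} pJ per instruction")  # ~6.1 pJ
    ```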

    Applications already developed for the chip include wireless coding/decoding, video processing, encryption, and others involving large amounts of parallel data such as scientific data applications and datacenter record processing.

    The team has completed a compiler and automatic program mapping tools for use in programming the chip.

    Additional team members are Aaron Stillmaker, Jon Pimentel, Timothy Andreas, Bin Liu, Anh Tran and Emmanuel Adeagbo, all graduate students at UC Davis. The fabrication was sponsored by the Department of Defense and ARL/ARO Grant W911NF-13-1-0090; with support from NSF Grants 0903549, 1018972, 1321163, and CAREER Award 0546907; and SRC GRC Grants 1971 and 2321.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    UC Davis Campus

    The University of California, Davis, is a major public research university located in Davis, California, just west of Sacramento. It encompasses 5,300 acres of land, making it the second largest UC campus in terms of land ownership, after UC Merced.
