Recent Updates Page 2

  • richardmitnick 3:29 pm on February 18, 2019
    Tags: Video “ESOcast 194: Cutting Edge of Contemporary Astronomy”

    From European Southern Observatory: Video “ESOcast 194: Cutting Edge of Contemporary Astronomy” 


    From European Southern Observatory

    ESOcast 194: Cutting Edge of Contemporary Astronomy – Video

    ESO’s observatories operate a suite of the most advanced ground-based astronomical telescopes in the world, providing researchers with state-of-the-art facilities to study the Universe. Observing time on the telescopes is highly sought-after due to the remarkable detail in which they can capture the sky.

    Every year, ESO receives thousands of observing proposals from researchers across the globe – up to ten times more hours of observations than are actually available. ESO therefore has to decide which cutting-edge astronomical questions should be awarded valuable telescope time.

    In this ESOcast, six of the astronomers who help to make these decisions tell us about the hottest topics in contemporary astronomy. Covering topics ranging from dark matter to exoplanets, these astronomers make the case for why these cutting-edge fields deserve time at ESO’s telescopes.

    You can subscribe to the ESOcasts on iTunes or receive future episodes on YouTube.

    Many other ESOcast episodes are also available.

    See the full article here.



    Please help promote STEM in your local schools.


    Stem Education Coalition

    Visit ESO on Social Media:

    Facebook

    Twitter

    YouTube


    ESO is the foremost intergovernmental astronomy organisation in Europe and the world’s most productive ground-based astronomical observatory by far. It is supported by 16 countries: Austria, Belgium, Brazil, the Czech Republic, Denmark, France, Finland, Germany, Italy, the Netherlands, Poland, Portugal, Spain, Sweden, Switzerland and the United Kingdom, along with the host state of Chile. ESO carries out an ambitious programme focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope, the world’s most advanced visible-light astronomical observatory, and two survey telescopes. VISTA works in the infrared and is the world’s largest survey telescope, and the VLT Survey Telescope is the largest telescope designed to exclusively survey the skies in visible light. ESO is a major partner in ALMA, the largest astronomical project in existence. And on Cerro Armazones, close to Paranal, ESO is building the 39-metre European Extremely Large Telescope, the E-ELT, which will become “the world’s biggest eye on the sky”.

    [Image captions from the original post: ESO/HARPS and the HELIOS solar feed at La Silla; the ESO 3.6 m telescope (with HARPS), the 2.2 m telescope, the NTT, the ExTrA telescopes and the Leiden MASCARA instrument at Cerro La Silla, 600 km north of Santiago de Chile at an altitude of 2,400 metres; the VLT platform, the four laser beams on Unit Telescope 4 (Yepun), VISTA, the Next Generation Transit Survey, the SPECULOOS robotic telescopes and the TAROT telescope at Cerro Paranal, 2,635 metres (8,645 ft) above sea level; the ALMA array and APEX on the Chajnantor plateau at around 5,000 metres; and the E-ELT, to be built atop Cerro Armazones at 3,060 metres (10,040 ft).]

     
  • richardmitnick 2:24 pm on February 18, 2019
    Tags: Rising Temperatures Reduce Colorado River Flow

    From Eos: “Rising Temperatures Reduce Colorado River Flow” 

    From AGU

    From Eos

    2.18.19
    Sarah Stanley

    New research teases out the relative roles of hotter temperatures and declining precipitation in reducing the flow volume of the Colorado River, which feeds Lake Mead, pictured here [and much more]. Credit: John Fleck

    The Colorado River flows through seven U.S. states and northern Mexico before discharging into the Gulf of California. Along the way, it provides drinking water to millions of people and irrigates thousands of square kilometers of cropland. However, although annual precipitation in the region increased by about 1% in the past century, the volume of water flowing down the river has dropped by over 15%.

    New research by Xiao et al. [Water Resources Research] examines the causes behind this 100-year decline in natural flow, teasing out the relative contributions of rising temperatures and changes in precipitation. This work builds on a 2017 paper [Water Resources Research] showing that rising temperatures played a significant role in reduced flows during the Millennium Drought between 2000 and 2014.

    Rising temperatures can lower flow by increasing the amount of water lost to evaporation from soil and surface water, boosting the amount of water used by plants, lengthening the growing season, and shrinking snowpacks that contribute to flow via meltwater.

    To investigate the impact of rising temperatures on Colorado River flow over the past century, the authors of the new paper employed the Variable Infiltration Capacity (VIC) hydrologic model. The VIC model enabled them to simulate 100 years of flow at different locations throughout the vast network of tributaries and subbasins that make up the Colorado River system and to tease out the effects of long-term changes in precipitation and temperature throughout the entire Colorado River.
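    The attribution logic here — run the model with all forcings varying, rerun it with temperature pinned to a baseline, and difference the simulated flows — can be sketched with a toy stand-in for VIC. The flow response, sensitivities and forcing trends below are invented for illustration and are not taken from the study:

```python
# Toy single-forcing attribution experiment (a stand-in for VIC, with
# invented numbers): compare a run where temperature warms against a
# run where it is held at the early-century baseline.

def simulated_flow(precip, temp):
    """Invented toy response: flow scales with precipitation and drops
    ~4% per degree C of warming (made-up sensitivity, not from the paper)."""
    return 100.0 * (precip / 100.0) * (1.0 - 0.04 * (temp - 15.0))

years = range(1920, 2020)
precip = [100.0 - 0.02 * (y - 1920) for y in years]  # invented slight drying
temp = [15.0 + 0.014 * (y - 1920) for y in years]    # invented ~1.4 C warming

obs = [simulated_flow(p, t) for p, t in zip(precip, temp)]  # both forcings vary
fixed = [simulated_flow(p, 15.0) for p in precip]           # temperature held fixed

decline_total = obs[0] - obs[-1]
decline_without_warming = fixed[0] - fixed[-1]
temp_share = (decline_total - decline_without_warming) / decline_total
print(f"warming's share of the simulated decline: {temp_share:.0%}")
```

    The difference between the two runs isolates the temperature contribution; the remainder is attributed to precipitation and other factors.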

    The researchers found that rising temperatures are responsible for 53% of the long-term decline in the river’s flow, with changing precipitation patterns and other factors accounting for the rest. The sizable effects of rising temperatures are largely due to increased evaporation and water uptake by plants, as well as to sublimation of snowpacks.

    Additional simulations with the VIC model showed that warming drove 54% of the decline in flow seen during the Millennium Drought, which began in 2000 (and is ongoing). Flows also declined because precipitation fell on less productive (i.e., more arid) subbasins rather than on highly productive subbasins near the Continental Divide. This contrasts strongly with an earlier (1950s–1960s) drought of similar severity, which was caused almost entirely by below-normal precipitation over most of the basin.

    The authors note that the situation is complex, given different long-term trends and drought response across the basin, as well as seasonal differences in temperature and precipitation. Still, the new findings support an argument from the 2017 research that as global warming progresses, the relative contribution of rising temperatures to decreased Colorado River flow will increase.

    See the full article here.


    Eos is the leading source for trustworthy news and perspectives about the Earth and space sciences and their impact. Its namesake is Eos, the Greek goddess of the dawn, who represents the light shed on understanding our planet and its environment in space by the Earth and space sciences.

     
  • richardmitnick 2:05 pm on February 18, 2019
    Tags: “Can we trust scientific discoveries made using machine learning?”, Machine learning

    From Rice University: “Can we trust scientific discoveries made using machine learning?” 

    Rice U bloc

    From Rice University

    February 18, 2019

    Jeff Falk
    713-348-6775
    jfalk@rice.edu

    Jade Boyd
    713-348-6778
    jadeboyd@rice.edu

    Rice U. expert: Key is creating ML systems that question their own predictions.

    Rice University statistician Genevera Allen says scientists must keep questioning the accuracy and reproducibility of scientific discoveries made by machine-learning techniques until researchers develop new computational systems that can critique themselves.

    Genevera Allen (Photo by Tommy LaVergne/Rice University)

    Allen, associate professor of statistics, computer science and electrical and computer engineering at Rice and of pediatrics-neurology at Baylor College of Medicine, will address the topic in both a press briefing and a general session today at the 2019 Annual Meeting of the American Association for the Advancement of Science (AAAS).

    “The question is, ‘Can we really trust the discoveries that are currently being made using machine-learning techniques applied to large data sets?’” Allen said. “The answer in many situations is probably, ‘Not without checking,’ but work is underway on next-generation machine-learning systems that will assess the uncertainty and reproducibility of their predictions.”

    Machine learning (ML) is a branch of statistics and computer science concerned with building computational systems that learn from data rather than following explicit instructions. Allen said much attention in the ML field has focused on developing predictive models that allow ML to make predictions about future data based on its understanding of data it has studied.

    “A lot of these techniques are designed to always make a prediction,” she said. “They never come back with ‘I don’t know,’ or ‘I didn’t discover anything,’ because they aren’t made to.”
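    The behavior Allen describes — a system that can answer “I don’t know” — can be sketched as a confidence threshold on a model’s predicted probabilities. The model and threshold below are invented toys, not Allen’s actual systems:

```python
import math

# Sketch of a predictor that can abstain: wrap the model with a
# confidence threshold instead of always returning its top guess.

def toy_model(x):
    """Hypothetical two-class model: P(class 'A') via a logistic curve."""
    p_a = 1.0 / (1.0 + math.exp(-x))
    return {"A": p_a, "B": 1.0 - p_a}

def predict_with_abstention(x, threshold=0.9):
    """Return the top class only when the model is confident enough."""
    label, p = max(toy_model(x).items(), key=lambda kv: kv[1])
    return label if p >= threshold else "I don't know"

print(predict_with_abstention(5.0))   # far from the boundary -> "A"
print(predict_with_abstention(0.1))   # ambiguous -> "I don't know"
```

    Most deployed classifiers skip the threshold step and always emit the top label, which is exactly the design choice Allen is questioning.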

    She said uncorroborated data-driven discoveries from recently published ML studies of cancer data are a good example.

    “In precision medicine, it’s important to find groups of patients that have genomically similar profiles so you can develop drug therapies that are targeted to the specific genome for their disease,” Allen said. “People have applied machine learning to genomic data from clinical cohorts to find groups, or clusters, of patients with similar genomic profiles.

    “But there are cases where discoveries aren’t reproducible; the clusters discovered in one study are completely different than the clusters found in another,” she said. “Why? Because most machine-learning techniques today always say, ‘I found a group.’ Sometimes, it would be far more useful if they said, ‘I think some of these are really grouped together, but I’m uncertain about these others.’”
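    The reproducibility problem Allen describes can be made concrete with a stability check: cluster the data, perturb it slightly, re-cluster, and ask whether the same groups come back. This is a deliberately tiny sketch — 1-D data and a minimal 2-means, all invented for illustration; real genomic analyses use far richer methods:

```python
import random

def two_means_1d(xs, iters=50):
    """Minimal 1-D 2-means clustering; returns a 0/1 label per point."""
    c0, c1 = min(xs), max(xs)
    labels = [0] * len(xs)
    for _ in range(iters):
        labels = [0 if abs(x - c0) <= abs(x - c1) else 1 for x in xs]
        g0 = [x for x, lab in zip(xs, labels) if lab == 0] or [c0]
        g1 = [x for x, lab in zip(xs, labels) if lab == 1] or [c1]
        c0, c1 = sum(g0) / len(g0), sum(g1) / len(g1)
    return labels

def pairwise_agreement(a, b):
    """Fraction of point pairs that two clusterings treat the same way
    (together vs apart); invariant to how cluster labels are numbered."""
    agree = total = 0
    for i in range(len(a)):
        for j in range(i + 1, len(a)):
            total += 1
            agree += (a[i] == a[j]) == (b[i] == b[j])
    return agree / total

def stability(data, jitter=0.5):
    """Re-cluster a perturbed copy of the data and measure agreement."""
    base = two_means_1d(data)
    redo = two_means_1d([x + random.gauss(0, jitter) for x in data])
    return pairwise_agreement(base, redo)

random.seed(0)
# Toy 1-D "profiles": two well-separated groups vs pure noise.
clear = [random.gauss(0, 1) for _ in range(30)] + [random.gauss(10, 1) for _ in range(30)]
noise = [random.gauss(0, 1) for _ in range(60)]

clear_stability = stability(clear)   # real groups survive perturbation
noise_stability = stability(noise)   # a 2-way split of noise is arbitrary
print(round(clear_stability, 2), round(noise_stability, 2))
```

    A clustering that only reports groups surviving this kind of perturbation is one way to build the “I’m uncertain about these others” answer Allen calls for.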

    Allen will discuss uncertainty and reproducibility of ML techniques for data-driven discoveries at a 10 a.m. press briefing today, and she will discuss case studies and research aimed at addressing uncertainty and reproducibility in the 3:30 p.m. general session, “Machine Learning and Statistics: Applications in Genomics and Computer Vision.” Both sessions are at the Marriott Wardman Park Hotel.

    Allen is the founding director of Rice’s Center for Transforming Data to Knowledge (D2K Lab) and a member of the Jan and Dan Duncan Neurological Research Institute at Texas Children’s Hospital. Her research lies in the areas of modern multivariate analysis, graphical models, statistical machine learning and data integration, with a particular focus on statistical methods that help scientists make sense of “big data” from high-throughput genomics, neuroimaging and other applications. Her previous honors include a National Science Foundation CAREER award, the International Biometric Society’s Young Statistician Showcase award and Forbes ’30 under 30′ in science and health care.

    AAAS is the world’s largest multi-disciplinary science society, and the AAAS Annual Meeting, Feb. 14-17, is the world’s largest general scientific gathering. For more information, visit: https://aaas.org.

    See the full article here.



    In his 1912 inaugural address, Rice University president Edgar Odell Lovett set forth an ambitious vision for a great research university in Houston, Texas; one dedicated to excellence across the range of human endeavor. With this bold beginning in mind, and with Rice’s centennial approaching, it is time to ask again what we aspire to in a dynamic and shrinking world in which education and the production of knowledge will play an even greater role. What shall our vision be for Rice as we prepare for its second century, and how ought we to advance over the next decade?

    This was the fundamental question posed in the Call to Conversation, a document released to the Rice community in summer 2005. The Call to Conversation asked us to reexamine many aspects of our enterprise, from our fundamental mission and aspirations to the manner in which we define and achieve excellence. It identified the pressures of a constantly changing and increasingly competitive landscape; it asked us to assess honestly Rice’s comparative strengths and weaknesses; and it called on us to define strategic priorities for the future, an effort that will be a focus of the next phase of this process.

     
  • richardmitnick 9:05 pm on February 17, 2019
    Tags: “The Secret History of Women in Coding”

    From The New York Times: Women In STEM-“The Secret History of Women in Coding” 


    From The New York Times

    Feb. 13, 2019
    Clive Thompson

    Computer programming once had much better gender balance than it does today. What went wrong?

    Mary Allen Wilkes with a LINC at M.I.T., where she was a programmer. Credit Joseph C. Towler, Jr.

    As a teenager in Maryland in the 1950s, Mary Allen Wilkes had no plans to become a software pioneer — she dreamed of being a litigator. One day in junior high in 1950, though, her geography teacher surprised her with a comment: “Mary Allen, when you grow up, you should be a computer programmer!” Wilkes had no idea what a programmer was; she wasn’t even sure what a computer was. Relatively few Americans were. The first digital computers had been built barely a decade earlier at universities and in government labs.

    By the time she was graduating from Wellesley College in 1959, she knew her legal ambitions were out of reach. Her mentors all told her the same thing: Don’t even bother applying to law school. “They said: ‘Don’t do it. You may not get in. Or if you get in, you may not get out. And if you get out, you won’t get a job,’ ” she recalls. If she lucked out and got hired, it wouldn’t be to argue cases in front of a judge. More likely, she would be a law librarian, a legal secretary, someone processing trusts and estates.

    But Wilkes remembered her junior high school teacher’s suggestion. In college, she heard that computers were supposed to be the key to the future. She knew that the Massachusetts Institute of Technology had a few of them.


    So on the day of her graduation, she had her parents drive her over to M.I.T. and marched into the school’s employment office. “Do you have any jobs for computer programmers?” she asked. They did, and they hired her.

    It might seem strange now that they were happy to take on a random applicant with absolutely no experience in computer programming. But in those days, almost nobody had any experience writing code. The discipline did not yet really exist; there were vanishingly few college courses in it, and no majors. (Stanford, for example, didn’t create a computer-science department until 1965.) So instead, institutions that needed programmers just used aptitude tests to evaluate applicants’ ability to think logically. Wilkes happened to have some intellectual preparation: As a philosophy major, she had studied symbolic logic, which can involve creating arguments and inferences by stringing together and/or statements in a way that resembles coding.

    Wilkes quickly became a programming whiz. She first worked on the IBM 704, which required her to write in an abstruse “assembly language.”

    An IBM 704 computer, with IBM 727 tape drives and IBM 780 CRT display. (Image courtesy of LLNL.)

    (A typical command might be something like “LXA A, K,” telling the computer to take the number in Location A of its memory and load it into the “Index Register” K.) Even getting the program into the IBM 704 was a laborious affair. There were no keyboards or screens; Wilkes had to write a program on paper and give it to a typist, who translated each command into holes on a punch card. She would carry boxes of commands to an “operator,” who then fed a stack of such cards into a reader. The computer executed the program and produced results, typed out on a printer.

    Often enough, Wilkes’s code didn’t produce the result she wanted. So she had to pore over her lines of code, trying to deduce her mistake, stepping through each line in her head and envisioning how the machine would execute it — turning her mind, as it were, into the computer. Then she would rewrite the program. The capacity of most computers at the time was quite limited; the IBM 704 could handle only about 4,000 “words” of code in its memory. A good programmer was concise and elegant and never wasted a word. They were poets of bits. “It was like working logic puzzles — big, complicated logic puzzles,” Wilkes says. “I still have a very picky, precise mind, to a fault. I notice pictures that are crooked on the wall.”

    What sort of person possesses that kind of mentality? Back then, it was assumed to be women. They had already played a foundational role in the prehistory of computing: During World War II, women operated some of the first computational machines used for code-breaking at Bletchley Park in Britain.

    A Colossus Mark 2 computer being operated by Wrens. The slanted control panel on the left was used to set the “pin” (or “cam”) patterns of the Lorenz. The “bedstead” paper tape transport is on the right.

    Developer: Tommy Flowers, assisted by Sidney Broadhurst, William Chandler and, for the Mark 2 machines, Allen Coombs
    Manufacturer: Post Office Research Station
    Type: Special-purpose electronic digital programmable computer
    Generation: First-generation computer
    Release date: Mk 1: December 1943; Mk 2: 1 June 1944
    Discontinued: 1960

    The Lorenz SZ machines had 12 wheels, each with a different number of cams (or “pins”):

    Wheel number:   1   2   3   4   5   6   7   8   9   10  11  12
    BP wheel name:  ψ1  ψ2  ψ3  ψ4  ψ5  μ37 μ61 χ1  χ2  χ3  χ4  χ5
    Cams (pins):    43  47  51  53  59  37  61  41  31  29  26  23

    Colossus was a set of computers developed by British codebreakers in the years 1943–1945 to help in the cryptanalysis of the Lorenz cipher. Colossus used thermionic valves (vacuum tubes) to perform Boolean and counting operations. Colossus is thus regarded as the world’s first programmable, electronic, digital computer, although it was programmed by switches and plugs and not by a stored program.

    Colossus was designed by research telephone engineer Tommy Flowers to solve a problem posed by mathematician Max Newman at the Government Code and Cypher School (GC&CS) at Bletchley Park. Alan Turing’s use of probability in cryptanalysis (see Banburismus) contributed to its design. It has sometimes been erroneously stated that Turing designed Colossus to aid the cryptanalysis of the Enigma. Turing’s machine that helped decode Enigma was the electromechanical Bombe, not Colossus.

    In the United States, by 1960, according to government statistics, more than one in four programmers were women. At M.I.T.’s Lincoln Labs in the 1960s, where Wilkes worked, she recalls that most of those the government categorized as “career programmers” were female. It wasn’t high-status work — yet.

    In 1961, Wilkes was assigned to a prominent new project, the creation of the LINC.

    LINC from MIT Lincoln Lab


    Wesley Clark in 1962 at a demonstration of the first Laboratory Instrument Computer, or LINC. Credit MIT Lincoln Laboratory

    As one of the world’s first interactive personal computers, it would be a breakthrough device that could fit in a single office or lab. It would even have its own keyboard and screen, so it could be programmed more quickly, without awkward punch cards or printouts. The designers, who knew they could make the hardware, needed Wilkes to help write the software that would let a user control the computer in real time.

    For two and a half years, she and a team toiled away at flow charts, pondering how the circuitry functioned, how to let people communicate with it. “We worked all these crazy hours; we ate all kinds of terrible food,” she says. There was sexism, yes, especially in the disparity between how men and women were paid and promoted, but Wilkes enjoyed the relative comity that existed among the men and women at Lincoln Labs, the sense of being among intellectual peers. “We were a bunch of nerds,” Wilkes says dryly. “We were a bunch of geeks. We dressed like geeks. I was completely accepted by the men in my group.” When they got an early prototype of the LINC working, it solved a fiendish data-processing problem for a biologist, who was so excited that he danced a happy jig around the machine.

    In late 1964, after Wilkes returned from traveling around the world for a year, she was asked to finish writing the LINC’s operating system. But the lab had been relocated to St. Louis, and she had no desire to move there. Instead, a LINC was shipped to her parents’ house in Baltimore. Looming in the front hall near the foot of the stairs, a tall cabinet of whirring magnetic tapes across from a refrigerator-size box full of circuitry, it was an early glimpse of a sci-fi future: Wilkes was one of the first people on the planet to have a personal computer in her home. (Her father, an Episcopal clergyman, was thrilled. “He bragged about it,” she says. “He would tell anybody who would listen, ‘I bet you don’t have a computer in your living room.’ ”) Before long, LINC users around the world were using her code to program medical analyses and even create a chatbot that interviewed patients about their symptoms.

    But even as Wilkes established herself as a programmer, she still craved a life as a lawyer. “I also really finally got to the point where I said, ‘I don’t think I want to do this for the rest of my life,’ ” she says. Computers were intellectually stimulating but socially isolating. In 1972, she applied and got in to Harvard Law School, and after graduating, she spent the next four decades as a lawyer. “I absolutely loved it,” she says.

    Today Wilkes is retired and lives in Cambridge, Mass. White-haired at 81, she still has the precise mannerisms and the ready, beaming smile that can be seen in photos from the ’60s, when she posed, grinning, beside the LINC. She told me that she occasionally gives talks to young students studying computer science. But the industry they’re heading into is, astonishingly, less populated with women — and by many accounts less welcoming to them — than it was in Wilkes’s day. In 1960, when she started working at M.I.T., the proportion of women in computing and mathematical professions (which are grouped together in federal government data) was 27 percent. It reached 35 percent in 1990. But, in the government’s published figures, that was the peak. The numbers fell after that, and by 2013, women were down to 26 percent — below their share in 1960.

    When Wilkes talks to today’s young coders, they are often shocked to learn that women were among the field’s earliest, towering innovators and once a common sight in corporate America. “Their mouths are agape,” Wilkes says. “They have absolutely no idea.”

    Almost 200 years ago, the first person to be what we would now call a coder was, in fact, a woman: Lady Ada Lovelace.

    Ada Lovelace (Augusta Ada Byron), 1843 or 1850, in a rare daguerreotype by Antoine Claudet, probably taken in his studio near Regents Park in London.
    Source: https://blogs.bodleian.ox.ac.uk/adalovelace/2015/10/14/only-known-photographs-of-ada-lovelace-in-bodleian-display/ Reproduction courtesy of Geoffrey Bond.
    Augusta Ada King, Countess of Lovelace (née Byron; 10 December 1815 – 27 November 1852) was an English mathematician and writer, chiefly known for her work on Charles Babbage’s proposed mechanical general-purpose computer, the Analytical Engine [below]. She was the first to recognise that the machine had applications beyond pure calculation, and published the first algorithm intended to be carried out by such a machine. As a result, she is sometimes regarded as the first to recognise the full potential of a “computing machine” and the first computer programmer.

    Analytical Engine was a proposed mechanical general-purpose computer designed by English mathematician and computer pioneer Charles Babbage. It was first described in 1837 as the successor to Babbage’s difference engine

    As a young mathematician in England in 1833, she met Charles Babbage, an inventor who was struggling to design what he called the Analytical Engine, which would be made of metal gears and able to execute if/then commands and store information in memory. Enthralled, Lovelace grasped the enormous potential of a device like this. A computer that could modify its own instructions and memory could be far more than a rote calculator, she realized. To prove it, Lovelace wrote what is often regarded as the first computer program in history, an algorithm with which the Analytical Engine would calculate the Bernoulli sequence of numbers. (She wasn’t shy about her accomplishments: “That brain of mine is something more than merely mortal; as time will show,” she once wrote.) But Babbage never managed to build his computer, and Lovelace, who died of cancer at 36, never saw her code executed.
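    Her target was well chosen: the Bernoulli numbers are defined by a recurrence that a machine executing stored instructions handles naturally. As a modern sketch of the computation her Note G program targeted (not a transcription of Lovelace’s actual algorithm), Python with exact rational arithmetic gives:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Bernoulli numbers B_0..B_n from the classical recurrence
    sum_{k=0}^{m} C(m+1, k) * B_k = 0 for m >= 1, with B_0 = 1."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-s / (m + 1))   # solve the recurrence for B_m
    return B

print(bernoulli(8))
# B_1 = -1/2, B_2 = 1/6, B_4 = -1/30; odd-index terms beyond B_1 vanish
```

    Each new term depends on all earlier ones, which is precisely why the calculation demands the stored intermediate results and repeated operations the Analytical Engine was designed to provide.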


    When digital computers finally became a practical reality in the 1940s, women were again pioneers in writing software for the machines. At the time, men in the computing industry regarded writing code as a secondary, less interesting task. The real glory lay in making the hardware. Software? “That term hadn’t yet been invented,” says Jennifer S. Light, a professor at M.I.T. who studies the history of science and technology.

    This dynamic was at work in the development of the first programmable digital computer in the United States, the Electronic Numerical Integrator and Computer, or Eniac, during the 1940s.

    Computer operators with an Eniac — the world’s first programmable general-purpose computer. Credit Corbis/Getty Images

    ENIAC programming. Columbia University

    Funded by the military, the thing was a behemoth, weighing more than 30 tons and including 17,468 vacuum tubes. Merely getting it to work was seen as the heroic, manly engineering feat. In contrast, programming it seemed menial, even secretarial. Women had long been employed in the scut work of doing calculations. In the years leading up to the Eniac, many companies bought huge electronic tabulating machines — quite useful for tallying up payroll, say — from companies like IBM; women frequently worked as the punch-card operators for these overgrown calculators. When the time came to hire technicians to write instructions for the Eniac, it made sense, to the men in charge, to pick an all-female team: Kathleen McNulty, Jean Jennings, Betty Snyder, Marlyn Wescoff, Frances Bilas and Ruth Lichterman. The men would figure out what they wanted Eniac to do; the women “programmed” it to execute the instructions.

    “We could diagnose troubles almost down to the individual vacuum tube,” Jennings later told an interviewer for the IEEE Annals of the History of Computing. Jennings, who grew up as the tomboy daughter of low-income parents near a Missouri community of 104 people, studied math at college. “Since we knew both the application and the machine, we learned to diagnose troubles as well as, if not better than, the engineer.”

    The Eniac women were among the first coders to discover that software never works right the first time — and that a programmer’s main work, really, is to find and fix the bugs. Their innovations included some of software’s core concepts. Betty Snyder realized that if you wanted to debug a program that wasn’t running correctly, it would help to have a “break point,” a moment when you could stop a program midway through its run. To this day, break points are a key part of the debugging process.

    In 1946, Eniac’s creators wanted to show off the computer to a group of leaders in science, technology and the military. They asked Jennings and Snyder to write a program that calculated missile trajectories. After weeks of intense effort, they and their team had a working program, except for one glitch: It was supposed to stop when the missile landed, but for some reason it kept running. The night before the demo, Snyder suddenly intuited the problem. She went to work early the next day, flipped a single switch inside the Eniac and eliminated the bug. “Betty could do more logical reasoning while she was asleep than most people can do awake,” Jennings later said. Nonetheless, the women got little credit for their work. At that first official demonstration to show off Eniac, the male project managers didn’t mention, much less introduce, the women.

    After the war, as coding jobs spread from the military into the private sector, women remained in the coding vanguard, doing some of the highest-profile work.

    Rear Admiral Grace M. Hopper, 1984

    Grace Brewster Murray Hopper (née Murray; December 9, 1906 – January 1, 1992) was an American computer scientist and United States Navy rear admiral. One of the first programmers of the Harvard Mark I computer, she was a pioneer of computer programming who invented one of the first compiler-related tools. She popularized the idea of machine-independent programming languages, which led to the development of COBOL, an early high-level programming language still in use today.

    The pioneering programmer Grace Hopper is frequently credited with creating the first “compiler,” a program that lets users create programming languages that more closely resemble regular written words: A coder could thus write the English-like code, and the compiler would do the hard work of turning it into ones and zeros for the computer. Hopper also developed the “Flowmatic” language for nontechnical businesspeople. Later, she advised the team that created the Cobol language, which became widely used by corporations. Another programmer from the team, Jean E. Sammet, continued to be influential in the language’s development for decades. Fran Allen was so expert in optimizing Fortran, a popular language for performing scientific calculations, that she became the first female IBM fellow.

    NERSC Hopper Cray XE6 supercomputer

    When the number of coding jobs exploded in the ’50s and ’60s as companies began relying on software to process payrolls and crunch data, men had no special advantage in being hired. As Wilkes had discovered, employers simply looked for candidates who were logical, good at math and meticulous. And in this respect, gender stereotypes worked in women’s favor: Some executives argued that women’s traditional expertise at painstaking activities like knitting and weaving manifested precisely this mind-set. (The 1968 book Your Career in Computers stated that people who like “cooking from a cookbook” make good programmers.)

    The field rewarded aptitude: Applicants were often given a test (typically one involving pattern recognition), hired if they passed it and trained on the job, a process that made the field especially receptive to neophytes. “Know Nothing About Computers? Then We’ll Teach You (and Pay You While Doing So),” one British ad promised in 1965. In a 1957 recruiting pitch in the United States, IBM’s brochure titled My Fair Ladies specifically encouraged women to apply for coding jobs.

    Such was the hunger for programming talent that a young black woman named Arlene Gwendolyn Lee [no photo available] could become one of the early female programmers in Canada, despite the open discrimination of the time. Lee was half of a biracial couple to whom no one would rent, so she needed money to buy a house. According to her son, who has described his mother’s experience in a blog post, Lee showed up at a firm after seeing its ad for data processing and systems analytics jobs in a Toronto newspaper sometime in the early 1960s. Lee persuaded the employers, who were all white, to let her take the coding aptitude test. When she placed in the 99th percentile, the supervisors grilled her with questions before hiring her. “I had it easy,” she later told her son. “The computer didn’t care that I was a woman or that I was black. Most women had it much harder.”

    Elsie Shutt learned to code during her college summers while working for the military at the Aberdeen Proving Ground, an Army facility in Maryland.

    Elsie Shutt founded one of the first software businesses in the U.S. in 1958

    In 1953, while taking time off from graduate school, she was hired to code for Raytheon, where the programmer work force “was about 50 percent men and 50 percent women,” she told Janet Abbate, a Virginia Tech historian and author of the 2012 book Recoding Gender. “And it really amazed me that these men were programmers, because I thought it was women’s work!”

    When Shutt had a child in 1957, state law required her to leave her job; the ’50s and ’60s may have been welcoming to full-time female coders, but firms were unwilling to offer part-time work, even to superb coders. So Shutt founded Computations Inc., a consultancy that produced code for corporations. She hired stay-at-home mothers as part-time employees; if they didn’t already know how to code, she trained them. They cared for their kids during the day, then coded at night, renting time on local computers. “What it turned into was a feeling of mission,” Shutt told Abbate, “in providing work for women who were talented and did good work and couldn’t get part-time jobs.” Business Week called the Computations work force the “pregnant programmers” in a 1963 article illustrated with a picture of a baby in a bassinet in a home hallway, with the mother in the background, hard at work writing software. (The article’s title: Mixing Math and Motherhood.)

    By 1967, there were so many female programmers that Cosmopolitan magazine published an article about The Computer Girls, accompanied by pictures of beehived women at work on computers that evoked the control deck of the U.S.S. Enterprise. The story noted that women could make $20,000 a year doing this work (or more than $150,000 in today’s money). It was the rare white-collar occupation in which women could thrive. Nearly every other highly trained professional field admitted few women; even women with math degrees had limited options: teaching high school math or doing rote calculations at insurance firms.

    “Women back then would basically go, ‘Well, if I don’t do programming, what else will I do?’ ” Janet Abbate says. “The situation was very grim for women’s opportunities.”

    If we want to pinpoint a moment when women began to be forced out of programming, we can look at one year: 1984. A decade earlier, a study revealed that the numbers of men and women who expressed an interest in coding as a career were equal. Men were more likely to enroll in computer-science programs, but women’s participation rose steadily and rapidly through the late ’70s until, by the 1983-84 academic year, 37.1 percent of all students graduating with degrees in computer and information sciences were women. In only one decade, their participation rate more than doubled.

    But then things went into reverse. From 1984 onward, the percentage dropped; by the time 2010 rolled around, it had been cut in half. Only 17.6 percent of the students graduating from computer-science and information-science programs were women.

    One reason for this vertiginous decline has to do with a change in how and when kids learned to program. The advent of personal computers in the late ’70s and early ’80s remade the pool of students who pursued computer-science degrees. Before then, pretty much every student who showed up at college had never touched a computer or even been in the room with one. Computers were rare and expensive devices, available for the most part only in research labs or corporate settings. Nearly all students were on equal footing, in other words, and new to programming.

    Once the first generation of personal computers, like the Commodore 64 or the TRS-80, found their way into homes, teenagers were able to play around with them, slowly learning the major concepts of programming in their spare time.

    Commodore 64

    Radio Shack Tandy TRS80

    By the mid-’80s, some college freshmen were showing up for their first class already proficient as programmers. They were remarkably well prepared for and perhaps even a little jaded about what Computer Science 101 might bring. As it turned out, these students were mostly men, as two academics discovered when they looked into the reasons women’s enrollment was so low.

    Keypunch operators at IBM in Stockholm in the 1930s. Credit IBM

    One researcher was Allan Fisher, then the associate dean of the computer-science school at Carnegie Mellon University. The school established an undergraduate program in computer science in 1988, and after a few years of operation, Fisher noticed that the proportion of women in the major was consistently below 10 percent. In 1994, he hired Jane Margolis, a social scientist who is now a senior researcher in the U.C.L.A. School of Education and Information Studies, to figure out why. Over four years, from 1995 to 1999, she and her colleagues interviewed and tracked roughly 100 undergraduates, male and female, in Carnegie Mellon’s computer-science department; she and Fisher later published the findings in their 2002 book “Unlocking the Clubhouse: Women in Computing.”

    What Margolis discovered was that the first-year students arriving at Carnegie Mellon with substantial experience were almost all male. They had received much more exposure to computers than girls had; for example, boys were more than twice as likely to have been given one as a gift by their parents. And if parents bought a computer for the family, they most often put it in a son’s room, not a daughter’s. Sons also tended to have what amounted to an “internship” relationship with fathers, working through Basic-language manuals with them, receiving encouragement from them; the same wasn’t true for daughters. “That was a very important part of our findings,” Margolis says. Nearly every female student in computer science at Carnegie Mellon told Margolis that her father had worked with her brother — “and they had to fight their way through to get some attention.”

    Their mothers were typically less engaged with computers in the home, they told her. Girls, even the nerdy ones, picked up these cues and seemed to dial back their enthusiasm accordingly. These were pretty familiar roles for boys and girls, historically: Boys were cheered on for playing with construction sets and electronics kits, while girls were steered toward dolls and toy kitchens. It wasn’t terribly surprising to Margolis that a new technology would follow the same pattern as it became widely accepted.

    At school, girls got much the same message: Computers were for boys. Geeky boys who formed computer clubs, at least in part to escape the torments of jock culture, often wound up, whether intentionally or not, reproducing the same exclusionary behavior. (These groups snubbed not only girls but also black and Latino boys.) Such male cliques created “a kind of peer support network,” in Fisher’s words.

    This helped explain why Carnegie Mellon’s first-year classes were starkly divided between the sizable number of men who were already confident in basic programming concepts and the women who were frequently complete neophytes. A cultural schism had emerged. The women started doubting their ability. How would they ever catch up?

    What Margolis heard from students — and from faculty members, too — was that there was a sense in the classroom that if you hadn’t already been coding obsessively for years, you didn’t belong. The “real programmer” was the one who “had a computer-screen tan from being in front of the monitor all the time,” as Margolis puts it. “The idea was, you just have to love being with a computer all the time, and if you don’t do it 24/7, you’re not a ‘real’ programmer.” The truth is, many of the men themselves didn’t fit this monomaniacal stereotype. But there was a double standard: While it was O.K. for the men to want to engage in various other pursuits, women who expressed the same wish felt judged for not being “hard core” enough. By the second year, many of these women, besieged by doubts, began dropping out of the program. (The same was true for the few black and Latino students who also arrived on campus without teenage programming experience.)

    A similar pattern took hold at many other campuses. Patricia Ordóñez, a first-year student at Johns Hopkins University in 1985, enrolled in an Introduction to Minicomputers course. She had been a math whiz in high school but had little experience in coding; when she raised her hand in class at college to ask a question, many of the other students who had spent their teenage years programming — and the professor — made her feel singled out. “I remember one day he looked at me and said, ‘You should already know this by now,’ ” she told me. “I thought, I’m never going to succeed.” She switched majors as a result.

    Yet a student’s decision to stick with or quit the subject did not seem to be correlated with coding talent. Many of the women who dropped out were getting perfectly good grades, Margolis learned. Indeed, some who left had been top students. And the women who did persist and made it to the third year of their program had by then generally caught up to the teenage obsessives. The degree’s coursework was, in other words, a leveling force. Learning Basic as a teenage hobby might lead to lots of fun and useful skills, but the pace of learning at college was so much more intense that by the end of the degree, everyone eventually wound up graduating at roughly the same levels of programming mastery.

    An E.R.A./Univac 1103 computer in the 1950s. Credit Hum Images/Alamy

    “It turned out that having prior experience is not a great predictor, even of academic success,” Fisher says. Ordóñez’s later experience illustrates exactly this: After changing majors at Johns Hopkins, she later took night classes in coding and eventually got a Ph.D. in computer science in her 30s; today, she’s a professor at the University of Puerto Rico Río Piedras, specializing in data science.

    By the ’80s, the early pioneering work done by female programmers had mostly been forgotten. In contrast, Hollywood was putting out precisely the opposite image: Computers were a male domain. In hit movies like Revenge of the Nerds, Weird Science, Tron, WarGames and others, the computer nerds were nearly always young white men. Video games, a significant gateway activity that led to an interest in computers, were pitched far more often at boys, as research in 1985 by Sara Kiesler [Psychology of Women Quarterly], a professor at Carnegie Mellon, found. “In the culture, it became something that guys do and are good at,” says Kiesler, who is also a program manager at the National Science Foundation. “There were all kinds of things signaling that if you don’t have the right genes, you’re not welcome.”

    A 1983 study involving M.I.T. students produced equally bleak accounts. Women who raised their hands in class were often ignored by professors and talked over by other students. They would be told they weren’t aggressive enough; if they challenged other students or contradicted them, they heard comments like “You sure are bitchy today — must be your period.” Behavior in some research groups “sometimes approximates that of the locker room,” the report concluded, with men openly rating how “cute” their female students were. (“Gee, I don’t think it’s fair that the only two girls in the group are in the same office,” one said. “We should share.”) Male students mused about women’s mediocrity: “I really don’t think the woman students around here are as good as the men,” one said.

    By then, as programming enjoyed its first burst of cultural attention, so many students were racing to enroll in computer science that universities ran into a supply problem: They didn’t have enough professors to teach everyone. Some added hurdles, courses that students had to pass before they could be accepted into the computer-science major. Punishing workloads and classes that covered the material at a lightning pace weeded out those who didn’t get it immediately. All this fostered an environment in which the students most likely to get through were those who had already been exposed to coding — young men, mostly. “Every time the field has instituted these filters on the front end, that’s had the effect of reducing the participation of women in particular,” says Eric S. Roberts, a longtime professor of computer science, now at Reed College, who first studied this problem and called it the “capacity crisis.”

    When computer-science programs began to expand again in the mid-’90s, coding’s culture was set. Most of the incoming students were men. The interest among women never recovered to the levels reached in the late ’70s and early ’80s. And the women who did show up were often isolated. In a room of 20 students, perhaps five or even fewer might be women.

    In 1991, Ellen Spertus, now a computer scientist at Mills College, published a report on women’s experiences in programming classes. She cataloged a landscape populated by men who snickered about the presumed inferiority of women and by professors who told female students that they were “far too pretty” to be studying electrical engineering; when some men at Carnegie Mellon were asked to stop using pictures of naked women as desktop wallpaper on their computers, they angrily complained that it was censorship of the sort practiced by “the Nazis or the Ayatollah Khomeini.”

    As programming was shutting its doors to women in academia, a similar transformation was taking place in corporate America. The emergence of what would be called “culture fit” was changing the who, and the why, of the hiring process. Managers began picking coders less on the basis of aptitude and more on how well they fit a personality type: the acerbic, aloof male nerd.

    The shift actually began far earlier, back in the late ’60s, when managers recognized that male coders shared a growing tendency to be antisocial isolates, lording their arcane technical expertise over that of their bosses. Programmers were “often egocentric, slightly neurotic,” as Richard Brandon, a well-known computer-industry analyst, put it in an address at a 1968 conference, adding that “the incidence of beards, sandals and other symptoms of rugged individualism or nonconformity are notably greater among this demographic.”

    In addition to testing for logical thinking, as in Mary Allen Wilkes’s day, companies began using personality tests to select specifically for these sorts of caustic loner qualities. “These became very powerful narratives,” says Nathan Ensmenger, a professor of informatics at Indiana University, who has studied [Gender and Computing] this transition. The hunt for that personality type cut women out. Managers might shrug and accept a man who was unkempt, unshaven and surly, but they wouldn’t tolerate a woman who behaved the same way. Coding increasingly required late nights, but managers claimed that it was too unsafe to have women working into the wee hours, so they forbade them to stay late with the men.

    At the same time, the old hierarchy of hardware and software became inverted. Software was becoming a critical, and lucrative, sector of corporate America. Employers increasingly hired programmers whom they could envision one day ascending to key managerial roles in programming. And few companies were willing to put a woman in charge of men. “They wanted people who were more aligned with management,” says Marie Hicks, a historian at the Illinois Institute of Technology. “One of the big takeaways is that technical skill does not equate to success.”

    By the 1990s and 2000s, the pursuit of “culture fit” was in full force, particularly at start-ups, which involve a relatively small number of people typically confined to tight quarters for long hours. Founders looked to hire people who were socially and culturally similar to them.

    “It’s all this loosey-goosey ‘culture’ thing,” says Sue Gardner, former head of the Wikimedia Foundation, the nonprofit that hosts Wikipedia and other sites. After her stint there, Gardner decided to study why so few women were employed as coders. In 2014, she surveyed more than 1,400 women in the field and conducted sit-down interviews with scores more. It became clear to her that the occupation’s takeover by men in the ’90s had turned into a self-perpetuating cycle. Because almost everyone in charge was a white or Asian man, that was the model for whom to hire; managers recognized talent only when it walked and talked as they did. For example, many companies have relied on whiteboard challenges when hiring a coder — a prospective employee is asked to write code, often a sorting algorithm, on a whiteboard while the employers watch. This sort of thing bears almost no resemblance to the work coders actually do in their jobs. But whiteboard questions resemble classroom work at Ivy League institutions. It feels familiar to the men doing the hiring, many of whom are only a few years out of college. “What I came to realize,” Gardner says, “is that it’s not that women are excluded. It’s that practically everyone is excluded if you’re not a young white or Asian man who’s single.”

    One coder, Stephanie Hurlburt, was a stereotypical math nerd who had deep experience working on graphics software. “I love C++, the low-level stuff,” she told me, referring to a complex language known for allowing programmers to write very fast-running code, useful in graphics. Hurlburt worked for a series of firms this decade, including Unity (which makes popular software for designing games), and then for Facebook on its Oculus Rift VR headset, grinding away for long hours in the run-up to the release of its first demo. Hurlburt became accustomed to shrugging off negative attention and crude sexism. She heard, including from many authority figures she admired, that women weren’t wired for math. While working as a coder, if she expressed ignorance of any concept, no matter how trivial, male colleagues would disparage her. “I thought you were at a higher math level,” one sniffed.

    In 2016, Hurlburt and a friend, Rich Geldreich, founded a start-up called Binomial, where they created software that helps compress the size of “textures” in graphics-heavy software. Being self-employed, she figured, would mean not having to deal with belittling bosses. But when she and Geldreich went to sell their product, some customers assumed that she was just the marketing person. “I don’t know how you got this product off the ground when you only have one programmer!” she recalls one client telling Geldreich.

    In 2014, an informal analysis by a tech entrepreneur and former academic named Kieran Snyder of 248 corporate performance reviews for tech engineers determined that women were considerably more likely than men to receive reviews with negative feedback; men were far more likely to get reviews that had only constructive feedback, with no negative material. In a 2016 experiment conducted by the tech recruiting firm Speak With a Geek, 5,000 résumés with identical information were submitted to firms. When identifying details were removed from the résumés, 54 percent of the women received interview offers; when gendered names and other biographical information were given, only 5 percent of them did.

    Lurking beneath some of this sexist atmosphere is the phantasm of sociobiology. As this line of thinking goes, women are less suited to coding than men because biology better endows men with the qualities necessary to excel at programming. Many women who work in software face this line of reasoning all the time. Cate Huston, a software engineer at Google from 2011 to 2014, heard it from colleagues there when they pondered why such a low percentage of the company’s programmers were women. Peers would argue that Google hired only the best — that if women weren’t being hired, it was because they didn’t have enough innate logic or grit, she recalls.

    In the summer of 2017, a Google employee named James Damore suggested in an internal email that several qualities more commonly found in women — including higher rates of anxiety — explained why they weren’t thriving in a competitive world of coding; he cited the cognitive neuroscientist Simon Baron-Cohen, who theorizes that the male brain is more likely to be “systemizing,” compared with women’s “empathizing” brains. Google fired Damore, saying it could not employ someone who would argue that his female colleagues were inherently unsuited to the job. But on Google’s internal boards, other male employees backed up Damore, agreeing with his analysis. The assumption that the makeup of the coding work force reflects a pure meritocracy runs deep among many Silicon Valley men; for them, sociobiology offers a way to explain things, particularly for the type who prefers to believe that sexism in the workplace is not a big deal, or even doubts it really exists.

    But if biology were the reason so few women are in coding, it would be impossible to explain why women were so prominent in the early years of American programming, when the work could be, if anything, far harder than today’s programming. It was an uncharted new field, in which you had to do math in binary and hexadecimal formats, and there were no helpful internet forums, no Google to query, for assistance with your bug. It was just your brain in a jar, solving hellish problems.
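    The binary and hexadecimal arithmetic that early programmers did by hand, with no tools to check their work, is the kind of conversion a modern language does in one call. A quick sketch (the value 157 is an arbitrary example):

```python
# Converting a number to the binary and hexadecimal forms early
# programmers worked in directly, and converting back again.
n = 157
print(bin(n))  # '0b10011101'  (128 + 16 + 8 + 4 + 1)
print(hex(n))  # '0x9d'        (9 * 16 + 13)

# The reverse direction, from a written-out representation to a value:
assert int("10011101", 2) == 157
assert int("9d", 16) == 157
```

    Doing sums, address offsets and bit manipulations in these notations, on paper, was routine work for the programmers the article describes.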

    If biology limited women’s ability to code, then the ratio of women to men in programming ought to be similar in other countries. It isn’t. In India, roughly 40 percent of the students studying computer science and related fields are women. This is despite even greater barriers to becoming a female coder there; India has such rigid gender roles that female college students often have an 8 p.m. curfew, meaning they can’t work late in the computer lab, as the social scientist Roli Varma learned when she studied them in 2015. The Indian women had one big cultural advantage over their American peers, though: They were far more likely to be encouraged by their parents to go into the field, Varma says. What’s more, the women regarded coding as a safer job because it kept them indoors, lessening their exposure to street-level sexual harassment. It was, in other words, considered normal in India that women would code. The picture has been similar in Malaysia, where in 2001 — precisely when the share of American women in computer science had slid into a trough — women represented 52 percent of the undergraduate computer-science majors and 39 percent of the Ph.D. candidates at the University of Malaya in Kuala Lumpur.

    Today, when midcareer women decide that Silicon Valley’s culture is unlikely to change, many simply leave the industry. When Sue Gardner surveyed those 1,400 women in 2014, they told her the same story: In the early years, as junior coders, they looked past the ambient sexism they encountered. They loved programming and were ambitious and excited by their jobs. But over time, Gardner says, “they get ground down.” As they rose in the ranks, they found few, if any, mentors. Nearly two-thirds either experienced or witnessed harassment, she read in “The Athena Factor” (a 2008 study of women in tech); in Gardner’s survey, one-third reported that their managers were more friendly toward and gave more support to their male co-workers. It’s often assumed that having children is the moment when women are sidelined in tech careers, as in many others, but Gardner discovered that wasn’t often the breaking point for these women. They grew discouraged seeing men with no better or even lesser qualifications get superior opportunities and treatment.

    “What surprised me was that they felt, ‘I did all that work!’ They were angry,” Gardner says. “It wasn’t like they needed a helping hand or needed a little extra coaching. They were mad. They were not leaving because they couldn’t hack it. They were leaving because they were skilled professionals who had skills that were broadly in demand in the marketplace, and they had other options. So they’re like, ‘[expletive] it — I’ll go somewhere where I’m seen as valuable.’ ”

    The result is an industry that is drastically more male than it was decades ago, and far more so than the workplace at large. In 2018, according to data from the Bureau of Labor Statistics, about 26 percent of the workers in “computer and mathematical occupations” were women. The percentages for people of color are similarly low: Black employees were 8.4 percent, Latinos 7.5 percent. (The Census Bureau’s American Community Survey put black coders at only 4.7 percent in 2016.) In the more rarefied world of the top Silicon Valley tech firms, the numbers are even more austere: A 2017 analysis by Recode, a news site that covers the technology industry, revealed that 20 percent of Google’s technical employees were women, while only 1 percent were black and 3 percent were Hispanic. Facebook was nearly identical; the numbers at Twitter were 15 percent, 2 percent and 4 percent, respectively.

    The reversal has been profound. In the early days of coding, women flocked to programming because it offered more opportunity and reward for merit than fields like law. Now it is software that has closed its doors.

    In the late 1990s, Allan Fisher decided that Carnegie Mellon would try to address the male-female imbalance in its computer-science program. Prompted by Jane Margolis’s findings, Fisher and his colleagues instituted several changes. One was the creation of classes that grouped students by experience: The kids who had been coding since youth would start on one track; the newcomers to coding would have a slightly different curriculum, allowing them more time to catch up. Carnegie Mellon also offered extra tutoring to all students, which was particularly useful for the novice coders. If Fisher could get them to stay through the first and second years, he knew, they would catch up to their peers.

    Components from four of the earliest electronic computers, held by Patsy Boyce Simmers, Gail Taylor, Millie Beck and Norma Stec, employees at the United States Army’s Ballistics Research Laboratory. Credit Science Source

    They also modified the courses in order to show how code has impacts in the real world, so a new student’s view of programming wouldn’t just be an endless vista of algorithms disconnected from any practical use. Fisher wanted students to glimpse, earlier on, what it was like to make software that works its way into people’s lives. Back in the ’90s, before social media and even before the internet had gone mainstream, the influence that code could have on daily life wasn’t so easy to see.

    Faculty members, too, adopted a different perspective. For years some had tacitly endorsed the idea that the students who came in already knowing code were born to it. Carnegie Mellon “rewarded the obsessive hacker,” Fisher told me. But the faculty now knew that their assumptions weren’t true; they had been confusing previous experience with raw aptitude. They still wanted to encourage those obsessive teenage coders, but they had come to understand that the neophytes were just as likely to bloom rapidly into remarkable talents and deserved as much support. “We had to broaden how faculty sees what a successful student looks like,” he says. The admissions process was adjusted, too; it no longer gave as much preference to students who had been teenage coders.

    No single policy changed things. “There’s really a virtuous cycle,” Fisher says. “If you make the program accommodate people with less experience, then people with less experience come in.” Faculty members became more used to seeing how green coders evolve into accomplished ones, and they learned how to teach that type.

    Carnegie Mellon’s efforts were remarkably successful. Only a few years after these changes, the percentage of women entering its computer-science program boomed, rising to 42 percent from 7 percent; graduation rates for women rose to nearly match those of the men. The school vaulted over the national average. Other schools concerned about the low number of female students began using approaches similar to Fisher’s. In 2006, Harvey Mudd College tinkered with its Introduction to Computer Science course, creating a track specifically for novices, and rebranded it as Creative Problem Solving in Science and Engineering Using Computational Approaches — which, the institution’s president, Maria Klawe, told me, “is actually a better description of what you’re actually doing when you’re coding.” By 2018, 54 percent of Harvey Mudd’s graduates who majored in computer science were women.

    A broader cultural shift has accompanied the schools’ efforts. In the last few years, women’s interest in coding has begun rapidly rising throughout the United States. In 2012, the percentage of female undergraduates who plan to major in computer science began to rise at rates not seen for 35 years [Computing Research News], since the decline in the mid-’80s, according to research by Linda Sax, an education professor at U.C.L.A. There has also been a boomlet of groups and organizations training and encouraging underrepresented cohorts to enter the field, like Black Girls Code and Code Newbie. Coding has come to be seen, in purely economic terms, as a bastion of well-paying and engaging work.

    In an age when Instagram and Snapchat and iPhones are part of the warp and weft of life’s daily fabric, potential coders worry less that the job will be isolated, antisocial and distant from reality. “Women who see themselves as creative or artistic are more likely to pursue computer science today than in the past,” says Sax, who has pored over decades of demographic data about the students in STEM fields. They’re still less likely to go into coding than other fields, but programming is increasingly on their horizon. This shift is abetted by the fact that it’s much easier to learn programming without getting a full degree, through free online coding schools, relatively cheaper “boot camps” or even meetup groups for newcomers — opportunities that have emerged only in the last decade.

    Changing the culture at schools is one thing. Most female veterans of code I’ve spoken to say that what is harder is shifting the culture of the industry at large, particularly the reflexive sexism and racism still deeply ingrained in Silicon Valley. Some, like Sue Gardner, sometimes wonder if it’s even ethical to encourage young women to go into tech. She fears they’ll pour out of computer-science programs in increasing numbers, arrive at their first coding jobs excited, thrive early on, but then gradually get beaten down by the industry. “The truth is, we can attract more and different people into the field, but they’re just going to hit that wall in midcareer, unless we change how things happen higher up,” she says.

    On a spring weekend in 2017, more than 700 coders and designers were given 24 hours to dream up and create a new product at a hackathon in New York hosted by TechCrunch, a news site devoted to technology and Silicon Valley. At lunchtime on Sunday, the teams presented their creations to a panel of industry judges, in a blizzard of frantic elevator pitches. There was Instagrammie, a robot system that would automatically recognize the mood of an elderly relative or a person with limited mobility; there was Waste Not, an app to reduce food waste. Most of the contestants were coders who worked at local high-tech firms or computer-science students at nearby universities.

    Despite women’s historical role in the vanguard of computer programming, some female veterans of code wonder if it’s even ethical to encourage young women to go into tech because of the reflexive sexism in the current culture of Silicon Valley. Credit: Apic/Getty Images

    The winning team, though, was a trio of high school girls from New Jersey: Sowmya Patapati, Akshaya Dinesh and Amulya Balakrishnan. In only 24 hours, they created reVIVE, a virtual-reality app that tests children for signs of A.D.H.D. After the students were handed their winnings onstage — a trophy-size check for $5,000 — they flopped into chairs in a nearby room to recuperate. They had been coding almost nonstop since noon the day before and were bleary with exhaustion.

    “Lots of caffeine,” Balakrishnan, 17, said, laughing. She wore a blue T-shirt that read WHO HACK THE WORLD? GIRLS. The girls told me that they had impressed even themselves by how much they accomplished in 24 hours. “Our app really does streamline the process of detecting A.D.H.D.,” said Dinesh, who was also 17. “It usually takes six to nine months to diagnose, and thousands of dollars! We could do it digitally in a much faster way!”

    They all became interested in coding in high school, each of them with strong encouragement from immigrant parents. Balakrishnan’s parents worked in software and medicine; Dinesh’s parents came to the United States from India in 2000 and worked in information technology. Patapati immigrated from India as an infant with her young mother, who never went to college, and her father, an information-tech worker who was the first in his rural family to go to college.

    Drawn to coding in high school, the young hackers got used to being the lone girl nerds at school, as Dinesh told me.

    “I tried so hard to get other girls interested in computer science, and it was like, the interest levels were just so low,” she says. “When I walked into my first hackathon, it was the most intimidating thing ever. I looked at a room of 80 kids: Five were girls, and I was probably the youngest person there.” But she kept at it, competing in 25 more hackathons, and her confidence grew. To break the isolation and meet more girls in coding, she attended events by organizations like #BuiltByGirls, which is where, a few days previously, she had met Patapati and Balakrishnan and where they decided to team up. To attend TechCrunch, Patapati, who was 16, and Balakrishnan skipped a junior prom and a friend’s birthday party. “Who needs a party when you can go to a hackathon?” Patapati said.

    Winning TechCrunch as a group of young women of color brought extra attention, not all of it positive. “I’ve gotten a lot of comments like: ‘Oh, you won the hackathon because you’re a girl! You’re a diversity pick,’ ” Balakrishnan said. After the prize was announced online, she recalled later, “there were quite a few engineers who commented, ‘Oh, it was a girl pick; obviously that’s why they won.’ ”

    Nearly two years later, Balakrishnan was taking a gap year to create a heart-monitoring product she invented, and she was in the running for $100,000 to develop it. She was applying to college to study computer science and, in her spare time, competing in a beauty pageant, inspired by Miss USA 2017, Kara McCullough, who was a nuclear scientist. “I realized that I could use pageantry as a platform to show more girls that they could embrace their femininity and be involved in a very technical, male-dominated field,” she says. Dinesh, in her final year at high school, had started an all-female hackathon that now takes place annually in New York. (“The vibe was definitely very different,” she says, more focused on training newcomers.)

    Patapati and Dinesh enrolled at Stanford last fall to study computer science; both are deeply interested in A.I. They’ve noticed the subtle tensions for women in the coding classes. Patapati, who founded a Women in A.I. group with an Apple tech lead, has watched as male colleagues ignore her raised hand in group discussions or repeat something she just said as if it were their idea. “I think sometimes it’s just a bias that people don’t even recognize that they have,” she says. “That’s been really upsetting.”

    Dinesh says “there’s absolutely a difference in confidence levels” between the male and female newcomers. The Stanford curriculum is so intense that even the relative veterans like her are scrambling: When we spoke recently, she had just spent “three all-nighters in a row” on a single project, for which students had to engineer a “print” command from scratch. At 18, she has few illusions about the road ahead. When she went to a blockchain conference, it was a sea of “middle-aged white and Asian men,” she says. “I’m never going to one again,” she adds with a laugh.

    “My dream is to work on autonomous driving at Tesla or Waymo or some company like that. Or if I see that there’s something missing, maybe I’ll start my own company.” She has begun moving in that direction already, having met one venture capitalist via #BuiltByGirls. “So now I know I can start reaching out to her, and I can start reaching out to other people that she might know,” she says.

    Will she look around, 20 years from now, to see that software has returned to its roots, with women everywhere? “I’m not really sure what will happen,” she admits. “But I do think it is absolutely on the upward climb.”

    Correction: Feb. 14, 2019
    An earlier version of this article misidentified the institution Ellen Spertus was affiliated with when she published a 1991 report on women’s experiences in programming classes. Spertus was at M.I.T. when she published the report, not Mills College, where she is currently a professor.

    Correction: Feb. 14, 2019
    An earlier version of this article misstated Akshaya Dinesh’s current age. She is 18, not 19.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 3:58 pm on February 17, 2019 Permalink | Reply
    Tags: Asgardia, See the full blog post for images of all of the spacecraft involved and the Heliopause and Heliosphere, Which Spacecraft Will Reach Interstellar Space Next?

    From Asgardia via Medium: “Which Spacecraft Will Reach Interstellar Space Next?” 

    From Asgardia

    via

    Medium


    NASA’s Voyager 2 spacecraft reached interstellar space in December 2018, following in the footsteps of its sister, Voyager 1. Currently, only five spacecraft capable of making such a grand exit have been launched, including the Voyagers. The other three are Pioneers 10 and 11, and New Horizons. Which one will make a great escape next?

    NASA/Voyager 2

    NASA/Voyager 1

    NASA Pioneer 10

    NASA Pioneer 11

    NASA/New Horizons spacecraft

    Reaching interstellar space is a milestone often equated with leaving the solar system, depending on the definition. In 1990, the New York Times reported that Pioneer left the solar system when it flew past Neptune’s orbit. But that’s not the definition Voyager 2’s scientists used. Instead, more recent measurements take the crossing of the sun’s heliopause, the theoretical outer boundary of its heliosphere, as the determining factor for entering interstellar space.

    The heliosphere is a bubble of charged particles created by the sun and blown outward with the solar wind. Scientists use its edge to mark where interstellar space starts.

    NASA Heliosphere

    However, the heliosphere is tricky to pin down: it changes with the sun’s 22-year solar cycle, shrinks and grows with the solar wind, and stretches out behind the sun on the side trailing the star’s direction of travel. It’s not something that can be measured easily from Earth. Thus, NASA’s Interstellar Boundary Explorer (IBEX) mission is trying to define the edges of the bubble remotely.

    Observations from the Voyager probes indicate that they’ve pierced this bubble. However, because researchers think the sun is also surrounded by the Oort Cloud, a region of icy bodies estimated to stretch from 1,000 to 100,000 astronomical units, far beyond the heliopause, the Voyager probes cannot be considered entirely outside the solar system. (One astronomical unit, or AU, is the distance between the Earth and the sun: 93 million miles, or 150 million kilometres.)

    Oort cloud Image by TypePad, http://goo.gl/NWlQz6

    Oort Cloud, The layout of the solar system, including the Oort Cloud, on a logarithmic scale. Credit: NASA, Universe Today

    When Voyager 1 and 2 crossed the heliopause, their still-working particle instruments recorded the historic crossings. The heliosphere functions as a shield, keeping out many of the high-energy cosmic-ray particles generated beyond the solar system.

    Magnetosphere of Earth, original bitmap from NASA. SVG rendering by Aaron Kaase

    By tracking both the low-energy particles found inside the solar system and the high-energy particles from outside it, the instruments could reveal a sudden surge of cosmic rays, alerting scientists that the spacecraft had left the solar system.

    The ever-changing nature of the heliosphere makes it impossible to tell when Pioneer 10 and 11 will enter interstellar space. It’s even possible that one of them may have already.

    As per NASA’s e-book Beyond Earth: A Chronicle of Deep Space Exploration, as of Nov. 5, 2017, Pioneer 10 was approximately 118.824 AU from Earth, farther than any craft besides Voyager 1. Although Pioneer 11 and the Voyager twins are all heading in the direction of the sun’s apparent travel, Pioneer 10 is headed toward the trailing side. Research from 2017 showed that the tail of the heliosphere extends around 220 AU from the sun. Since Pioneer 10 travels about 2.5 AU per year, it will take roughly 40 years, until about 2057, to reach that shifting boundary.

    Pioneer 11 was approximately 97.6 AU from Earth as of Nov. 5, 2017, according to the same e-book. Unlike its twin, the spacecraft is travelling in roughly the same direction as the Voyagers. Voyager 2 crossed into the interstellar medium at approximately 120 AU. Since Pioneer 11 is moving at 2.3 AU per year, it should reach interstellar space in about eight years, around 2027, assuming the boundary doesn’t move, which it almost certainly will.

    On Jan. 1, 2019, New Horizons made its most recent flyby of a solar system object; it was launched much later than the other four. During this flyby, New Horizons was 43 AU from the sun. The mission’s principal investigator, Alan Stern, told Space.com that the spacecraft was travelling approximately 3.1 AU each year, or 31 AU per decade. In another two decades, the spacecraft has a good chance of reaching interstellar space. If New Horizons crossed at the same distance as Voyager 2 (it won’t, but take that as a baseline), it would make the trip in just under 25 years, in late 2043. But it’s possible the boundary will move inward, allowing it to cross sooner.

    Although there won’t be a direct confirmation of crossing the heliopause from the Pioneer spacecraft, it’s possible that New Horizons will still be working then and will give us a detailed study of interstellar space. Its particle detectors are much more potent than the ones on the Voyagers, Stern said. Moreover, New Horizons carries a dust detector that would offer insight into the region beyond the heliosphere.

    However, whether or not they will still be functioning remains to be seen. As per Stern, power is the limiting factor. New Horizons runs off of decaying plutonium dioxide. Presently, the spacecraft has enough power to work until the late 2030s, said Stern, and it is currently in good working order.

    In the unlikely event that the ever-changing heliosphere were to remain static, Pioneer 11 would be the next to cross the heliopause, in 2027, followed by New Horizons in 2043. Pioneer 10, the first of the five spacecraft to launch, would be the last to leave, in 2057. Once again, this assumes the heliopause stays put for the next four decades, which it almost certainly will not.
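    The crossing-year arithmetic above can be collected in a few lines. The positions, speeds, and boundary distances are the figures quoted in the article, and the heliopause is assumed static, which the article stresses is unrealistic:

    ```python
    def crossing_year(start_year, start_au, speed_au_per_year, boundary_au):
        """Year a craft reaches boundary_au, at constant speed, for a fixed boundary."""
        return start_year + (boundary_au - start_au) / speed_au_per_year

    # Pioneer 10: 118.824 AU in late 2017, 2.5 AU/yr, tail-side boundary ~220 AU
    p10 = crossing_year(2017, 118.824, 2.5, 220)   # ~2057
    # Pioneer 11: 97.6 AU in late 2017, 2.3 AU/yr, nose-side boundary ~120 AU
    p11 = crossing_year(2017, 97.6, 2.3, 120)      # ~2027
    # New Horizons: 43 AU at the start of 2019, 3.1 AU/yr, same ~120 AU baseline
    nh = crossing_year(2019, 43, 3.1, 120)         # ~2043.8

    for name, yr in [("Pioneer 10", p10), ("Pioneer 11", p11), ("New Horizons", nh)]:
        print(f"{name}: ~{yr:.1f}")
    ```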

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    About Medium

    Medium is an online publishing platform developed by Evan Williams, and launched in August 2012. It is owned by A Medium Corporation. The platform is an example of social journalism, having a hybrid collection of amateur and professional people and publications, or exclusive blogs or publishers on Medium, and is regularly regarded as a blog host.

    Williams developed Medium as a way to publish writings and documents longer than Twitter’s 140-character (now 280-character) maximum.

     
  • richardmitnick 3:02 pm on February 16, 2019 Permalink | Reply
    Tags: Ask Ethan: What Will Our First Direct Image Of An Earth-Like Exoplanet Look Like?, You’d be amazed at what you can learn from even one single pixel

    From Ethan Siegel: “Ask Ethan: What Will Our First Direct Image Of An Earth-Like Exoplanet Look Like?” 

    From Ethan Siegel
    Feb 16, 2019

    You’d be amazed at what you can learn from even one single pixel.

    Left, an image of Earth from the DSCOVR-EPIC camera. Right, the same image degraded to a resolution of 3 x 3 pixels, similar to what researchers will see in future exoplanet observations. (NOAA/NASA/STEPHEN KANE)

    NOAA DSCOVR Deep Space Climate Observatory

    NOAA Deep Space Climate Observatory

    NASA EPIC (Earth Polychromatic Imaging Camera) on NOAA DSCOVR (Deep Space Climate Observatory)

    Over the past decade, owing largely to NASA’s Kepler mission, our knowledge of planets around star systems beyond our own has increased tremendously.

    NASA/Kepler Telescope

    From just a few worlds — mostly massive, with quick, inner orbits, and around lower-mass stars — to literally thousands of widely varying sizes, we now know that Earth-sized and slightly larger worlds are extremely common. With the next generation of observatories coming both in space (like the James Webb Space Telescope) and on the ground (with observatories like GMT and ELT), it will become possible to image the closest such worlds directly. What will that look like? That’s what Patreon supporter Tim Graham wants to know, asking:

    “[W]hat kind of resolution can we expect? [A] few pixels only or some features visible?”

    The picture itself won’t be impressive. But what it will teach us is everything we could reasonably dream of.

    NASA/ESA/CSA Webb Telescope annotated

    Giant Magellan Telescope, to be at the Carnegie Institution for Science’s Las Campanas Observatory, to be built some 115 km (71 mi) north-northeast of La Serena, Chile, over 2,500 m (8,200 ft) high

    ESO/E-ELT, to be built atop Cerro Armazones in the Atacama Desert of northern Chile, at the summit of the mountain at an altitude of 3,060 metres (10,040 ft).

    An artist’s rendition of Proxima b orbiting Proxima Centauri. With 30-meter class telescopes like GMT and ELT, we’ll be able to directly image it, as well as any outer, yet-undetected worlds. However, it won’t look anything like this through our telescopes. (ESO/M. KORNMESSER)

    Let’s get the bad news out of the way first. The closest star system to us is the Alpha Centauri system, itself located just over 4 light years away. It consists of three stars:

    Alpha Centauri A, which is a Sun-like (G-class) star,
    Alpha Centauri B, which is a little cooler and less massive (K-class), but orbits Alpha Centauri A at a distance of the gas giants in our Solar System, and
    Proxima Centauri, which is much cooler and less massive (M-class), and is known to have at least one Earth-sized planet.

    Centauris Alpha Beta Proxima 27, February 2012. Skatebiker

    While there might be many more planets around this trinary star system, the fact is that planets are small and the distances to them, particularly beyond our own Solar System, are tremendous.

    This diagram shows the novel 5-mirror optical system of ESO’s Extremely Large Telescope (ELT). Before reaching the science instruments the light is first reflected from the telescope’s giant concave 39-metre segmented primary mirror (M1), it then bounces off two further 4-metre-class mirrors, one convex (M2) and one concave (M3). The final two mirrors (M4 and M5) form a built-in adaptive optics system to allow extremely sharp images to be formed at the final focal plane. This telescope will have more light-gathering power and better angular resolution, down to 0.005″, than any telescope in history. (ESO)

    The largest telescope being built of all, the ELT, will be 39 meters in diameter, meaning it has a maximum angular resolution of 0.005 arc seconds, where 60 arc seconds make up 1 arc minute, and 60 arc minutes make up 1 degree. If you put an Earth-sized planet at the distance of Proxima Centauri, the nearest star beyond our Sun at 4.24 light years, it would have an angular diameter of 67 micro-arc seconds (μas), meaning that even our most powerful upcoming telescope would be about a factor of 74 too small to fully resolve an Earth-sized planet.
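    Those numbers can be checked with a quick back-of-the-envelope calculation. The sketch below uses round values for Earth’s diameter and the light year, so the results land slightly off the article’s 67 μas and factor of 74:

    ```python
    # Earth's angular diameter at Proxima Centauri's distance, versus the ELT's
    # quoted 0.005-arcsecond resolution limit.
    EARTH_DIAMETER_KM = 12_742
    LY_IN_KM = 9.4607e12
    ARCSEC_PER_RAD = 206_265

    distance_km = 4.24 * LY_IN_KM
    theta_rad = EARTH_DIAMETER_KM / distance_km        # small-angle approximation
    theta_uas = theta_rad * ARCSEC_PER_RAD * 1e6       # micro-arcseconds

    elt_limit_uas = 0.005 * 1e6                        # 0.005" = 5,000 uas
    shortfall = elt_limit_uas / theta_uas              # how far short of resolving Earth

    print(f"Earth at Proxima: {theta_uas:.0f} uas")    # roughly the article's ~67 uas
    print(f"ELT falls short by a factor of ~{shortfall:.0f}")
    ```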

    The best we can hope for is a single, saturated pixel, with the light bleeding into the surrounding, adjacent pixels on our most advanced, highest-resolution cameras. Visually, it’s a tremendous disappointment for anyone hoping to get a spectacular view like the illustrations NASA has been putting out.

    Artist’s conception of the exoplanet Kepler-186f, which may exhibit Earth-like (or early, life-free Earth-like) properties. As imagination-sparking as illustrations like this are, they’re mere speculations, and the incoming data won’t provide any views akin to this at all. (NASA AMES/SETI INSTITUTE/JPL-CALTECH)


    But that’s where the letdown ends. By using coronagraph technology, we’ll be able to block out the light from the parent star, viewing the light from the planet directly. Sure, we’ll only get a pixel’s worth of light, but it won’t be one continuous, steady pixel at all. Instead, we’ll get to monitor that light in three different ways:

    In a variety of colors, photometrically, teaching us what the overall optical properties of any imaged planet are.

    Spectroscopically, which means we can break that light up into its individual wavelengths, and look for signatures of particular molecules and atoms on its surface and in its atmosphere.

    Over time, meaning we can measure how both of the above change as the planet both rotates on its axis and revolves, seasonally, around its parent star.

    From just a single pixel’s worth of light, we can determine a whole slew of properties about any world in question. Here are some of the highlights.

    Illustration of an exoplanetary system, potentially with an exomoon orbiting it. (NASA/DAVID HARDY, VIA ASTROART.ORG)

    By measuring the light reflecting off of a planet over the course of its orbit, we’ll be sensitive to a variety of phenomena, some of which we already see on Earth. If the world has a difference in albedo (reflectivity) from one hemisphere to another, and rotates in any fashion other than one that’s tidally locked to its star in a 1-to-1 resonance, we’ll be able to see a periodic signal emerging as the star-facing side changes with time.

    A world with continents and oceans, for example, would display a signal that rose-and-fell in a variety of wavelengths, corresponding to the portion that was in direct sunlight reflecting that light back to our telescopes here in the Solar System.
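    As an illustrative sketch of that rise-and-fall signal (the two-hemisphere planet, its albedo values, and the 24-hour period below are invented for the example, not taken from the article), a single pixel's brightness from a rotating world with uneven reflectivity varies periodically:

    ```python
    import math

    # Toy planet: one bright "ocean + continent" hemisphere (albedo 0.35) and one
    # darker hemisphere (albedo 0.15), rotating with a 24-hour day. The
    # disk-averaged reflectivity — our single pixel — oscillates as each
    # hemisphere turns toward the star and toward us.

    def pixel_albedo(t_hours, period_hours=24.0, bright=0.35, dark=0.15):
        """Disk-averaged albedo at time t: a smooth blend of the two hemispheres."""
        phase = 2 * math.pi * t_hours / period_hours
        weight = 0.5 * (1 + math.cos(phase))   # fraction of bright hemisphere in view
        return weight * bright + (1 - weight) * dark

    curve = [pixel_albedo(t) for t in range(0, 48, 3)]   # two rotations, 3 h sampling
    print(min(curve), max(curve))   # oscillates between the two hemisphere albedos
    ```

    Subtracting out this rotational signal is what lets the slower, seasonal changes described below be seen.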

    Hundreds of candidate planets have been discovered so far in the data collected and released by NASA’s Transiting Exoplanet Survey Satellite (TESS), with eight of them having been confirmed thus far by follow-up measurements.

    NASA/MIT TESS

    Three of the most unique, interesting exoplanets are illustrated here, with many more to come. Some of the closest worlds to be discovered by TESS will be candidates for being Earth-like and within the reach of direct imaging. (NASA/MIT/TESS)

    Owing to the power of direct imaging, we could directly measure changes in the weather on a planet beyond our own Solar System.

    The 2001–2002 composite images of the Blue Marble, constructed with NASA’s Moderate Resolution Imaging Spectroradiometer (MODIS) data.

    NASA Terra MODIS schematic

    NASA Terra satellite

    As an exoplanet rotates and its weather changes, we can tease out or reconstruct variations in the planetary continent/ocean/icecap ratios, as well as the signal of cloud cover. (NASA)

    Life may be a more difficult signal to tease out, but if there were an exoplanet with life on it, similar to Earth, we would see some very specific seasonal changes. On Earth, the fact that our planet rotates on its axis means that in winter, where our hemisphere faces away from the Sun, the icecaps grow larger, the continents grow more reflective with snow extending down to lower latitudes, and the world becomes less green in its overall color.

    Conversely, in the summer, our hemisphere faces towards the Sun. The icecaps shrink while the continents turn green: the dominant color of plant life on our planet. Similar seasonal changes will affect the light coming from any exoplanet we image, allowing us to tease out not only seasonal variations, but the specific percent changes in color distribution and reflectivity.

    In this image of Titan, the methane haze and atmosphere are shown in near-transparent blue, with surface features beneath the clouds displayed. A composite of ultraviolet, optical, and infrared light was used to construct this view. By combining similar data sets over time for a directly imaged exoplanet, even with just a single pixel, we could reconstruct a huge slew of its atmospheric, surface, and seasonal properties. (NASA/JPL/SPACE SCIENCE INSTITUTE)

    Overall planetary and orbital characteristics should emerge as well. Unless we’ve observed a planetary transit from our point of view — where the planet in question passes between us and the star it orbits — we cannot know the orientation of its orbit.

    Planet transit. NASA/Ames

    This means we can’t know what the planet’s mass is; we can only know some combination of its mass and the angle of its orbit’s tilt.

    But if we can measure how the light from it changes over time, we can infer what its phases must look like, and how those change over time. We can use that information to break that degeneracy, and determine its mass and orbital tilt, as well as the presence or absence of any large moons around that planet. From even just a single pixel, the way the brightness changes once color, cloud cover, rotation, and seasonal changes are subtracted out should allow us to learn all of this.

    The phases of Venus, as viewed from Earth, are analogous to an exoplanet’s phases as it orbits its star. If the ‘night’ side exhibits certain temperature/infrared properties, exactly the ones that James Webb [above] will be sensitive to, we can determine whether they have atmospheres, as well as spectroscopically determining what the atmospheric contents are. This remains true even without measuring them directly via a transit. (WIKIMEDIA COMMONS USERS NICHALP AND SAGREDO)

    This will be important for a huge number of reasons. Yes, the big, obvious hope is that we’ll find an oxygen-rich atmosphere, perhaps even coupled with an inert but common molecule like nitrogen gas, creating a truly Earth-like atmosphere. But we can go beyond that and look for the presence of water. Other signatures of potential life, like methane and carbon dioxide, can be sought out as well. And another fun advance that’s greatly underappreciated today will come in the direct imaging of super-Earth worlds. Which ones have giant hydrogen and helium gas envelopes and which ones don’t? In a direct fashion, we’ll finally be able to draw a conclusive line.

    The classification scheme of planets as either rocky, Neptune-like, Jupiter-like or stellar-like. The border between Earth-like and Neptune-like is murky, but direct imaging of candidate super-Earth worlds should enable us to determine whether there’s a gas envelope around each planet in question or not. (CHEN AND KIPPING, 2016, VIA ARXIV.ORG/PDF/1603.08614V2.PDF)

    If we truly wanted to image features on a planet beyond our Solar System, we’d need a telescope hundreds of times as large as the largest ones currently being planned: multiple kilometers in diameter. Until that day comes, however, we can look forward to learning so many important things about the nearest Earth-like worlds in our galaxy. TESS is out there, finding those planets right now. James Webb is complete, waiting for its 2021 launch date. Three 30-meter class telescopes are in the works, with the first one (GMT) slated to come online in 2024 and the largest one (ELT) to see first light in 2025. By this time a decade from now, we’ll have direct image (optical and infrared) data on dozens of Earth-sized and slightly larger worlds, all beyond our Solar System.

    A single pixel may not seem like much, but when you think about how much we can learn — about seasons, weather, continents, oceans, icecaps, and even life — it’s enough to take your breath away.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

     
  • richardmitnick 1:34 pm on February 16, 2019 Permalink | Reply
    Tags: Magnetopause

    From Queen Mary University of London: “Earth’s magnetic shield booms like a drum when hit by impulses” 

    From Queen Mary University of London

    12 February 2019

    The Earth’s magnetic shield booms like a drum when it is hit by strong impulses, according to new research from Queen Mary University of London.

    1
    Artist rendition of a plasma jet impact (yellow) generating standing waves at the magnetopause boundary (blue) and in the magnetosphere (green). The outer group of four THEMIS probes witnessed the flapping of the magnetopause over each satellite in succession, confirming the expected behaviour/frequency of the theorised magnetopause eigenmode wave. (Credit: E. Masongsong/UCLA, M. Archer/QMUL, H. Hietala/UTU)

    NASA THEMIS satellite

    As an impulse strikes the outer boundary of the shield, known as the magnetopause, ripples travel along its surface which then get reflected back when they approach the magnetic poles.

    Magnetosphere of Earth, original bitmap from NASA. SVG rendering by Aaron Kaase

    The interference of the original and reflected waves leads to a standing wave pattern, in which specific points appear to be standing still while others vibrate back and forth. A drum resonates like this when struck in exactly the same way.
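    The superposition described above can be written down directly: two identical waves travelling in opposite directions sum to a pattern with fixed nodes. A minimal numeric check (a toy model with arbitrary wavenumber and frequency, not the paper's magnetopause calculation):

    ```python
    import math

    # Two counter-propagating surface waves of equal amplitude:
    #   y(x, t) = sin(kx - wt) + sin(kx + wt) = 2 sin(kx) cos(wt)
    # Points where sin(kx) = 0 are nodes: they stay still at all times, while
    # points in between oscillate back and forth — the drum-like standing wave.

    k, w = 2 * math.pi, 2 * math.pi   # one wavelength per unit length, period 1

    def displacement(x, t):
        return math.sin(k * x - w * t) + math.sin(k * x + w * t)

    node, antinode = 0.5, 0.25        # sin(k*0.5) = 0; sin(k*0.25) = 1
    samples = [0.0, 0.1, 0.3, 0.7]
    print(max(abs(displacement(node, t)) for t in samples))      # ~0: a fixed point
    print(max(abs(displacement(antinode, t)) for t in samples))  # oscillates, up to 2
    ```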

    This study, published in Nature Communications, describes the first time this effect has been observed after it was theoretically proposed 45 years ago.

    Movements of the magnetopause are important in controlling the flow of energy within our space environment with wide-ranging effects on space weather, which is how phenomena from space can potentially damage technology like power grids, GPS and even passenger airlines.

    The discovery that the boundary moves in this way sheds light on potential global consequences that previously had not been considered.

    Hard to detect

    Dr Martin Archer, space physicist at Queen Mary University of London, and lead author of the paper, said: “There had been speculation that these drum-like vibrations might not occur at all, given the lack of evidence over the 45 years since they were proposed. Another possibility was that they are just very hard to definitively detect.

    “Earth’s magnetic shield is continuously buffeted with turbulence so we thought that clear evidence for the proposed booming vibrations might require a single sharp hit from an impulse. You would also need lots of satellites in just the right places during this event so that other known sounds or resonances could be ruled out. The event in the paper ticked all those quite strict boxes and at last we’ve shown the boundary’s natural response.”

    The researchers used observations from five NASA THEMIS [above] satellites when they were ideally located as a strong isolated plasma jet slammed into the magnetopause. The probes were able to detect the boundary’s oscillations and the resulting sounds within the Earth’s magnetic shield, which agreed with the theory and gave the researchers the ability to rule out all other possible explanations.

    Solar wind impact

    Many impulses which can impact our magnetic shield originate from the solar wind, charged particles in the form of plasma that continually blow off the Sun, or are a result of the complicated interaction of the solar wind with Earth’s magnetic field, as was technically the case for this event.

    The interplay of Earth’s magnetic field with the solar wind forms a magnetic shield around the planet, bounded by the magnetopause, which protects us from much of the radiation present in space.

    Other planets like Mercury, Jupiter and Saturn also have similar magnetic shields and so the same drum-like vibrations may be possible elsewhere.

    Further research is needed to understand how often the vibrations occur at Earth and whether they exist at other planets as well. Their consequences also need further study using satellite and ground-based observations.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    At Queen Mary University of London, we believe that a diversity of ideas helps us achieve the previously unthinkable.

    Throughout our history, we’ve fostered social justice and improved lives through academic excellence. And we continue to live and breathe this spirit today, not because it’s simply ‘the right thing to do’ but for what it helps us achieve and the intellectual brilliance it delivers.

    Our reformer heritage informs our conviction that great ideas can and should come from anywhere. It’s an approach that has brought results across the globe, from the communities of east London to the favelas of Rio de Janeiro.

    We continue to embrace diversity of thought and opinion in everything we do, in the belief that when views collide, disciplines interact, and perspectives intersect, truly original thought takes form.

     
  • richardmitnick 10:11 pm on February 15, 2019 Permalink | Reply
    Tags: "Tidal Tails – The Beginning Of The End Of An Open Star Cluster", The Hyades, the star cluster closest to the Sun

    From University of Heidelberg: “Tidal Tails – The Beginning Of The End Of An Open Star Cluster” 

    U Heidelberg bloc

    From University of Heidelberg

    15 February 2019

    Heidelberg researchers verify this phenomenon using Gaia data from the Hyades.

    ESA/GAIA satellite

    Image of the Hyades, the star cluster closest to the Sun. Source: NASA, ESA, and STScI

    NASA/ESA Hubble Telescope

    In the course of their life, open star clusters continuously lose stars to their surroundings. The resulting swath of tidal tails provides a glimpse into the evolution and dissolution of a star cluster. Thus far only tidal tails of massive globular clusters and dwarf galaxies have been discovered in the Milky Way system. In open clusters, this phenomenon existed only in theory. Researchers at Heidelberg University have now finally verified the existence of such a tidal tail in the star cluster closest to the Sun, the Hyades. An analysis of measurements from the Gaia satellite led to the discovery.

    Open star clusters are collections of approximately 100 to a few thousand stars that emerge almost simultaneously from a collapsing gas cloud and move through space at about the same speed. Owing to a number of influences, however, they do begin to disperse after a few hundred million years. Among the factors working against the gravitationally bound stars is the tidal force of a galaxy, which pulls the stars out of the cluster. Tidal tails then form during the movement of the star cluster through the Milky Way. It is the beginning of the end of an open star cluster.
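The stripping described above sets in beyond the cluster's tidal (Jacobi) radius, which a standard point-mass estimate puts at r_J ≈ R_gc·(m/3M_enc)^(1/3). The cluster mass, galactocentric distance, and circular speed below are illustrative assumptions of roughly Hyades-like scale, not values from the study.

```python
# Rough tidal (Jacobi) radius beyond which the Galaxy's tidal force
# strips stars from a cluster: r_J ~ R_gc * (m / (3 * M_enc))**(1/3).
G = 6.674e-11                    # gravitational constant, SI
M_sun = 1.989e30                 # solar mass, kg
pc = 3.086e16                    # parsec, m
R_gc = 8300 * pc                 # assumed galactocentric distance (illustrative)
v_circ = 220e3                   # assumed Galactic circular speed, m/s (illustrative)
M_enc = v_circ**2 * R_gc / G     # Galactic mass enclosed within R_gc (flat rotation curve)
m_cluster = 400 * M_sun          # assumed cluster mass (illustrative)
r_J = R_gc * (m_cluster / (3 * M_enc))**(1 / 3)
print(f"tidal radius ~ {r_J / pc:.0f} pc")
```

Under these assumptions the tidal radius is of order ten parsecs, so stars drifting beyond that distance are gradually drawn out into the leading and trailing tails.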

    2
    Position of the Hyades and its now observed tidal tails in the sky. The background shows Gaia’s all-sky view of our Milky Way Galaxy. Source: S. Röser, ESA/Gaia/DPAC

    Together with researchers from the Max Planck Institute for Astronomy in Heidelberg, scientists from the Centre for Astronomy of Heidelberg University (ZAH) have detected this phenomenon for the first time in the Hyades, one of the older and best-studied open star clusters in the Milky Way system. They studied the data published in April 2018 from the Gaia satellite, which has been systematically mapping the heavens for five years. Rather than taking direct photographs, Gaia measures the stars’ motion and position.

    From this data, the Heidelberg astronomers identified two tidal tails of the Hyades with a total of approximately 500 stars extending up to 650 light-years from the cluster. Dr Siegfried Röser of the Königstuhl State Observatory of the ZAH explains that one of the tails precedes the open star cluster and the other follows it. “Our discovery shows that it is possible to trace the trajectories of individual stars of the Milky Way back to their point of origin in a star cluster”, states Dr Röser. The astronomer believes that this marks the beginning of many significant discoveries in galactic astronomy. Apart from the Heidelberg astronomers, a team of researchers from Vienna also discovered the tidal tails of the Hyades.
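The kind of kinematic selection used here, picking out stars that share the cluster's motion, can be sketched with synthetic data. The mean proper motion, dispersions, star counts, and the 5 mas/yr cut below are all invented for illustration, not values from the Gaia analysis.

```python
import numpy as np

# Toy sketch: tail candidates share the cluster's velocity, so a cut
# in proper-motion space separates them from the field population.
rng = np.random.default_rng(0)
cluster_pm = np.array([-6.0, 45.0])                        # assumed mean proper motion, mas/yr
field = rng.normal(0.0, 30.0, size=(10000, 2))             # broad field-star distribution
members = cluster_pm + rng.normal(0.0, 1.5, size=(500, 2)) # co-moving cluster/tail stars
all_pm = np.vstack([field, members])
dist = np.linalg.norm(all_pm - cluster_pm, axis=1)
candidates = all_pm[dist < 5.0]                            # 5 mas/yr velocity-space cut
print(len(candidates))
```

The cut recovers essentially all of the injected co-movers plus a small number of chance interlopers; the real analysis must additionally model distances and the tails' extended geometry on the sky.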

    The research was conducted under the auspices of The Milky Way System Collaborative Research Centre (CRC 881) at Heidelberg University, which is funded by the German Research Foundation.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    U Heidelberg Campus

    Founded in 1386, Heidelberg University, a state university of Baden-Württemberg, is Germany’s oldest university. In continuing its time-honoured tradition as a research university of international standing, the Ruprecht-Karls-University’s mission is guided by the following principles:
    Firmly rooted in its history, the University is committed to expanding and disseminating our knowledge about all aspects of humanity and nature through research and education. The University upholds the principle of freedom of research and education, acknowledging its responsibility to humanity, society, and nature.

     
  • richardmitnick 9:52 pm on February 15, 2019 Permalink | Reply
    Tags: "Space Cow Mystifies Astronomers", Could we be witnessing a dying star giving birth to an X-ray engine?

    From ESOblog: “Space Cow Mystifies Astronomers” 

    ESO 50 Large

    From ESOblog

    Science Snapshots – ALMA

    Could we be witnessing a dying star giving birth to an X-ray engine?

    15 February 2019

    One night in June 2018, telescopes spotted an extremely bright point of light in the sky that had seemingly appeared out of nowhere. Observations across the electromagnetic spectrum, made using telescopes from around the world, suggest that the light is likely to be the explosive death of a star giving birth to a neutron star or black hole. If so, this would be the first time ever that this has been observed. We find out more from Anna Ho, who led a team that used a variety of telescopes to figure out what exactly this mysterious object — classified as a transient and nicknamed The Cow — is.

    Anna Ho

    Q. What is a transient, and why is it interesting to study them?

    A. The night sky appears calm but it is actually incredibly dynamic, with stars exploding in distant galaxies, visible through our telescopes as flashes of light. The word “transient” refers to a short-lived phenomenon in the night sky, which could be the explosion of a dying star, a tidal disruption event, or a flare from a star in the Milky Way. And there are probably many other types of transients out there that we have not even discovered!

    Q. So given that transients are sudden phenomena that you can’t predict, how can you possibly plan for studying them?

    A. It’s kind of a case of reacting to their appearance. In the past few years, we’ve entered this amazing new era of astronomy where telescopes can map out the entire sky every night. By comparing tonight’s map to last night’s map, we can see exactly what has changed over the previous 24 hours. The transients I study are very short-lived explosions — lasting between a few hours and a few months — so when an interesting one happens, we have to drop everything and react. Luckily I love my research enough to do this!
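The nightly map comparison Ho describes is, at heart, difference imaging: subtract last night's image from tonight's and flag pixels that changed significantly. A toy sketch with synthetic images (the pixel values, noise level, and injected transient are all invented for illustration):

```python
import numpy as np

# Toy difference imaging: a new transient shows up as a large residual
# when tonight's map is subtracted from last night's map.
rng = np.random.default_rng(1)
last_night = rng.normal(100.0, 1.0, size=(64, 64))       # static sky, with noise
tonight = last_night + rng.normal(0.0, 1.0, size=(64, 64))
tonight[40, 17] += 50.0                                  # a bright new source appears
diff = tonight - last_night
ys, xs = np.where(diff > 5.0 * diff.std())               # flag significant changes
print(list(zip(ys, xs)))
```

Real survey pipelines must also align the images, match their point-spread functions, and reject artefacts, but the core idea is this subtraction.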

    It is only by using lots of different telescopes that we can really get a full picture of a transient.

    ALMA and Very Large Array (VLA) images of the mysterious transient, The Cow.
    Credit: Sophia Dagnello, NRAO/AUI/NSF; R. Margutti, W.M. Keck Observatory; Ho, et al.

    NRAO/Karl V Jansky Expanded Very Large Array, on the Plains of San Agustin fifty miles west of Socorro, NM, USA, at an elevation of 6970 ft (2124 m)

    Q. In June 2018, you observed an unusual transient that was named AT2018cow, or The Cow. Can you describe this phenomenon? What made it so remarkable?

    A. One night, astronomers saw a point of light in the sky that had not been there before: a new transient! The Cow was particularly special for two reasons: firstly, it was VERY bright, and secondly, it had achieved that brightness VERY quickly. This was exciting, because usually if a transient appears very quickly, it is not so bright, and a very bright transient takes a long time to become bright. So we realised immediately that this was something strange.

    Q. You chose to study this transient with two millimetre telescopes: the Submillimeter Array (SMA) and ALMA (Atacama Large Millimeter/Submillimeter Array). What do millimetre telescopes offer over other telescopes?

    CfA Submillimeter Array Mauna Kea, Hawaii, USA, Altitude 4,080 m (13,390 ft)

    A. In the early stages of a transient (in its first few weeks of existence), we can see the shockwave emitted by an explosion by capturing light at millimetre wavelengths — this is exactly what SMA and ALMA can see. In particular, thanks to ALMA we were able to learn that in the case of The Cow, the shockwave was travelling at one-tenth of the speed of light, that it is very energetic, and that it is travelling into a very dense environment.
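A shock front moving at one-tenth of the speed of light covers an enormous distance within weeks, which is why the millimetre emission probes such a large volume so quickly. The observation epoch below is an illustrative choice, not the actual ALMA date.

```python
# Back-of-envelope distance covered by a shock at 0.1c.
c = 2.998e8              # speed of light, m/s
v = 0.1 * c              # shock speed quoted in the interview
AU = 1.496e11            # astronomical unit, m
seconds = 22 * 86400     # e.g. 22 days after the explosion (illustrative epoch)
r = v * seconds
print(f"{r:.2e} m ~ {r / AU:.0f} AU")
```

After a few weeks such a shock has already swept out to hundreds of astronomical units, far beyond the scale of a planetary system.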

    We also used the Australia Telescope Compact Array to look at light from the transient with longer wavelengths. It is only by using lots of different telescopes that we can really get a full picture of a transient.

    The CSIRO Australia Telescope Compact Array, six 22-m radio antennas at the Paul Wild Observatory, located about twenty-five kilometres (16 mi) west of the town of Narrabri in Australia.

    By combining ALMA data with publicly available X-ray data, we were also able to conclude that there must be some ongoing energy production — a kind of continuously-running “engine” at the heart of the explosion. This could be an accreting black hole or a rapidly-spinning neutron star with a strong magnetic field (a magnetar). If The Cow does turn out to have either of these at its centre, it would be very exciting, since it would be the first time that astronomers have witnessed the birth of a central engine.

    Q. It seems that nobody’s quite sure what The Cow is. Why is there so much uncertainty still surrounding this object?

    A. It’s because the combination of The Cow’s properties is so unusual. It’s like the parable of the blind men and the elephant, in which several blind men each feel a different part of an elephant and come to different conclusions about what it might look like. If you look at the visible light from The Cow, you might conclude that it is a tidal disruption event. On the other hand, if you look at the longer-wavelength light you see the properties of the shockwave and the density of the surrounding matter, and might conclude that it’s a stellar explosion. It’s incredibly difficult to reconcile all of the properties into one big picture.

    Artist’s impression of a cosmic blast with a “central engine,” such as that suggested for The Cow. At the moment, the central engine is surrounded by dust and gas.
    Credit: Bill Saxton, NRAO/AUI/NSF

    Q. How will you find out what The Cow really is?

    A. Right now, the heart of the explosion is shrouded in gas and dust, so it’s difficult to see. Over the next months, this gas and dust will expand out into space, becoming thinner and more transparent, and allowing us to peer inside. When we are able to see into that central engine, we will be able to learn more about what is there, whether it’s a black hole, a neutron star, or something else entirely.

    Q. What do you think The Cow is, and why?

    A. Personally, I think it’s most likely to be a stellar explosion. Our ALMA observations enabled us to measure the surrounding environment to be incredibly dense — 300 000 particles per cubic centimetre! This kind of density is typical of a stellar explosion. Some people suggest it’s a tidal disruption event, but I think this would be difficult to explain. That said, I’m far from an expert on tidal disruption, so I look forward to hearing more from theorists on how to reconcile that model with our observations.
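For reference, the quoted number density can be converted to an SI mass density, assuming (as a simplification) that the surrounding material is pure hydrogen:

```python
# Convert the quoted circumstellar number density to a mass density.
n_cm3 = 3.0e5          # particles per cubic centimetre (from the interview)
m_H = 1.6735e-27       # mass of a hydrogen atom, kg
n_m3 = n_cm3 * 1e6     # convert cm^-3 to m^-3
rho = n_m3 * m_H
print(f"{rho:.1e} kg/m^3")
```

Even this "incredibly dense" environment is still an extraordinarily good vacuum by terrestrial standards, yet it is millions of times denser than the typical interstellar medium.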

    Q. So what are the implications of this discovery? What does The Cow teach us about transients?

    A. From my perspective, The Cow is incredibly exciting for two reasons. One is astrophysical — what it can teach us about the death of stars. We think we’ve witnessed the birth of a central engine, an accreting black hole or a spinning neutron star, for the first time.

    The second reason is technological — we learned that this is a member of a whole class of explosions that in their youth emitted bright light at millimetre wavelengths. In the past, millimetre observatories like ALMA were rarely used to study cosmic explosions, but this study has opened the curtain on a new class of transients that are prime targets for millimetre observatories. Over the next few years, we hope to discover many more members of this class, and now we know that we should use millimetre telescopes to study them!

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Visit ESO in Social Media-

    Facebook

    Twitter

    YouTube

    ESO Bloc Icon

    ESO is the foremost intergovernmental astronomy organisation in Europe and the world’s most productive ground-based astronomical observatory by far. It is supported by 16 countries: Austria, Belgium, Brazil, the Czech Republic, Denmark, France, Finland, Germany, Italy, the Netherlands, Poland, Portugal, Spain, Sweden, Switzerland and the United Kingdom, along with the host state of Chile. ESO carries out an ambitious programme focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope, the world’s most advanced visible-light astronomical observatory and two survey telescopes. VISTA works in the infrared and is the world’s largest survey telescope and the VLT Survey Telescope is the largest telescope designed to exclusively survey the skies in visible light. ESO is a major partner in ALMA, the largest astronomical project in existence. And on Cerro Armazones, close to Paranal, ESO is building the 39-metre European Extremely Large Telescope, the E-ELT, which will become “the world’s biggest eye on the sky”.

    ESO VLT at Cerro Paranal in the Atacama Desert, elevation 2,635 m (8,645 ft):
    •ANTU (UT1; The Sun),
    •KUEYEN (UT2; The Moon),
    •MELIPAL (UT3; The Southern Cross), and
    •YEPUN (UT4; Venus, as evening star).
    Credit: J.L. Dauvergne & G. Hüdepohl, atacamaphoto

    ESO La Silla
    ESO/Cerro La Silla, 600 km north of Santiago de Chile at an altitude of 2,400 metres.

    ESO VLT 4 lasers on Yepun


    ESO Vista Telescope
    ESO/Vista Telescope at Cerro Paranal, with an elevation of 2,635 metres (8,645 ft) above sea level.

    ESO NTT
    ESO/NTT at Cerro La Silla, 600 km north of Santiago de Chile at an altitude of 2,400 metres.

    ESO VLT Survey telescope
    VLT Survey Telescope at Cerro Paranal with an elevation of 2,635 metres (8,645 ft) above sea level.

    ALMA Array
    ALMA on the Chajnantor plateau at 5,000 metres.

    ESO E-ELT, to be located at the summit of Cerro Armazones in the Atacama Desert of northern Chile, at an altitude of 3,060 metres (10,040 ft).


    ESO APEX
    APEX Atacama Pathfinder 5,100 meters above sea level, at the Llano de Chajnantor Observatory in the Atacama desert.

    Leiden MASCARA instrument, La Silla, located in the southern Atacama Desert 600 kilometres (370 mi) north of Santiago de Chile at an altitude of 2,400 metres (7,900 ft)

    Leiden MASCARA cabinet at ESO Cerro la Silla located in the southern Atacama Desert 600 kilometres (370 mi) north of Santiago de Chile at an altitude of 2,400 metres (7,900 ft)

    ESO Next Generation Transit Survey at Cerro Paranal, 2,635 metres (8,645 ft) above sea level

    SPECULOOS four 1m-diameter robotic telescopes 2016 in the ESO Paranal Observatory, 2,635 metres (8,645 ft) above sea level

    ESO TAROT telescope at Paranal, 2,635 metres (8,645 ft) above sea level

    ESO ExTrA telescopes at Cerro La Silla at an altitude of 2,400 metres

     
  • richardmitnick 9:08 pm on February 15, 2019 Permalink | Reply
    Tags: LIGO Receives New Funding to Search for More Extreme Cosmic Events

    From Caltech: “LIGO Receives New Funding to Search for More Extreme Cosmic Events” 

    Caltech Logo

    From Caltech

    02/14/2019

    Whitney Clavin
    (626) 395-1856
    wclavin@caltech.edu

    Engineers installing Advanced LIGO upgrades.
    Credit: Caltech/MIT/LIGO Lab

    Grants from the U.S., United Kingdom, and Australia will fund next-generation improvements to LIGO.

    The National Science Foundation (NSF) is awarding Caltech and MIT $20.4 million to upgrade the Laser Interferometer Gravitational-wave Observatory (LIGO), an NSF-funded project that made history in 2015 after making the first direct detection of ripples in space and time, called gravitational waves.


    Caltech/MIT Advanced aLigo Hanford, WA, USA installation


    Caltech/MIT Advanced aLigo detector installation Livingston, LA, USA

    Cornell SXS, the Simulating eXtreme Spacetimes (SXS) project


    Gravitational waves. Credit: MPI for Gravitational Physics/W.Benger-Zib

    ESA/eLISA the future of gravitational wave research

    The investment is part of a joint international effort in collaboration with UK Research and Innovation and the Australian Research Council, which are contributing additional funds. While LIGO is scheduled to turn back on this spring, in its third run of the “Advanced LIGO” phase, the new funding will go toward “Advanced LIGO Plus.” Advanced LIGO Plus is expected to commence operations in 2024 and to increase the volume of deep space the observatory can survey by as much as seven times.
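The quoted factor follows from simple geometry: the volume a detector surveys scales as the cube of its range, so a sevenfold volume increase corresponds to roughly a doubling of range, while the expected detection rate scales with the volume itself.

```python
# Survey volume scales as the cube of detector range,
# so a 7x volume gain implies a 7**(1/3) range gain.
volume_factor = 7.0
range_factor = volume_factor ** (1.0 / 3.0)
print(f"range improves by ~{range_factor:.2f}x; event rate by ~{volume_factor:.0f}x")
```

This is why even a modest improvement in strain sensitivity, which sets the range, pays off so strongly in the number of detected events.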

    “I’m extremely excited about the future prospects that the Advanced LIGO Plus upgrade affords gravitational-wave astrophysics,” said Caltech’s David Reitze, executive director of LIGO. “With it we expect to detect gravitational waves from black hole mergers on a daily basis, greatly increasing our understanding of this dark sector of the universe. Gravitational-wave observations of neutron star collisions, now very rare, will become much more frequent, allowing us to more deeply probe the structure of their exotic interiors.”

    Since LIGO’s first detection of gravitational waves from the violent collision of two black holes, it has observed nine additional black hole mergers and one collision of two dense, dead stars called neutron stars. The neutron star merger gave off not just gravitational waves but light waves, detected by dozens of telescopes in space and on the ground. The observations confirmed that heavy elements in our universe, such as platinum and gold, are created in neutron star smashups like this one.

    “This award ensures that NSF’s LIGO, which made the first historic detection of gravitational waves in 2015, will continue to lead in gravitational-wave science for the next decade,” said Anne Kinney, assistant director for NSF’s Mathematical and Physical Sciences Directorate, in a statement. “With improvements to the detectors—which include techniques from quantum mechanics that refine laser light and new mirror coating technology—the twin LIGO observatories will significantly increase the number and strength of their detections. Advanced LIGO Plus will reveal gravity at its strongest and matter at its densest in some of the most extreme environments in the cosmos. These detections may reveal secrets from inside supernovae and teach us about extreme physics from the first seconds after the universe’s birth.”

    Michael Zucker, the Advanced LIGO Plus leader and co-principal investigator, and a scientist at the LIGO Laboratory, operated by Caltech and MIT, said, “I’m thrilled that NSF, UK Research and Innovation, and the Australian Research Council are joining forces to make this key investment possible. Advanced LIGO has altered the course of astrophysics with 11 confirmed gravitational-wave events over the last three years. Advanced LIGO Plus can expand LIGO’s horizons enough to capture this many events each week, and it will enable powerful new probes of extreme nuclear matter as well as Albert Einstein’s general theory of relativity.”

    LIGO is funded by NSF and operated by Caltech and MIT, which conceived of LIGO and led the Initial and Advanced LIGO projects. Financial support for the Advanced LIGO project was led by the NSF, with Germany (Max Planck Society), the U.K. (Science and Technology Facilities Council), and Australia (Australian Research Council-OzGrav) making significant commitments and contributions to the project.

    More than 1,200 scientists from around the world participate in the effort through the LIGO Scientific Collaboration, which includes the GEO Collaboration. A list of additional partners is available at https://my.ligo.org/census.php. LIGO partners with the European Virgo gravitational-wave detector and its collaboration, consisting of more than 300 physicists and engineers belonging to 28 different European research groups.

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

    The California Institute of Technology (commonly referred to as Caltech) is a private research university located in Pasadena, California, United States. Caltech has six academic divisions with strong emphases on science and engineering. Its 124-acre (50 ha) primary campus is located approximately 11 mi (18 km) northeast of downtown Los Angeles. “The mission of the California Institute of Technology is to expand human knowledge and benefit society through research integrated with education. We investigate the most challenging, fundamental problems in science and technology in a singularly collegial, interdisciplinary atmosphere, while educating outstanding students to become creative members of society.”

    Caltech campus



     