Tagged: NYT

  • richardmitnick 11:15 am on March 18, 2019 Permalink | Reply
    Tags: "Space Is Very Big. Some of Its New Explorers Will Be Tiny", , , , , NASA MarCO cubesats, , NYT   

    From The New York Times: “Space Is Very Big. Some of Its New Explorers Will Be Tiny” 


    March 18, 2019
    Shannon Stirone

    The success of NASA’s MarCO mission means that so-called cubesats likely will travel to distant reaches of our solar system.

    NASA JPL MarCO cubesat replica

    Last year, two satellites the size of cereal boxes sped toward Mars as though they were on an invisible track in space. Officially called MarCO A and MarCO B, engineers at NASA had nicknamed them Wall-E and EVE, after the cartoon robots from the Pixar movie.

    They were just as endearing and vulnerable as their namesakes. The satellites, known as cubesats, were sent to watch over NASA’s larger InSight spacecraft as it attempted a perilous landing on the surface of Mars at the end of November.

    NASA/Mars InSight Lander

    Constellations of small satellites like the MarCOs now orbit Earth, used by scientists, private companies, high school students and even governments seeking low-budget eyes in the skies. But never before had a cubesat traveled 90 million miles into space.

    On Nov. 26, as the InSight lander touched down, its status was swiftly relayed back to Earth by the two trailing cubesats. The operation was a success, and the performance of the MarCO satellites may change the way missions operate, enabling cubesats to become deep space travelers in their own right.

    NASA engineers weren’t sure what to expect when the MarCO mission launched last May. “I think it’s opened up so many doors and kind of shattered expectations,” said Anne Marinan, a systems engineer at the Jet Propulsion Laboratory in Pasadena, Calif. “The fact that we actually got as far as we did with both satellites working was huge.”

    About a month after dropping InSight onto Mars, NASA lost contact with the MarCOs. The agency may attempt to wake them up someday, but for now Wall-E and EVE are silently roaming the solar system, proof of a new space exploration technology that almost never got to the launchpad.

    Uncanceling the cubesat program

    The MarCO mission was canceled repeatedly. After all, the primary goal of NASA’s InSight mission was to land a stationary spacecraft on Mars and listen for marsquakes, giving scientists an improved picture of the red planet’s internal makeup.

    And multiple spacecraft orbiting Mars already relay information from its surface back to Earth. The cubesats wouldn’t play a direct role in InSight’s success or failure, so it was a challenge to persuade NASA to support a nonessential program using unproven technology.

    The MarCO team fought hard, prevailing at last with the argument that at a cost of only $18 million, the idea was worth taking a chance on. If these two tiny satellites worked well, it would not only mean that similar spacecraft could support big planetary missions in the future, but also that cubesats might carry instruments of their own.

    Proving the technology’s reach could stretch NASA’s funding, the engineers said, while creating opportunities for wider exploration of the solar system.

    As InSight safely touched down on Mars, the MarCOs were zipping past the planet, collecting readings from the landing and relaying them home more swiftly than the satellites currently orbiting Mars could.

    “We had some astonishing statistics,” said John Baker, manager of the SmallSat program at J.P.L. “We ended up getting 97 percent of all the InSight data back. And that’s because we had two small spacecraft at exactly the right position over the planet to receive the signals.”

    A picture taken by the InSight lander on Mars’s surface in December. Credit NASA, via Associated Press

    Mars, seen by the MarCO B cubesat about 4,700 miles from the planet in November. Credit Agence France-Presse — Getty Images

    From left, John Baker, Anne Marinan and Andrew Klesh, engineers who led the MarCO mission at J.P.L. Credit Rozette Rago for The New York Times.

    Having custom cubesats overhead meant that NASA did not need to use other Martian satellites or worry about their alignment at the time of landing. If future missions tow along their own MarCOs, teams back on Earth may always know how their spacecraft are doing.

    The creativity of their design contributed to the cubesats’ success. Before they began constructing the MarCOs, the team made 3D models and used yarn to plan how best to run the guts and wiring inside. The MacGyver-like improvisation resulted in part from the program’s low budget.

    The cubesats run on solar power, and their propellant is fire extinguisher fluid. Lining the front of both spacecraft are eight pen-width nozzles that spray cold gas. The cameras onboard are off-the-shelf, and the radio is similar to that in an iPhone.

    But it wasn’t all easy. On their six-month journey to Mars, both cubesats occasionally lost contact with Earth. A couple of months after launch, MarCO B sprang a fuel leak and started spinning out of control. The team thought they’d lost it.

    “Management is slowly encroaching upon the room,” said Andrew Klesh, MarCO’s chief engineer, describing the scene. “We started to look at all the data. We broke apart the problem, and within about 24 hours we had MarCO B back under control.”

    Just a day before landing, MarCO B stopped communicating with Earth again. The cubesat came back online just in time. The InSight probe moved into the Martian landing phase that NASA officials know as “seven minutes of terror,” and both spacecraft spoke to Earth the entire time.

    The future is getting smaller

    NASA JPL Mission Control. Rozette Rago for The New York Times

    While inexpensive cubesats like the MarCOs may serve as real-time communication relays for future deep-space missions, NASA has more adventurous goals in mind, some of which were hinted at in last week’s budget proposals by the Trump administration.

    “When we have big spacecraft, you don’t want to necessarily take it into a very risky situation,” said Mr. Baker. “But you can take an inexpensive probe and send it down to search or to get up close to something and examine it.”

    Mr. Baker and others at J.P.L. are currently working on planetary cubesat missions. One proposal, nicknamed Cupid’s Arrow, envisions using the spacecraft to study the opaque atmosphere of Venus.

    In other proposals, the next iteration of interplanetary cubesats would be scouts deployed by larger spacecraft studying worlds that could be hospitable to life. They could be sent into the plumes of Enceladus, Saturn’s icy moon, which ejects water vapor into space. Or cubesats could descend toward the surface of Europa, the ocean moon of Jupiter.

    “These spacecraft will allow us to act as the Star Trek probes to go down to the surface of challenging worlds where we might not be able to take the risk of a much larger mission,” said Dr. Klesh.

    When NASA’s next-generation rocket, the Space Launch System, heads for its first practice orbit around the moon (a launch that is facing delays), it will carry 13 cubesats, some as tests of technology and others as science experiments.

    NASA Space Launch System depiction

    One cubesat, for example, will be tasked with mapping sources of water on the moon for future human exploration. Another, called NEA Scout, is being designed by Dr. Marinan to monitor nearby asteroids that could pose potential hazards to our planet.

    Private companies are working on shrinking scientific instruments to be placed aboard the next generation of Earth-orbiting satellites. And as instruments become smaller, the options for singular scientific missions in deep space become greater, as does the potential for whole fleets of MarCO-like satellites.

    Toughening tiny travelers

    But much work remains before more cubesats can travel beyond the moon. The challenges that come with operating full-size planetary missions apply to small satellites, too.

    If you want to go to the Jovian system, you need heavy radiation shielding. If you want to go to Saturn, you need more efficient solar panels and ways to keep the tiny spacecraft warm.

    “We think we can actually send a small spacecraft all the way to Jupiter,” said Mr. Baker. “The problem is, I have to come up with a way of automating the onboard spacecraft so that it can fly itself to Jupiter or you only have to talk to it once a month. Or we create a way for it to only radio home when it needs help.”

    These are the kinds of engineering challenges the MarCO team worked to overcome with the journey to Mars.

    “It’s really opened a door of possibilities now that we have shown that this has actually worked,” said Dr. Marinan. “It’s not an impossible concept anymore.”

    The engineers even managed to get around one of the trickier issues: how to collect data from, and talk to, the cubesats. Typically, when a spacecraft calls home, it will spend several hours using NASA’s Deep Space Network, the very expensive phone system for calls beyond the moon.

    NASA Deep Space Network dishes: Madrid, Spain; Goldstone, CA, USA; and Canberra, Australia

    But these long-distance conversations weren’t an option for the MarCOs. So the team at J.P.L. created new ways of monitoring the spacecraft that allowed them to collect in a one-hour period the data that would usually take eight hours.

    “MarCO is a herald of new things to come,” said Dr. Klesh. “Not necessarily better things, but different, and a new way of space exploration that will complement all the larger missions that we do.”

    As it passed Mars, MarCO B returned the first photo ever taken from a cubesat in deep space. It revealed the copper-colored entirety of Mars in the dark of space, and a small section of the spacecraft’s antenna.

    The angle of the photo was intentional: not only to show where we’ve been, but to hint at where these tiny wanderers could go next.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 9:29 am on March 17, 2019 Permalink | Reply
    Tags: "Ebola Epidemic in Congo Could Last Another Year C.D.C. Director Warns", , , NYT   

    From The New York Times: “Ebola Epidemic in Congo Could Last Another Year, C.D.C. Director Warns” 


    March 16, 2019
    Denise Grady

    A health care worker washing protective gear in Beni, Democratic Republic of Congo, in December. Credit Goran Tomasevic/Reuters

    The Ebola outbreak in the Democratic Republic of Congo is not under control and could continue for another year, Dr. Robert R. Redfield, director of the Centers for Disease Control and Prevention, said in an interview on Friday.

    “Let’s not underestimate this outbreak,” he said.

    His outlook was less optimistic than that of the director general of the World Health Organization, Dr. Tedros Adhanom Ghebreyesus, who said at a news conference on Thursday that his goal was to end the outbreak in six months.

    Dr. Redfield has just returned from a trip to the region that included a visit on March 9 to a treatment center in Butembo that, just hours before, had come under gunfire from attackers who killed a police officer. It was the second attack on that center.

    Another was attacked on Thursday.

    Also on Thursday, Dr. Redfield related his observations from the region, telling a Senate subcommittee that sometime between May and mid-September, Congo could run out of an Ebola vaccine that is widely believed to have kept the epidemic from becoming even worse.

    More than 87,000 people have received the vaccine, which is being donated by its manufacturer, Merck. The vaccine is not yet licensed and cannot be sold. So far, Merck has donated 133,000 doses.

    In response to Dr. Redfield’s warning that vaccine supplies could become dangerously low, Pamela L. Eisele, a spokeswoman for Merck, said in an email that the company could not comment on the C.D.C.’s projections. She also said that Merck keeps a stockpile of 300,000 doses, which it replenishes by making more vaccine whenever doses are deployed for outbreaks.

    “Our commitment is to keep at least 300,000 doses,” she said.

    This outbreak began in August. There had been 932 cases and 587 deaths as of Wednesday, according to the World Health Organization. The epidemic is the second largest ever, after the one in West Africa from 2014 to 2016, which killed more than 11,000 people.

    The disease has struck two of Congo’s northeastern provinces, North Kivu and Ituri, a conflict zone where people have for decades lived in fear of armed militias, the police and soldiers. The most heavily affected areas are urban, with a surrounding population of about one million.

    The region is close to Rwanda, South Sudan and Uganda, and tens of thousands of people cross those borders every day. Some 20 million have gone back and forth from the outbreak zone since August, Dr. Redfield estimated, and added, “Truly, it’s nearly miraculous that we haven’t seen cross-border spread yet.”

    The C.D.C. has worked with the neighboring countries to set up screening stations to stop the disease from reaching them. Some travelers with suspicious symptoms have been tested, but so far none have been infected.

    Dr. Redfield said that experts from his agency could do more to help stop the disease, but that so far, because of violence in the area, the United States government had not permitted them to work where they are needed most, in the epicenters of the outbreak. Some were deployed in August to Beni, but were quickly relocated because of unrest in the area. C.D.C. employees are working in other parts of Congo, however, to train health workers and help coordinate the response.

    The State Department decides whether it is safe for government employees to work in other countries.

    “We’re ready to deploy as soon as they tell us it’s time,” Dr. Redfield said.

    He noted that health workers from the World Health Organization, Doctors Without Borders, Alima and other aid groups had been working nonstop in the region for more than seven months. Fatigue was setting in, he said, and workers needed reinforcements, especially leaders with deep experience in this kind of outbreak.

    Several red flags indicate that the outbreak is not under control, Dr. Redfield said. One is that too many people — about 40 percent — are dying at home and never going to treatment centers. There is a high risk that they have infected family members, health workers and other patients at local clinics they might have gone to for help. The disease is spread by bodily fluids and becomes highly contagious when symptoms start.

    Corpses are very infectious and pose a big risk to relatives who may wash, dress and prepare them for burial.

    To control an outbreak, at least 70 percent of patients need to be isolated and treated safely in isolation units so that they do not infect anyone else, and that percentage needs to be maintained for several months. In the epicenters in Congo now, that figure is only about 58 percent, Dr. Redfield said.

    Another bad sign is that too many new cases are turning up in people who were not known contacts of patients and were not being monitored, meaning they could have infected yet more unknown people.

    Also problematic is that a high percentage of patients, about 25 percent, became infected at local health centers, and about 75 health workers from those centers have also been infected. Rates that high indicate that information about the disease and how to avoid spreading it has not reached those clinics.

    Many patients in the current outbreak, about 30 percent, have been children, and doctors say they think some caught Ebola when they were taken to local clinics for other illnesses.

    In addition, the contact tracing has not always been effective. In some cases, if contacts missed a scheduled appointment to be checked for symptoms, their names were simply dropped from the list, Dr. Redfield said.

    He said one incentive to encourage contacts to cooperate was to offer food if they showed up. But then a decision was made locally to hand out the food at a central location, which defeated the purpose of using it as an incentive.

    He said local workers needed on-the-ground training in person from experts in this kind of epidemiologic work — something the C.D.C. can offer if its employees are given permission to deploy into the hot zones.

    Dr. Redfield also echoed a concern expressed by Dr. Joanne Liu, the president of Doctors Without Borders, that medical teams had not fully gained the trust of the affected communities. Without that connection, people will continue to avoid testing and treatment, and decline help in carrying out safe funerals and burials.

    “How exactly to accomplish that is going to take some time, some thought,” Dr. Redfield said. “I haven’t seen evidence to date that we have an effective partnership with the community.”

    Speaking to the Senate subcommittee, he said: “The community doesn’t trust its own government. And it sure doesn’t trust outsiders.”

    Health workers with the coffin of a child suspected of dying from Ebola in Beni, Democratic Republic of Congo, in December. Credit Goran Tomasevic/Reuters

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 10:52 am on March 9, 2019 Permalink | Reply
    Tags: "Bit by Bit Scientists Gain Ground on AIDS", A a relatively new drug dolutegravir was better than the standard treatment for women about to give birth, A study of the “test and treat” strategy in one million people in South Africa and Zambia — the largest H.I.V. prevention study ever conducted — produced mixed results., Both studies tested monthly injections of cabotegravir and rilpivirine deep into the buttocks., But offering immediate treatment to all did not help as much as had been expected., But other research released this week showed that scientists are making slow but steady progress on the tactics and medicines needed to fight the epidemic especially in Africa, But other studies have suggested that Descovy is more likely to raise cholesterol., Doctors working in poor countries are eager for injections or implants that will release small daily doses of antiretroviral drugs because the devices can be used in secrecy., In another study Descovy a new formulation of the H.I.V. treatment Truvada proved just as effective at suppressing the virus, In poor countries cabotegravir may be especially useful because it does not need to be refrigerated., It was assumed that communities where patients were offered treatment immediately would have by far the lowest rates of new infection. But they did not., Long-lasting contraceptive injections like Depo-Provera are much more popular in Africa than in the United States because many women must conceal birth control from their partners, Many gay or bisexual men would welcome a discreet way to take H.I.V. drugs, Monthly injections of long-acting H.I.V. drugs proved as good as daily pills at suppressing the virus, NYT, Offering treatment without offering PrEP at the same time “is not the way to epidemic control. Frustrating!, Offering widespread home testing plus treatment to the sickest patients did reduce the number of new infections., Only 43 percent of the women who got older efavirenz-based combinations reached that benchmark., People who take Truvada every day as PrEP or pre-exposure prophylaxis are almost 100 percent protected against getting H.I.V. whether from unprotected sex or drug injection., Post-trial surveys found that 98 percent of the subjects preferred injections to pills., Providing injections may be harder than handing out pills but the option may attract patients with H.I.V. who would otherwise stay away., Proving that injectable H.I.V. drugs work is important because many people forget to take their daily pills or cannot keep pills in their homes., The clinical trial involving Descovy showed that it suppressed the virus just as well as Truvada did., The pregnancy trial called Dolphin-2 showed that 74 percent of women who got dolutegravir-based drug cocktails in their third trimester had no H.I.V. in their blood when they gave birth., The shots worked and only a handful of participants dropped out complaining they were too painful., The success of the two injectable-drug studies named Atlas and Flair raised hopes among H.I.V. experts that these shots may eventually be used to protect the uninfected., The trial known as Discover found that Descovy was slightly less likely than Truvada to harm kidneys or bone density., The unnamed “London patient” — the second person apparently cured of H.I.V. — earned all the headlines, Truvada has been very safe for most patients but its high price now about $20000 a year and the red tape needed to help the uninsured pay for it have become major obstacles to ending the AIDS epidemic   

    From The New York Times: “Bit by Bit, Scientists Gain Ground on AIDS” 


    March 8, 2019
    Donald G. McNeil Jr.

    Students from the University of the Witwatersrand in Johannesburg explained how to use a self-testing H.I.V. kit last year. Credit Mujahid Safodien/Agence France-Presse — Getty Images

    The unnamed “London patient” — the second person apparently cured of H.I.V. — earned all the headlines. But other research released this week at the Conference on Retroviruses and Opportunistic Infections showed that scientists are making slow but steady progress on the tactics and medicines needed to fight the epidemic, especially in Africa.

    Monthly injections of long-acting H.I.V. drugs proved as good as daily pills at suppressing the virus, according to two trials involving more than 1,000 patients. In another study, Descovy, a new formulation of the H.I.V. treatment Truvada, proved just as effective at suppressing the virus, and may have fewer — or at least different — side effects.

    A study of the “test and treat” strategy in one million people in South Africa and Zambia — the largest H.I.V. prevention study ever conducted — produced mixed results.

    Offering widespread home testing plus treatment to the sickest patients did reduce the number of new infections. But offering immediate treatment to all did not help as much as had been expected.

    And a study of pregnant women in Uganda and South Africa showed that a relatively new drug, dolutegravir, was better than the standard treatment for women about to give birth.

    The results of those trials were revealed at the C.R.O.I. meeting in Seattle, a scientific conference held each year in the United States. It tends to offer more research and fewer theatrics than the International AIDS Society conferences that move to new cities around the globe every two years.

    Proving that injectable H.I.V. drugs work is important because many people forget to take their daily pills or cannot keep pills in their homes.

    The success of the two injectable-drug studies, named Atlas and Flair, raised hopes among H.I.V. experts that these shots may eventually be used to protect the uninfected. (Trials testing that idea are underway now, but results are not expected for about three years.)

    Doctors working in poor countries are eager for injections or implants that will release small daily doses of antiretroviral drugs because the devices can be used in secrecy. Providing injections may be harder than handing out pills, but the option may attract patients with H.I.V. who would otherwise stay away.

    A study of pregnant women in Uganda and South Africa found dolutegravir to be more effective in fighting H.I.V. than the standard treatment. Credit Baz Ratner/Reuters

    African women often say they cannot be caught with pills, microbicides, vaginal rings or other anti-H.I.V. measures because they fear that their husbands, lovers, family members or neighbors will mistakenly assume they are infected.

    Long-lasting contraceptive injections like Depo-Provera are much more popular in Africa than in the United States because many women must conceal birth control from their partners, who may get angry that they do not want more children.

    Similarly, many gay or bisexual men would welcome a discreet way to take H.I.V. drugs because they are hiding from their spouses or families that they have sex with men.

    Both studies tested monthly injections of cabotegravir and rilpivirine deep into the buttocks. The shots worked, and only a handful of participants dropped out complaining they were too painful. Post-trial surveys found that 98 percent of the subjects preferred injections to pills.

    In poor countries, cabotegravir may be especially useful because it does not need to be refrigerated.

    The clinical trial involving Descovy, a new pill from Gilead Sciences containing a form of tenofovir known as TAF — instead of TDF, the form in Truvada — showed that it suppressed the virus just as well as Truvada did.

    People who take Truvada every day as PrEP, or pre-exposure prophylaxis, are almost 100 percent protected against getting H.I.V., whether from unprotected sex or drug injection.

    The trial, known as Discover, found that Descovy was slightly less likely than Truvada to harm kidneys or bone density, but other studies have suggested that Descovy is more likely to raise cholesterol.

    Gilead said it will soon ask the Food and Drug Administration to let it market Descovy as PrEP. Some AIDS activists worry that people at risk will be urged to switch to Descovy just as low-cost generic versions of Truvada become available.

    Truvada has been very safe for most patients, but its high price — now about $20,000 a year — and the red tape needed to help the uninsured pay for it have become major obstacles to ending the AIDS epidemic in the United States.

    Gilead has already sold $33 billion worth of tenofovir; it is now shifting its new H.I.V. drug cocktails to TAF, which will remain patented — and, presumably, expensive — for many more years.

    The trial in 21 neighborhoods in Zambia and South Africa — a region where H.I.V. infection rates are the world’s highest — was designed to see whether infection rates could be dramatically cut if teams of counselors went door-to-door, testing anyone who agreed and offering pills to anyone testing positive. Counselors also offered advice, condoms, circumcisions, tuberculosis tests and other incentives to lower infection rates during the trial, which is known as PopArt and ran from 2013 to 2018.

    It was assumed that communities where patients were offered treatment immediately would have by far the lowest rates of new infection. But they did not, even though tests suggested that more people there were taking their pills; further analysis of that quandary will be done, the investigators said.

    “PopArt is a head-scratcher,” Mitchell J. Warren, executive director of A.V.A.C., an advocacy group for H.I.V. prevention, said in an email.

    Combining the results of the two main subgroups — those offered pills immediately and those offered pills only when they showed early signs of illness — showed that these strategies lowered new infections by about 20 percent.

    Therefore, Mr. Warren said, offering treatment without offering PrEP at the same time “is not the way to epidemic control. Frustrating!”

    The pregnancy trial, called Dolphin-2, showed that 74 percent of women who got dolutegravir-based drug cocktails in their third trimester had no H.I.V. in their blood when they gave birth. Only 43 percent of the women who got older efavirenz-based combinations reached that benchmark.

    That was a “highly significant” difference in how fast each drug drove the virus out of the blood, said Dr. Saye Khoo, an H.I.V. specialist at the University of Liverpool who led the trial.

    That is important because many women in Africa find out they are infected late in pregnancy, and it can be hard to prevent them from infecting their babies.

    Some babies in each test group died, and a few were born infected anyway. The investigators believed the deaths were from unrelated causes like sepsis or pneumonia, and that the rare H.I.V. infections occurred early in the pregnancies, before either drug regimen could kick in.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 1:41 pm on March 6, 2019 Permalink | Reply
    Tags: "Another Obstacle for Women in Science: Men Get More Federal Grant Money", Among the top 50 institutions funded by the N.I.H. the researchers found that women received median awards of $94000 compared with $135000 for men, At the Big Ten schools including Penn State the University of Michigan and Northwestern female principal investigators received a median grant of $66000 compared with $148000 for men, “It means women are working harder with less money to get to the same level as men” said Dr. Woodruff a researcher at the Northwestern University Feinberg School of Medicine, “That first grant is monumentally important and determines your trajectory” said Carolina Abdala a head and neck specialist at the University of Southern California who won her first N.I.H. grant , But when it comes to the size of those awards men are often rewarded with bigger grants than women according to a study published Tuesday in JAMA, For ambitious young scientists trying to start their own research labs winning a prestigious grant from the National Institutes of Health can be career making, Having less money put women at a disadvantage making it harder to hire graduate students and buy lab equipment, Identifying the problem is a step toward solving the problem, NYT, Only one in five applicants for an N.I.H. grant lands one, Over all the median N.I.H. award for female researchers at universities was roughly $126600 compared with $167700 for men., The disparity was even greater at the nation’s top universities, The N.I.H. did not dispute the study’s findings and said it was working to address the funding disparities and more broadly the gender inequities that bedevil women in the fields, The study analyzed 54000 grants awarded from 2006 to 2017 and used key benchmarks to ensure recipients were at similar points in their careers, The study by researchers at Northwestern University confirms longstanding disparities between men and women in the fields of science, There was one exception to the pattern- the study found that women who were applying for individual research grants received nearly $16000 more than male applicants 11% of grants,   

    From The New York Times: Women in STEM - "Another Obstacle for Women in Science: Men Get More Federal Grant Money"


    March 5, 2019
    Andrew Jacobs

    A scientist working with radioactive material in the isotope laboratory of the National Institutes of Health, circa 1950. Credit National Institutes of Health.

    For ambitious young scientists trying to start their own research labs, winning a prestigious grant from the National Institutes of Health can be career making.

    But when it comes to the size of those awards, men are often rewarded with bigger grants than women, according to a study published Tuesday in JAMA, which found that men who were the principal investigators on research projects received $41,000 more than women.

    The disparity was even greater at the nation’s top universities. At Yale, women received $68,800 less than men, and at Brown, the median disparity was $76,500. Over all, the median N.I.H. award for female researchers was roughly $126,600, compared with $167,700 for men.

    The study, by researchers at Northwestern University, confirms longstanding disparities between men and women in the field of science. In recent years, a cavalcade of studies has documented biases that favor male researchers in hiring, pay, prize money, speaking invitations and even the effusiveness displayed in letters of recommendation.

    “It’s disappointing, but identifying the problem is a step toward solving the problem,” said Cori Bargmann, a neuroscientist who runs the $3 billion science arm of the Chan Zuckerberg Initiative, a philanthropic organization, and who was not involved in the study.

    In a statement, the N.I.H. did not dispute the study’s findings and said it was working to address the funding disparities and, more broadly, the gender inequities that bedevil women in the field.

    “We have and continue to support efforts to understand the barriers and factors faced by women scientists and to implement interventions to overcome them,” it said.

    Only one in five applicants for an N.I.H. grant lands one, an achievement that can be crucial in determining whether a young researcher succeeds or drops out of the field.

    “That first grant is monumentally important and determines your trajectory,” said Carolina Abdala, a head and neck specialist at the University of Southern California, who won her first N.I.H. grant in 1998. “It can help get you on the tenure track and it gets you into that club of successful scientists who can procure their own funding, which makes it easier to change jobs.”

    But the size of the grant can also be important in determining the scale and ambition of a junior researcher’s first lab. Teresa K. Woodruff, a co-author of the JAMA study, said that having less money put women at a disadvantage, making it harder to hire graduate students and buy lab equipment.

    “It means women are working harder with less money to get to the same level as men,” said Dr. Woodruff, a researcher at the Northwestern University Feinberg School of Medicine. “If we had the same footing, the engine of science would move a little faster toward the promise of basic science and medical cures.”

    The study analyzed 54,000 grants awarded from 2006 to 2017 and used key benchmarks to ensure recipients were at similar points in their careers. Among the top 50 institutions funded by the N.I.H., the researchers found that women received median awards of $94,000 compared with $135,000 for men. At the Big Ten schools, including Penn State, the University of Michigan and Northwestern, female principal investigators received a median grant of $66,000 compared with $148,000 for men.

    There was one exception to the pattern; in a curious twist, the study found that women who were applying for individual research grants received nearly $16,000 more than male applicants. Dr. Woodruff noted that such grants made up only 11 percent of N.I.H. grant money, but said more research was needed into funding disparities.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 7:01 pm on February 25, 2019 Permalink | Reply
    Tags: A disturbance in the Force, Adam Riess [High-Z Supernova Search Team] Saul Perlmutter [Supernova Cosmology Project] and Brian Schmidt [High-Z Supernova Search Team] shared the Nobel Prize in physics awarded in 2011 for proving th, As space expands it carries galaxies away from each other like the raisins in a rising cake. The farther apart two galaxies are the faster they will fly away from each other. The Hubble constant simpl, Axions? Phantom energy? Astrophysicists scramble to patch a hole in the universe- rewriting cosmic history in the process, Dark energy might be getting stronger and denser leading to a future in which atoms are ripped apart and time ends, NYT, The Hubble constant- named after Edwin Hubble the Mount Wilson astronomer who in 1929 discovered that the universe is expanding, Thus far there is no evidence for most of these ideas, Under the influence of dark energy the cosmos is now doubling in size every 10 billion years

    From The New York Times: “Have Dark Forces Been Messing With the Cosmos?” 


    Feb. 25, 2019
    Dennis Overbye

    Brian Stauffer

    Axions? Phantom energy? Astrophysicists scramble to patch a hole in the universe, rewriting cosmic history in the process.

    There was, you might say, a disturbance in the Force.

    Long, long ago, when the universe was only about 100,000 years old — a buzzing, expanding mass of particles and radiation — a strange new energy field switched on. That energy suffused space with a kind of cosmic antigravity, delivering a not-so-gentle boost to the expansion of the universe.

    Then, after another 100,000 years or so, the new field simply winked off, leaving no trace other than a speeded-up universe.

    So goes the strange-sounding story being promulgated by a handful of astronomers from Johns Hopkins University. In a bold and speculative leap into the past, the team has posited the existence of this field to explain an astronomical puzzle: the universe seems to be expanding faster than it should be.

    The cosmos is expanding only about 9 percent more quickly than theory prescribes. But this slight-sounding discrepancy has intrigued astronomers, who think it might be revealing something new about the universe.

    And so, for the last couple of years, they have been gathering in workshops and conferences to search for a mistake or loophole in their previous measurements and calculations, so far to no avail.

    “If we’re going to be serious about cosmology, this is the kind of thing we have to be able to take seriously,” said Lisa Randall, a Harvard theorist who has been pondering the problem.

    At a recent meeting in Chicago, Josh Frieman, a theorist at the Fermi National Accelerator Laboratory in Batavia, Ill., asked: “At what point do we claim the discovery of new physics?”

    Now ideas are popping up. Some researchers say the problem could be solved by inferring the existence of previously unknown subatomic particles. Others, such as the Johns Hopkins group, are invoking new kinds of energy fields.

    Adding to the confusion, there already is a force field — called dark energy — making the universe expand faster. And a new, controversial report suggests that this dark energy might be getting stronger and denser, leading to a future in which atoms are ripped apart and time ends.

    Dark Energy Survey

    Dark Energy Camera [DECam], built at FNAL

    NOAO/CTIO Victor M. Blanco 4m Telescope at Cerro Tololo, Chile, which houses DECam at an altitude of 7,200 feet

    Thus far, there is no evidence for most of these ideas. If any turn out to be right, scientists may have to rewrite the story of the origin, history and, perhaps, fate of the universe.

    Or it could all be a mistake. Astronomers have rigorous methods to estimate the effects of statistical noise and other random errors on their results; not so for the unexamined biases called systematic errors.

    As Wendy L. Freedman, of the University of Chicago, said at the Chicago meeting, “The unknown systematic is what gets you in the end.”

    Edwin Hubble in 1949, two decades after he discovered that the universe is expanding. Credit Boyer/Roger Viollet, via Getty Images

    Hubble trouble

    Generations of great astronomers have come to grief trying to measure the universe. At issue is a number called the Hubble constant, named after Edwin Hubble, the Mount Wilson astronomer who in 1929 discovered that the universe is expanding.

    Edwin Hubble looking through a 100-inch Hooker telescope at Mount Wilson in Southern California

    Mt. Wilson 100-inch Hooker Telescope, perched atop the San Gabriel Mountains at Mount Wilson, outside Los Angeles, California, USA; altitude 1,742 m (5,715 ft)

    As space expands, it carries galaxies away from each other like the raisins in a rising cake. The farther apart two galaxies are, the faster they will fly away from each other. The Hubble constant simply says by how much.

    But to calibrate the Hubble constant, astronomers depend on so-called standard candles: objects, such as supernova explosions and certain variable stars, whose distances can be estimated by luminosity or some other feature. This is where the arguing begins.

    Standard candles used to measure the age and distance of the universe. Credit NASA

    Until a few decades ago, astronomers could not agree on the value of the Hubble constant within a factor of two: either 50 or 100 kilometers per second per megaparsec. (A megaparsec is 3.26 million light years.)

    But in 2001, a team using the Hubble Space Telescope, and led by Dr. Freedman, reported a value of 72. For every megaparsec farther away from us that a galaxy is, it is moving 72 kilometers per second faster.
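
    In equation form, the relation described above is Hubble’s law (the standard textbook form, implied but not written out in the article):

        v = H_0 × d

    With H_0 = 72 kilometers per second per megaparsec, a galaxy 100 megaparsecs away recedes at about 72 × 100 = 7,200 kilometers per second.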

    NASA/ESA Hubble Telescope

    More recent efforts by Adam G. Riess, of Johns Hopkins and the Space Telescope Science Institute, and others have obtained similar numbers, and astronomers now say they have narrowed the uncertainty in the Hubble constant to just 2.4 percent.

    But new precision has brought new trouble. These results are so good that they now disagree with results from the European Planck spacecraft, which predict a Hubble constant of 67.

    ESA/Planck 2009 to 2013

    Workers with the European Planck spacecraft at the European Space Agency spaceport in Kourou, French Guiana, in 2009. Credit ESA/S. Corvaja

    The discrepancy — 9 percent — sounds fatal but may not be, astronomers contend, because Planck and human astronomers do very different kinds of observations.
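
    As a quick check of that figure (my arithmetic, using a local value near 73, in line with the post-2001 measurements above, against Planck’s 67):

        (73 − 67) / 67 ≈ 0.09, or about 9 percent.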

    Planck is considered the gold standard of cosmology. It spent four years studying the cosmic bath of microwaves [CMB] left over from the end of the Big Bang, when the universe was just 380,000 years old.

    CMB per ESA/Planck

    But it did not measure the Hubble constant directly. Rather, the Planck group derived the value of the constant, and other cosmic parameters, from a mathematical model largely based on those microwaves.

    In short, Planck’s Hubble constant is based on a cosmic baby picture. In contrast, the classical astronomical value is derived from what cosmologists modestly call “local measurements,” a few billion light-years deep into a middle-aged universe.

    What if that baby picture left out or obscured some important feature of the universe?

    ‘Cosmological Whac-a-Mole’

    And so cosmologists are off to the game that Lloyd Knox, an astrophysicist from the University of California, Davis, called “cosmological Whac-a-Mole” at the recent Chicago meeting: attempting to fix the model of the early universe, to make it expand a little faster without breaking what the model already does well.

    One approach, some astrophysicists suggest, is to add more species of lightweight subatomic particles, such as the ghostlike neutrinos, to the early universe. (Physicists already recognize three kinds of neutrinos, and argue whether there is evidence for a fourth variety.) These would give the universe more room to stash energy, in the same way that more drawers in your dresser allow you to own more pairs of socks. Thus invigorated, the universe would expand faster, according to the Big Bang math, and hopefully not mess up the microwave baby picture.

    A more drastic approach, from the Johns Hopkins group, invokes fields of exotic anti-gravitational energy. The idea exploits an aspect of string theory, the putative but unproven “theory of everything” that posits that the elementary constituents of reality are very tiny, wriggling strings.

    String theory suggests that space could be laced with exotic energy fields associated with lightweight particles or forces yet undiscovered. Those fields, collectively called quintessence, could act in opposition to gravity, and could change over time — popping up, decaying or altering their effect, switching from repulsive to attractive.

    The team focused in particular on the effects of fields associated with hypothetical particles called axions. Had one such field arisen when the universe was about 100,000 years old, it could have produced just the right amount of energy to fix the Hubble discrepancy, the team reported in a paper late last year. They refer to this theoretical force as “early dark energy.”

    “I was surprised how it came out,” said Marc Kamionkowski, a Johns Hopkins cosmologist who was part of the study. “This works.”

    The jury is still out. Dr. Riess said that the idea seems to work, which is not to say that he agrees with it, or that it is right. Nature, manifest in future observations, will have the final say.

    Dr. Knox called the Johns Hopkins paper “an existence proof” that the Hubble problem could be solved. “I think that’s new,” he said.

    Dr. Randall, however, has taken issue with aspects of the Johns Hopkins calculations. She and a trio of Harvard postdocs are working on a similar idea that she says works as well and is mathematically consistent. “It’s novel and very cool,” Dr. Randall said.

    So far, the smart money is still on cosmic confusion. Michael Turner, a veteran cosmologist at the University of Chicago and the organizer of a recent airing of the Hubble tensions, said, “Indeed, all of this is going over all of our heads. We are confused and hoping that the confusion will lead to something good!”

    Doomsday? Nah, never mind

    Early dark energy appeals to some cosmologists because it hints at a link to, or between, two mysterious episodes in the history of the universe. As Dr. Riess said, “This is not the first time the universe has been expanding too fast.”

    The first episode occurred when the universe was less than a trillionth of a trillionth of a second old. At that moment, cosmologists surmise, a violent ballooning propelled the Big Bang; in a fraction of a trillionth of a second, this event — named “inflation” by the cosmologist Alan Guth, of M.I.T. — smoothed and flattened the initial chaos into the more orderly universe observed today. Nobody knows what drove inflation.

    The second episode is unfolding today: cosmic expansion is speeding up. But why? The issue came to light in 1998, when two competing teams of astronomers asked whether the collective gravity of the galaxies might be slowing the expansion enough to one day drag everything together into a Big Crunch.

    To great surprise, they discovered the opposite: the expansion was accelerating under the influence of an anti-gravitational force later called dark energy. The two teams won a Nobel Prize.

    Studies of Universe’s Expansion Win Physics Nobel

    By DENNIS OVERBYE OCT. 4, 2011

    From left, Adam Riess [High-Z Supernova Search Team], Saul Perlmutter [Supernova Cosmology Project] and Brian Schmidt [High-Z Supernova Search Team] shared the Nobel Prize in physics awarded Tuesday. Credit Johns Hopkins University; University of California at Berkeley; Australian National University

    Dark energy comprises 70 percent of the mass-energy of the universe. And, spookily, it behaves very much like a fudge factor known as the cosmological constant, a cosmic repulsive force that Einstein inserted in his equations a century ago thinking it would keep the universe from collapsing under its own weight. He later abandoned the idea, perhaps too soon.

    Under the influence of dark energy, the cosmos is now doubling in size every 10 billion years — to what end, nobody knows.
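
    That rate is consistent with the expansion measurements discussed earlier; as a back-of-envelope check (my arithmetic, not from the article), exponential growth with a doubling time of 10 billion years corresponds to

        H = ln 2 / (10 billion years) ≈ 2.2 × 10⁻¹⁸ per second ≈ 68 kilometers per second per megaparsec,

    squarely in the range of the 67-to-73 values quoted above.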

    Early dark energy, the force invoked by the Johns Hopkins group, might represent a third episode of antigravity taking over the universe and speeding it up. Perhaps all three episodes are different manifestations of the same underlying tendency of the universe to go rogue and speed up occasionally. In an email, Dr. Riess said, “Maybe the universe does this from time-to-time?”

    If so, it would mean that the current manifestation of dark energy is not Einstein’s constant after all. It might wink off one day. That would relieve astronomers, and everybody else, of an existential nightmare regarding the future of the universe. If dark energy remains constant, everything outside our galaxy eventually will be moving away from us faster than the speed of light, and will no longer be visible. The universe will become lifeless and utterly dark.

    But if dark energy is temporary — if one day it switches off — cosmologists and metaphysicians can all go back to contemplating a sensible tomorrow.

    “An appealing feature of this is that there might be a future for humanity,” said Scott Dodelson, a theorist at Carnegie Mellon who has explored similar scenarios [Physical Review D].

    The phantom cosmos

    But the future is still up for grabs.

    Far from switching off, the dark energy currently in the universe actually has increased over cosmic time, according to a recent report in Nature Astronomy. If this keeps up, the universe could end one day in what astronomers call the Big Rip, with atoms and elementary particles torn asunder — perhaps the ultimate cosmic catastrophe.

    This dire scenario emerges from the work of Guido Risaliti, of the University of Florence in Italy, and Elisabeta Lusso, of Durham University in England. For the last four years, they have plumbed the deep history of the universe, using violent, faraway cataclysms called quasars as distance markers.

    Quasars arise from supermassive black holes at the centers of galaxies; they are the brightest objects in nature, and can be seen clear across the universe. As standard candles, quasars aren’t ideal because their masses vary widely. Nevertheless, the researchers identified some regularities in the emissions from quasars, allowing the history of the cosmos to be traced back nearly 12 billion years. The team found that the rate of cosmic expansion deviated from expectations over that time span.

    One interpretation of the results is that dark energy is not constant after all, but is changing, growing denser and thus stronger over cosmic time. It so happens that this increase in dark energy also would be just enough to resolve the discrepancy in measurements of the Hubble constant.

    The bad news is that, if this model is right, dark energy may be in a particularly virulent and — most physicists say — implausible form called phantom energy. Its existence would imply that things can lose energy by speeding up, for instance. Robert Caldwell, a Dartmouth physicist, has referred to it as “bad news stuff.”

    As the universe expands, the push from phantom energy would grow without bounds, eventually overcoming gravity and tearing apart first Earth, then atoms.

    The Hubble-constant community responded to the new report with caution. “If it holds up, this is a very interesting result,” said Dr. Freedman.

    Astronomers have been trying to take the measure of this dark energy for two decades. Two space missions — the European Space Agency’s Euclid and NASA’s WFIRST — have been designed to study dark energy and hopefully deliver definitive answers in the coming decade. The fate of the universe is at stake.

    ESA/Euclid spacecraft

    NASA/WFIRST

    In the meantime, everything, including phantom energy, is up for consideration, according to Dr. Riess.

    “In a list of possible solutions to the tension via new physics, mentioning weird dark energy like this would seem appropriate,” he wrote in an email. “Heck, at least their dark energy goes in the right direction to solve the tension. It could have gone the other way and made it worse!”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 2:16 pm on February 22, 2019 Permalink | Reply
    Tags: "DNA Gets a New — and Bigger — Genetic Alphabet", DNA is spelled out with four letters or bases. Researchers have now built a system with eight. It may hold clues to the potential for life elsewhere in the universe and could also expand our capacity , Natural DNA is spelled out with four different letters known as bases — A C G and T. Dr. Benner and his colleagues have built DNA with eight bases — four natural and four unnatural. They named the, NYT   

    From The New York Times: “DNA Gets a New — and Bigger — Genetic Alphabet” 


    Feb. 21, 2019
    Carl Zimmer

    DNA is spelled out with four letters, or bases. Researchers have now built a system with eight. It may hold clues to the potential for life elsewhere in the universe and could also expand our capacity to store digital data on Earth.

    Animation by Millie Georgiadis/Indiana University School of Medicine

    In 1985, the chemist Steven A. Benner sat down with some colleagues and a notebook and sketched out a way to expand the alphabet of DNA. He has been trying to make those sketches real ever since.

    On Thursday, Dr. Benner and a team of scientists reported success: in a paper published in Science, they said they have, in effect, doubled the genetic alphabet.

    Natural DNA is spelled out with four different letters known as bases — A, C, G and T. Dr. Benner and his colleagues have built DNA with eight bases — four natural, and four unnatural. They named their new system Hachimoji DNA (hachi is Japanese for eight, moji for letter).

    Crafting the four new bases that don’t exist in nature was a chemical tour de force. They fit neatly into DNA’s double helix, and enzymes can read them as easily as natural bases to make molecules.

    “We can do everything here that is necessary for life,” said Dr. Benner, now a distinguished fellow at the Foundation for Applied Molecular Evolution in Florida.

    Hachimoji DNA could have many applications, including a far more durable way to store digital data that could last for centuries. “This could be huge that way,” said Dr. Nicholas V. Hud, a biochemist at the Georgia Institute of Technology who was not involved in the research.

    It also raises a profound question about the nature of life elsewhere in the universe, offering the possibility that the four-base DNA we are familiar with may not be the only chemistry that could support life.

    The four natural bases of DNA are all anchored to molecular backbones. A pair of backbones can join into a double helix because their bases are attracted to each other. The bases bond to one another through their hydrogen atoms.

    But bases don’t stick together at random. C can only bond to G, and A can only bond to T. These strict rules help ensure that DNA strands don’t clump together into a jumble. No matter what sequence of bases natural DNA contains, it keeps its shape.

    But those four bases are not the only compounds that can attach to DNA’s backbone and link to another base — at least on paper. Dr. Benner and his colleagues thought up a dozen alternatives.

    Working at the Swiss university ETH Zurich at the time, Dr. Benner tried to make some of those imaginary bases real.

    “Of course, the first thing you discover is your design theory is not terribly good,” said Dr. Benner.

    Once Dr. Benner and his colleagues combined real atoms, according to his designs, the artificial bases didn’t work as he had hoped.

    Nevertheless, Dr. Benner’s initial forays impressed other chemists. “His work was a real inspiration for me,” said Floyd E. Romesberg, now of the Scripps Research Institute in San Diego. Reading about Dr. Benner’s early experiments, Dr. Romesberg decided to try to create his own bases.

    Dr. Romesberg chose not to make bases that linked together with hydrogen bonds; instead, he fashioned a pair of oily compounds that repelled water. That chemistry brought his unnatural pair of bases together. “Oil doesn’t like to mix with water, but it does like to mix with oil,” said Dr. Romesberg.

    In the years that followed, Dr. Romesberg and his colleagues fashioned enzymes that could copy DNA made from both natural bases and unnatural, oily ones. In 2014, the scientists engineered bacteria that could make new copies of these hybrid genes.

    In recent years, Dr. Romesberg’s team has begun making unnatural proteins from these unnatural genes. He founded a company, Synthorx, to develop some of these proteins as cancer drugs.

    At the same time, Dr. Benner continued with his own experiments. He and his colleagues succeeded in creating one pair of new bases.

    Like Dr. Romesberg, they found an application for their unnatural DNA. Their six-base DNA became the basis of a new, sensitive test for viruses in blood samples.

    They then went on to create a second pair of new bases. Now with eight bases to play with, the researchers started building DNA molecules with a variety of different sequences. The researchers found that no matter which sequence they created, the molecules still formed the standard double helix.
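
    In code, the pairing rule extends naturally. Here is a minimal sketch in Python, assuming the letter names and pairings reported in the Science paper: the natural pairs A:T and C:G plus the synthetic pairs P:Z and B:S.

        # Complement of an eight-letter (Hachimoji) strand, assuming the
        # pairing rules reported in the Science paper: A:T, C:G, P:Z, B:S.
        PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C",
                 "P": "Z", "Z": "P", "B": "S", "S": "B"}

        def complement(strand):
            """Return the base-by-base complementary strand."""
            return "".join(PAIRS[base] for base in strand)

        print(complement("ATCGPZBS"))  # -> TAGCZPSB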

    Because Hachimoji DNA held onto this shape, it could act like regular DNA: it could store information, and that information could be read to make a molecule.

    For a cell, the first step in making a molecule is to read a gene using special enzymes. They make a copy of the gene in a single-stranded version of DNA, called RNA.

    Depending on the gene, the cell will then do one of two things with that RNA. In some cases, it will use the RNA as a guide to build a protein. But in other cases, the RNA molecule floats off to do a job of its own.
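
    For the familiar four-base case, that copying step can be sketched in a few lines of Python; the template sequence below is invented for illustration.

        # Transcription in miniature: copy a DNA template strand into RNA by
        # pairing each base with its complement, writing U in place of T.
        DNA_TO_RNA = {"A": "U", "T": "A", "C": "G", "G": "C"}

        def transcribe(template):
            return "".join(DNA_TO_RNA[base] for base in template)

        print(transcribe("TACGGT"))  # -> AUGCCA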

    Dr. Benner and his colleagues created a Hachimoji gene for an RNA molecule. They predicted that the RNA molecule would be able to grab a molecule called a fluorophore. Cradled by the RNA molecule, the fluorophore would absorb light and release it as a green flash.

    Andrew Ellington, an evolutionary engineer at the University of Texas, led the effort to find an enzyme that could read Hachimoji DNA. He and his colleagues found a promising one made by a virus, and they tinkered with it until the enzyme could easily read all eight bases.

    They mixed the enzyme in test tubes with the Hachimoji gene. As they had hoped, their test tubes began glowing green.

    “Here you have it from start to finish,” said Dr. Benner. “We can store information, we can transfer it to another molecule and that other molecule has a function — and here it is, glowing.”

    In the future, Hachimoji DNA may store information of a radically different sort. It might someday encode a movie or a spreadsheet.

    Today, movies, spreadsheets and other digital files are typically stored on silicon chips or magnetic tapes. But those kinds of storage have serious shortcomings. For one thing, they can deteriorate in just years.

    DNA, by contrast, can remain intact for centuries. Last year, researchers at Microsoft and the University of Washington managed to encode 35 songs, videos, documents, and other files, totaling 200 megabytes, in a batch of DNA molecules.

    With eight bases instead of four, Hachimoji DNA could potentially encode far more information. “DNA capable of twice as much storage? That’s pretty amazing in my view,” said Dr. Ellington.
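
    One way to see the gain is to count bits: each position in a four-letter alphabet can encode log2(4) = 2 bits, while a position in an eight-letter alphabet encodes log2(8) = 3 bits. The toy calculation below is mine, not the paper’s.

        import math

        # Bits per base for a four-letter vs. an eight-letter alphabet.
        for letters in (4, 8):
            print(f"{letters} letters -> {math.log2(letters):.0f} bits per base")

        # Bases needed to hold 200 megabytes, the size of the Microsoft and
        # University of Washington experiment mentioned above.
        bits = 200 * 8 * 1024**2
        print(f"4-letter DNA: {bits // 2:,} bases")
        print(f"8-letter DNA: {bits // 3:,} bases")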

    Beyond our current need for storage, Hachimoji DNA also offers some clues about life itself. Scientists have long wondered if our DNA evolved only four bases because they’re the only ones that can work in genes. Could life have taken a different path?

    “Steve’s work goes a long way to say that it could have — it just didn’t,” said Dr. Romesberg.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 9:05 pm on February 17, 2019 Permalink | Reply
    Tags: "The Secret History of Women in Coding", , , NYT   

    From The New York Times: Women In STEM-“The Secret History of Women in Coding” 

    New York Times

    From The New York Times

    Feb. 13, 2019
    Clive Thompson

    Computer programming once had much better gender balance than it does today. What went wrong?

    Mary Allen Wilkes with a LINC at M.I.T., where she was a programmer. Credit Joseph C. Towler, Jr.

    As a teenager in Maryland in the 1950s, Mary Allen Wilkes had no plans to become a software pioneer — she dreamed of being a litigator. One day in junior high in 1950, though, her geography teacher surprised her with a comment: “Mary Allen, when you grow up, you should be a computer programmer!” Wilkes had no idea what a programmer was; she wasn’t even sure what a computer was. Relatively few Americans were. The first digital computers had been built barely a decade earlier at universities and in government labs.

    By the time she was graduating from Wellesley College in 1959, she knew her legal ambitions were out of reach. Her mentors all told her the same thing: Don’t even bother applying to law school. “They said: ‘Don’t do it. You may not get in. Or if you get in, you may not get out. And if you get out, you won’t get a job,’ ” she recalls. If she lucked out and got hired, it wouldn’t be to argue cases in front of a judge. More likely, she would be a law librarian, a legal secretary, someone processing trusts and estates.

    But Wilkes remembered her junior high school teacher’s suggestion. In college, she heard that computers were supposed to be the key to the future. She knew that the Massachusetts Institute of Technology had a few of them.


    So on the day of her graduation, she had her parents drive her over to M.I.T. and marched into the school’s employment office. “Do you have any jobs for computer programmers?” she asked. They did, and they hired her.

    It might seem strange now that they were happy to take on a random applicant with absolutely no experience in computer programming. But in those days, almost nobody had any experience writing code. The discipline did not yet really exist; there were vanishingly few college courses in it, and no majors. (Stanford, for example, didn’t create a computer-science department until 1965.) So instead, institutions that needed programmers just used aptitude tests to evaluate applicants’ ability to think logically. Wilkes happened to have some intellectual preparation: As a philosophy major, she had studied symbolic logic, which can involve creating arguments and inferences by stringing together and/or statements in a way that resembles coding.
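
    For a flavor of what that preparation exercised, here is a toy example (mine, not from the era) that strings together and/or statements and enumerates a truth table, in Python:

        from itertools import product

        # Enumerate the truth table for the formula (A and B) or not C.
        for a, b, c in product([False, True], repeat=3):
            print(a, b, c, "->", (a and b) or (not c))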

    Wilkes quickly became a programming whiz. She first worked on the IBM 704, which required her to write in an abstruse “assembly language.”

    An IBM 704 computer, with IBM 727 tape drives and IBM 780 CRT display. (Image courtesy of LLNL.)

    (A typical command might be something like “LXA A, K,” telling the computer to take the number in Location A of its memory and load it into the “Index Register” K.) Even getting the program into the IBM 704 was a laborious affair. There were no keyboards or screens; Wilkes had to write a program on paper and give it to a typist, who translated each command into holes on a punch card. She would carry boxes of commands to an “operator,” who then fed a stack of such cards into a reader. The computer executed the program and produced results, typed out on a printer.

    Often enough, Wilkes’s code didn’t produce the result she wanted. So she had to pore over her lines of code, trying to deduce her mistake, stepping through each line in her head and envisioning how the machine would execute it — turning her mind, as it were, into the computer. Then she would rewrite the program. The capacity of most computers at the time was quite limited; the IBM 704 could handle only about 4,000 “words” of code in its memory. A good programmer was concise and elegant and never wasted a word. They were poets of bits. “It was like working logic puzzles — big, complicated logic puzzles,” Wilkes says. “I still have a very picky, precise mind, to a fault. I notice pictures that are crooked on the wall.”

    What sort of person possesses that kind of mentality? Back then, it was assumed to be women. They had already played a foundational role in the prehistory of computing: During World War II, women operated some of the first computational machines used for code-breaking at Bletchley Park in Britain.

    A Colossus Mark 2 computer being operated by Wrens. The slanted control panel on the left was used to set the “pin” (or “cam”) patterns of the Lorenz. The “bedstead” paper tape transport is on the right.

    Developer: Tommy Flowers, assisted by Sidney Broadhurst, William Chandler and, for the Mark 2 machines, Allen Coombs
    Manufacturer: Post Office Research Station
    Type: Special-purpose electronic digital programmable computer
    Generation: First-generation computer
    Release date: Mk 1: December 1943; Mk 2: 1 June 1944
    Discontinued: 1960

    The Lorenz SZ machines had 12 wheels, each with a different number of cams (or “pins”):

    Wheel number           1   2   3   4   5   6    7    8   9   10  11  12
    BP wheel name          ψ1  ψ2  ψ3  ψ4  ψ5  μ37  μ61  χ1  χ2  χ3  χ4  χ5
    Number of cams (pins)  43  47  51  53  59  37   61   41  31  29  26  23

    Colossus was a set of computers developed by British codebreakers in the years 1943–1945 to help in the cryptanalysis of the Lorenz cipher. Colossus used thermionic valves (vacuum tubes) to perform Boolean and counting operations. Colossus is thus regarded as the world’s first programmable, electronic, digital computer, although it was programmed by switches and plugs and not by a stored program.

    Colossus was designed by research telephone engineer Tommy Flowers to solve a problem posed by mathematician Max Newman at the Government Code and Cypher School (GC&CS) at Bletchley Park. Alan Turing’s use of probability in cryptanalysis (see Banburismus) contributed to its design. It has sometimes been erroneously stated that Turing designed Colossus to aid the cryptanalysis of the Enigma. Turing’s machine that helped decode Enigma was the electromechanical Bombe, not Colossus.

    In the United States, by 1960, according to government statistics, more than one in four programmers were women. At M.I.T.’s Lincoln Labs in the 1960s, where Wilkes worked, she recalls that most of those the government categorized as “career programmers” were female. It wasn’t high-status work — yet.

    In 1961, Wilkes was assigned to a prominent new project, the creation of the LINC.

    LINC from MIT Lincoln Lab


    Wesley Clark in 1962 at a demonstration of the first Laboratory Instrument Computer, or LINC. Credit MIT Lincoln Laboratory

    As one of the world’s first interactive personal computers, it would be a breakthrough device that could fit in a single office or lab. It would even have its own keyboard and screen, so it could be programmed more quickly, without awkward punch cards or printouts. The designers, who knew they could make the hardware, needed Wilkes to help write the software that would let a user control the computer in real time.

    For two and a half years, she and a team toiled away at flow charts, pondering how the circuitry functioned, how to let people communicate with it. “We worked all these crazy hours; we ate all kinds of terrible food,” she says. There was sexism, yes, especially in the disparity between how men and women were paid and promoted, but Wilkes enjoyed the relative comity that existed among the men and women at Lincoln Labs, the sense of being among intellectual peers. “We were a bunch of nerds,” Wilkes says dryly. “We were a bunch of geeks. We dressed like geeks. I was completely accepted by the men in my group.” When they got an early prototype of the LINC working, it solved a fiendish data-processing problem for a biologist, who was so excited that he danced a happy jig around the machine.

    In late 1964, after Wilkes returned from traveling around the world for a year, she was asked to finish writing the LINC’s operating system. But the lab had been relocated to St. Louis, and she had no desire to move there. Instead, a LINC was shipped to her parents’ house in Baltimore. Looming in the front hall near the foot of the stairs, a tall cabinet of whirring magnetic tapes across from a refrigerator-size box full of circuitry, it was an early glimpse of a sci-fi future: Wilkes was one of the first people on the planet to have a personal computer in her home. (Her father, an Episcopal clergyman, was thrilled. “He bragged about it,” she says. “He would tell anybody who would listen, ‘I bet you don’t have a computer in your living room.’ ”) Before long, LINC users around the world were using her code to program medical analyses and even create a chatbot that interviewed patients about their symptoms.

    But even as Wilkes established herself as a programmer, she still craved a life as a lawyer. “I also really finally got to the point where I said, ‘I don’t think I want to do this for the rest of my life,’ ” she says. Computers were intellectually stimulating but socially isolating. In 1972, she applied and got into Harvard Law School, and after graduating, she spent the next four decades as a lawyer. “I absolutely loved it,” she says.

    Today Wilkes is retired and lives in Cambridge, Mass. White-haired at 81, she still has the precise mannerisms and the ready, beaming smile that can be seen in photos from the ’60s, when she posed, grinning, beside the LINC. She told me that she occasionally gives talks to young students studying computer science. But the industry they’re heading into is, astonishingly, less populated with women — and by many accounts less welcoming to them — than it was in Wilkes’s day. In 1960, when she started working at M.I.T., the proportion of women in computing and mathematical professions (which are grouped together in federal government data) was 27 percent. It reached 35 percent in 1990. But, in the government’s published figures, that was the peak. The numbers fell after that, and by 2013, women were down to 26 percent — below their share in 1960.

    When Wilkes talks to today’s young coders, they are often shocked to learn that women were among the field’s earliest, towering innovators and once a common sight in corporate America. “Their mouths are agape,” Wilkes says. “They have absolutely no idea.”

    Almost 200 years ago, the first person to be what we would now call a coder was, in fact, a woman: Lady Ada Lovelace.

    Ada Lovelace (Augusta Ada Byron), in a rare daguerreotype by Antoine Claudet, 1843 or 1850, taken in his studio probably near Regents Park in London. Source: https://blogs.bodleian.ox.ac.uk/adalovelace/2015/10/14/only-known-photographs-of-ada-lovelace-in-bodleian-display/ Reproduction courtesy of Geoffrey Bond.
    Augusta Ada King, Countess of Lovelace (née Byron; 10 December 1815 – 27 November 1852) was an English mathematician and writer, chiefly known for her work on Charles Babbage’s proposed mechanical general-purpose computer, the Analytical Engine [below]. She was the first to recognise that the machine had applications beyond pure calculation, and published the first algorithm intended to be carried out by such a machine. As a result, she is sometimes regarded as the first to recognise the full potential of a “computing machine” and the first computer programmer.

    The Analytical Engine was a proposed mechanical general-purpose computer designed by the English mathematician and computer pioneer Charles Babbage. It was first described in 1837 as the successor to Babbage’s Difference Engine.

    As a young mathematician in England in 1833, she met Charles Babbage, an inventor who was struggling to design what he called the Analytical Engine, which would be made of metal gears and able to execute if/then commands and store information in memory. Enthralled, Lovelace grasped the enormous potential of a device like this. A computer that could modify its own instructions and memory could be far more than a rote calculator, she realized. To prove it, Lovelace wrote what is often regarded as the first computer program in history, an algorithm with which the Analytical Engine would calculate the Bernoulli sequence of numbers. (She wasn’t shy about her accomplishments: “That brain of mine is something more than merely mortal; as time will show,” she once wrote.) But Babbage never managed to build his computer, and Lovelace, who died of cancer at 36, never saw her code executed.


    When digital computers finally became a practical reality in the 1940s, women were again pioneers in writing software for the machines. At the time, men in the computing industry regarded writing code as a secondary, less interesting task. The real glory lay in making the hardware. Software? “That term hadn’t yet been invented,” says Jennifer S. Light, a professor at M.I.T. who studies the history of science and technology.

    This dynamic was at work in the development of the first programmable digital computer in the United States, the Electronic Numerical Integrator and Computer, or Eniac, during the 1940s.

    Computer operators with an Eniac — the world’s first programmable general-purpose computer. Credit Corbis/Getty Images

    ENIAC programming. Columbia University

    Funded by the military, the thing was a behemoth, weighing more than 30 tons and including 17,468 vacuum tubes. Merely getting it to work was seen as the heroic, manly engineering feat. In contrast, programming it seemed menial, even secretarial. Women had long been employed in the scut work of doing calculations. In the years leading up to the Eniac, many companies bought huge electronic tabulating machines — quite useful for tallying up payroll, say — from companies like IBM; women frequently worked as the punch-card operators for these overgrown calculators. When the time came to hire technicians to write instructions for the Eniac, it made sense, to the men in charge, to pick an all-female team: Kathleen McNulty, Jean Jennings, Betty Snyder, Marlyn Wescoff, Frances Bilas and Ruth Lichterman. The men would figure out what they wanted Eniac to do; the women “programmed” it to execute the instructions.

    “We could diagnose troubles almost down to the individual vacuum tube,” Jennings later told an interviewer for the IEEE Annals of the History of Computing. Jennings, who grew up as the tomboy daughter of low-income parents near a Missouri community of 104 people, studied math at college. “Since we knew both the application and the machine, we learned to diagnose troubles as well as, if not better than, the engineer.”

    The Eniac women were among the first coders to discover that software never works right the first time — and that a programmer’s main work, really, is to find and fix the bugs. Their innovations included some of software’s core concepts. Betty Snyder realized that if you wanted to debug a program that wasn’t running correctly, it would help to have a “break point,” a moment when you could stop a program midway through its run. To this day, break points are a key part of the debugging process.
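
    Modern debuggers still work this way. The minimal Python illustration below, with an invented function and values, uses the built-in breakpoint() to stop a run partway through:

        # A trajectory-style loop that pauses midway, in the spirit of
        # Snyder's break point. Running this drops into the pdb debugger
        # at step 2, where the program's state can be inspected.
        def positions(velocity, steps):
            position = 0.0
            for step in range(steps):
                position += velocity
                if step == 2:
                    breakpoint()  # stop here and inspect position, step
            return position

        print(positions(3.0, 5))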

    In 1946, Eniac’s creators wanted to show off the computer to a group of leaders in science, technology and the military. They asked Jennings and Snyder to write a program that calculated missile trajectories. After weeks of intense effort, they and their team had a working program, except for one glitch: It was supposed to stop when the missile landed, but for some reason it kept running. The night before the demo, Snyder suddenly intuited the problem. She went to work early the next day, flipped a single switch inside the Eniac and eliminated the bug. “Betty could do more logical reasoning while she was asleep than most people can do awake,” Jennings later said. Nonetheless, the women got little credit for their work. At that first official demonstration to show off Eniac, the male project managers didn’t mention, much less introduce, the women.

    After the war, as coding jobs spread from the military into the private sector, women remained in the coding vanguard, doing some of the highest-profile work.

    Rear Admiral Grace M. Hopper, 1984

    Grace Brewster Murray Hopper (née Murray; December 9, 1906 – January 1, 1992) was an American computer scientist and United States Navy rear admiral. One of the first programmers of the Harvard Mark I computer, she was a pioneer of computer programming who invented one of the first compiler related tools. She popularized the idea of machine-independent programming languages, which led to the development of COBOL, an early high-level programming language still in use today.

    The pioneering programmer Grace Hopper is frequently credited with creating the first “compiler,” a program that lets users create programming languages that more closely resemble regular written words: A coder could thus write the English-like code, and the compiler would do the hard work of turning it into ones and zeros for the computer. Hopper also developed the “Flowmatic” language for nontechnical businesspeople. Later, she advised the team that created the Cobol language, which became widely used by corporations. Another programmer from the team, Jean E. Sammet, continued to be influential in the language’s development for decades. Fran Allen was so expert in optimizing Fortran, a popular language for performing scientific calculations, that she became the first female IBM fellow.
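
    To make the idea concrete, here is a toy compiler sketch in Python; the English-like statement echoes Cobol’s style, and the three-instruction target language is invented for illustration:

        # Translate "ADD <x> TO <y> GIVING <z>" into three pseudo-instructions.
        def compile_statement(stmt):
            words = stmt.split()
            assert (words[0], words[2], words[4]) == ("ADD", "TO", "GIVING")
            x, y, z = words[1], words[3], words[5]
            return [f"LOAD {x}", f"ADD {y}", f"STORE {z}"]

        print(compile_statement("ADD PRICE TO TAX GIVING TOTAL"))
        # -> ['LOAD PRICE', 'ADD TAX', 'STORE TOTAL']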

    NERSC Hopper Cray XE6 supercomputer

    When the number of coding jobs exploded in the ’50s and ’60s as companies began relying on software to process payrolls and crunch data, men had no special advantage in being hired. As Wilkes had discovered, employers simply looked for candidates who were logical, good at math and meticulous. And in this respect, gender stereotypes worked in women’s favor: Some executives argued that women’s traditional expertise at painstaking activities like knitting and weaving manifested precisely this mind-set. (The 1968 book Your Career in Computers stated that people who like “cooking from a cookbook” make good programmers.)

    The field rewarded aptitude: Applicants were often given a test (typically one involving pattern recognition), hired if they passed it and trained on the job, a process that made the field especially receptive to neophytes. “Know Nothing About Computers? Then We’ll Teach You (and Pay You While Doing So),” one British ad promised in 1965. In a 1957 recruiting pitch in the United States, IBM’s brochure titled My Fair Ladies specifically encouraged women to apply for coding jobs.

    Such was the hunger for programming talent that a young black woman named Arlene Gwendolyn Lee [no photo available] could become one of the early female programmers in Canada, despite the open discrimination of the time. Lee was half of a biracial couple to whom no one would rent, so she needed money to buy a house. According to her son, who has described his mother’s experience in a blog post, Lee showed up at a firm after seeing its ad for data processing and systems analytics jobs in a Toronto newspaper sometime in the early 1960s. Lee persuaded the employers, who were all white, to let her take the coding aptitude test. When she placed in the 99th percentile, the supervisors grilled her with questions before hiring her. “I had it easy,” she later told her son. “The computer didn’t care that I was a woman or that I was black. Most women had it much harder.”

    Elsie Shutt learned to code during her college summers while working for the military at the Aberdeen Proving Ground, an Army facility in Maryland.

    Elsie Shutt founded one of the first software businesses in the U.S. in 1958

    In 1953, while taking time off from graduate school, she was hired to code for Raytheon, where the programmer work force “was about 50 percent men and 50 percent women,” she told Janet Abbate, a Virginia Tech historian and author of the 2012 book Recoding Gender. “And it really amazed me that these men were programmers, because I thought it was women’s work!”

    When Shutt had a child in 1957, state law required her to leave her job; the ’50s and ’60s may have been welcoming to full-time female coders, but firms were unwilling to offer part-time work, even to superb coders. So Shutt founded Computations Inc., a consultancy that produced code for corporations. She hired stay-at-home mothers as part-time employees; if they didn’t already know how to code, she trained them. They cared for their kids during the day, then coded at night, renting time on local computers. “What it turned into was a feeling of mission,” Shutt told Abbate, “in providing work for women who were talented and did good work and couldn’t get part-time jobs.” Business Week called the Computations work force the “pregnant programmers” in a 1963 article illustrated with a picture of a baby in a bassinet in a home hallway, with the mother in the background, hard at work writing software. (The article’s title: Mixing Math and Motherhood.)

    By 1967, there were so many female programmers that Cosmopolitan magazine published an article about The Computer Girls, accompanied by pictures of beehived women at work on computers that evoked the control deck of the U.S.S. Enterprise. The story noted that women could make $20,000 a year doing this work (or more than $150,000 in today’s money). It was the rare white-collar occupation in which women could thrive. Nearly every other highly trained professional field admitted few women; even women with math degrees had limited options: teaching high school math or doing rote calculations at insurance firms.

    “Women back then would basically go, ‘Well, if I don’t do programming, what else will I do?’ ” Janet Abbate says. “The situation was very grim for women’s opportunities.”

    If we want to pinpoint a moment when women began to be forced out of programming, we can look at one year: 1984. A decade earlier, a study revealed that the numbers of men and women who expressed an interest in coding as a career were equal. Men were more likely to enroll in computer-science programs, but women’s participation rose steadily and rapidly through the late ’70s until, by the 1983-84 academic year, 37.1 percent of all students graduating with degrees in computer and information sciences were women. In only one decade, their participation rate more than doubled.

    But then things went into reverse. From 1984 onward, the percentage dropped; by the time 2010 rolled around, it had been cut in half. Only 17.6 percent of the students graduating from computer-science and information-science programs were women.

    One reason for this vertiginous decline has to do with a change in how and when kids learned to program. The advent of personal computers in the late ’70s and early ’80s remade the pool of students who pursued computer-science degrees. Before then, pretty much every student who showed up at college had never touched a computer or even been in the room with one. Computers were rare and expensive devices, available for the most part only in research labs or corporate settings. Nearly all students were on equal footing, in other words, and new to programming.

    Once the first generation of personal computers, like the Commodore 64 or the TRS-80, found their way into homes, teenagers were able to play around with them, slowly learning the major concepts of programming in their spare time.

    Commodore 64

    Radio Shack Tandy TRS-80

    By the mid-’80s, some college freshmen were showing up for their first class already proficient as programmers. They were remarkably well prepared for and perhaps even a little jaded about what Computer Science 101 might bring. As it turned out, these students were mostly men, as two academics discovered when they looked into the reasons women’s enrollment was so low.

    Keypunch operators at IBM in Stockholm in the 1930s. Credit IBM

    One researcher was Allan Fisher, then the associate dean of the computer-science school at Carnegie Mellon University. The school established an undergraduate program in computer science in 1988, and after a few years of operation, Fisher noticed that the proportion of women in the major was consistently below 10 percent. In 1994, he hired Jane Margolis, a social scientist who is now a senior researcher in the U.C.L.A. School of Education and Information Studies, to figure out why. Over four years, from 1995 to 1999, she and her colleagues interviewed and tracked roughly 100 undergraduates, male and female, in Carnegie Mellon’s computer-science department; she and Fisher later published the findings in their 2002 book “Unlocking the Clubhouse: Women in Computing.”

    What Margolis discovered was that the first-year students arriving at Carnegie Mellon with substantial experience were almost all male. They had received much more exposure to computers than girls had; for example, boys were more than twice as likely to have been given one as a gift by their parents. And if parents bought a computer for the family, they most often put it in a son’s room, not a daughter’s. Sons also tended to have what amounted to an “internship” relationship with fathers, working through Basic-language manuals with them, receiving encouragement from them; the same wasn’t true for daughters. “That was a very important part of our findings,” Margolis says. Nearly every female student in computer science at Carnegie Mellon told Margolis that her father had worked with her brother — “and they had to fight their way through to get some attention.”

    Their mothers were typically less engaged with computers in the home, they told her. Girls, even the nerdy ones, picked up these cues and seemed to dial back their enthusiasm accordingly. These were pretty familiar roles for boys and girls, historically: Boys were cheered on for playing with construction sets and electronics kits, while girls were steered toward dolls and toy kitchens. It wasn’t terribly surprising to Margolis that a new technology would follow the same pattern as it became widely accepted.

    At school, girls got much the same message: Computers were for boys. Geeky boys who formed computer clubs, at least in part to escape the torments of jock culture, often wound up, whether intentionally or not, reproducing the same exclusionary behavior. (These groups snubbed not only girls but also black and Latino boys.) Such male cliques created “a kind of peer support network,” in Fisher’s words.

    This helped explain why Carnegie Mellon’s first-year classes were starkly divided between the sizable number of men who were already confident in basic programming concepts and the women who were frequently complete neophytes. A cultural schism had emerged. The women started doubting their ability. How would they ever catch up?

    What Margolis heard from students — and from faculty members, too — was that there was a sense in the classroom that if you hadn’t already been coding obsessively for years, you didn’t belong. The “real programmer” was the one who “had a computer-screen tan from being in front of the monitor all the time,” as Margolis puts it. “The idea was, you just have to love being with a computer all the time, and if you don’t do it 24/7, you’re not a ‘real’ programmer.” The truth is, many of the men themselves didn’t fit this monomaniacal stereotype. But there was a double standard: While it was O.K. for the men to want to engage in various other pursuits, women who expressed the same wish felt judged for not being “hard core” enough. By the second year, many of these women, besieged by doubts, began dropping out of the program. (The same was true for the few black and Latino students who also arrived on campus without teenage programming experience.)

    A similar pattern took hold at many other campuses. Patricia Ordóñez, a first-year student at Johns Hopkins University in 1985, enrolled in an Introduction to Minicomputers course. She had been a math whiz in high school but had little experience in coding; when she raised her hand in class at college to ask a question, many of the other students who had spent their teenage years programming — and the professor — made her feel singled out. “I remember one day he looked at me and said, ‘You should already know this by now,’ ” she told me. “I thought, I’m never going to succeed.” She switched majors as a result.

    Yet a student’s decision to stick with or quit the subject did not seem to be correlated with coding talent. Many of the women who dropped out were getting perfectly good grades, Margolis learned. Indeed, some who left had been top students. And the women who did persist and made it to the third year of their program had by then generally caught up to the teenage obsessives. The degree’s coursework was, in other words, a leveling force. Learning Basic as a teenage hobby might lead to lots of fun and useful skills, but the pace of learning at college was so much more intense that by the end of the degree, everyone eventually wound up graduating at roughly the same levels of programming mastery.

    An E.R.A./Univac 1103 computer in the 1950s. Credit Hum Images/Alamy

    “It turned out that having prior experience is not a great predictor, even of academic success,” Fisher says. Ordóñez’s later experience illustrates exactly this: After changing majors at Johns Hopkins, she later took night classes in coding and eventually got a Ph.D. in computer science in her 30s; today, she’s a professor at the University of Puerto Rico Río Piedras, specializing in data science.

    By the ’80s, the early pioneering work done by female programmers had mostly been forgotten. In contrast, Hollywood was putting out precisely the opposite image: Computers were a male domain. In hit movies like Revenge of the Nerds, Weird Science, Tron, WarGames and others, the computer nerds were nearly always young white men. Video games, a significant gateway activity that led to an interest in computers, were pitched far more often at boys, as research in 1985 by Sara Kiesler [Psychology of Women Quarterly], a professor at Carnegie Mellon, found. “In the culture, it became something that guys do and are good at,” says Kiesler, who is also a program manager at the National Science Foundation. “There were all kinds of things signaling that if you don’t have the right genes, you’re not welcome.”

    A 1983 study involving M.I.T. students produced equally bleak accounts. Women who raised their hands in class were often ignored by professors and talked over by other students. They would be told they weren’t aggressive enough; if they challenged other students or contradicted them, they heard comments like “You sure are bitchy today — must be your period.” Behavior in some research groups “sometimes approximates that of the locker room,” the report concluded, with men openly rating how “cute” their female students were. (“Gee, I don’t think it’s fair that the only two girls in the group are in the same office,” one said. “We should share.”) Male students mused about women’s mediocrity: “I really don’t think the woman students around here are as good as the men,” one said.

    By then, as programming enjoyed its first burst of cultural attention, so many students were racing to enroll in computer science that universities ran into a supply problem: They didn’t have enough professors to teach everyone. Some added hurdles: courses that students had to pass before they could be accepted into the computer-science major. Punishing workloads and classes that covered the material at a lightning pace weeded out those who didn’t get it immediately. All this fostered an environment in which the students most likely to get through were those who had already been exposed to coding — young men, mostly. “Every time the field has instituted these filters on the front end, that’s had the effect of reducing the participation of women in particular,” says Eric S. Roberts, a longtime professor of computer science, now at Reed College, who first studied this problem and called it the “capacity crisis.”

    When computer-science programs began to expand again in the mid-’90s, coding’s culture was set. Most of the incoming students were men. The interest among women never recovered to the levels reached in the late ’70s and early ’80s. And the women who did show up were often isolated. In a room of 20 students, perhaps five or even fewer might be women.

    In 1991, Ellen Spertus, now a computer scientist at Mills College, published a report on women’s experiences in programming classes. She cataloged a landscape populated by men who snickered about the presumed inferiority of women and by professors who told female students that they were “far too pretty” to be studying electrical engineering; when some men at Carnegie Mellon were asked to stop using pictures of naked women as desktop wallpaper on their computers, they angrily complained that it was censorship of the sort practiced by “the Nazis or the Ayatollah Khomeini.”

    As programming was shutting its doors to women in academia, a similar transformation was taking place in corporate America. The emergence of what would be called “culture fit” was changing the who, and the why, of the hiring process. Managers began picking coders less on the basis of aptitude and more on how well they fit a personality type: the acerbic, aloof male nerd.

    The shift actually began far earlier, back in the late ’60s, when managers recognized that male coders shared a growing tendency to be antisocial isolates, lording their arcane technical expertise over that of their bosses. Programmers were “often egocentric, slightly neurotic,” as Richard Brandon, a well-known computer-industry analyst, put it in an address at a 1968 conference, adding that “the incidence of beards, sandals and other symptoms of rugged individualism or nonconformity are notably greater among this demographic.”

    In addition to testing for logical thinking, as in Mary Allen Wilkes’s day, companies began using personality tests to select specifically for these sorts of caustic loner qualities. “These became very powerful narratives,” says Nathan Ensmenger, a professor of informatics at Indiana University, who has studied [Gender and Computing] this transition. The hunt for that personality type cut women out. Managers might shrug and accept a man who was unkempt, unshaven and surly, but they wouldn’t tolerate a woman who behaved the same way. Coding increasingly required late nights, but managers claimed that it was too unsafe to have women working into the wee hours, so they forbade them to stay late with the men.

    At the same time, the old hierarchy of hardware and software became inverted. Software was becoming a critical, and lucrative, sector of corporate America. Employers increasingly hired programmers whom they could envision one day ascending to key managerial roles in programming. And few companies were willing to put a woman in charge of men. “They wanted people who were more aligned with management,” says Marie Hicks, a historian at the Illinois Institute of Technology. “One of the big takeaways is that technical skill does not equate to success.”

    By the 1990s and 2000s, the pursuit of “culture fit” was in full force, particularly at start-ups, which involve a relatively small number of people typically confined to tight quarters for long hours. Founders looked to hire people who were socially and culturally similar to them.

    “It’s all this loosey-goosey ‘culture’ thing,” says Sue Gardner, former head of the Wikimedia Foundation, the nonprofit that hosts Wikipedia and other sites. After her stint there, Gardner decided to study why so few women were employed as coders. In 2014, she surveyed more than 1,400 women in the field and conducted sit-down interviews with scores more. It became clear to her that the occupation’s takeover by men in the ’90s had turned into a self-perpetuating cycle. Because almost everyone in charge was a white or Asian man, that was the model for whom to hire; managers recognized talent only when it walked and talked as they did. For example, many companies have relied on whiteboard challenges when hiring a coder — a prospective employee is asked to write code, often a sorting algorithm, on a whiteboard while the employers watch. This sort of thing bears almost no resemblance to the work coders actually do in their jobs. But whiteboard questions resemble classroom work at Ivy League institutions. It feels familiar to the men doing the hiring, many of whom are only a few years out of college. “What I came to realize,” Gardner says, “is that it’s not that women are excluded. It’s that practically everyone is excluded if you’re not a young white or Asian man who’s single.”
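
    For reference, the kind of answer a whiteboard sorting question expects looks something like the following: a from-scratch merge sort, written here in Python purely as an illustration.

        # Merge sort: split, sort each half recursively, then merge.
        def merge_sort(items):
            if len(items) <= 1:
                return items
            mid = len(items) // 2
            left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
            merged = []
            while left and right:
                merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
            return merged + left + right

        print(merge_sort([5, 2, 9, 1, 5]))  # -> [1, 2, 5, 5, 9]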

    One coder, Stephanie Hurlburt, was a stereotypical math nerd who had deep experience working on graphics software. “I love C++, the low-level stuff,” she told me, referring to a complex language known for allowing programmers to write very fast-running code, useful in graphics. Hurlburt worked for a series of firms this decade, including Unity (which makes popular software for designing games), and then for Facebook on its Oculus Rift VR headset, grinding away for long hours in the run-up to the release of its first demo. Hurlburt became accustomed to shrugging off negative attention and crude sexism. She heard, including from many authority figures she admired, that women weren’t wired for math. While working as a coder, if she expressed ignorance of any concept, no matter how trivial, male colleagues would disparage her. “I thought you were at a higher math level,” one sniffed.

    In 2016, Hurlburt and a friend, Rich Geldreich, founded a start-up called Binomial, where they created software that helps compress the size of “textures” in graphics-heavy software. Being self-employed, she figured, would mean not having to deal with belittling bosses. But when she and Geldreich went to sell their product, some customers assumed that she was just the marketing person. “I don’t know how you got this product off the ground when you only have one programmer!” she recalls one client telling Geldreich.

    In 2014, an informal analysis by a tech entrepreneur and former academic named Kieran Snyder of 248 corporate performance reviews for tech engineers determined that women were considerably more likely than men to receive reviews with negative feedback; men were far more likely to get reviews that had only constructive feedback, with no negative material. In a 2016 experiment conducted by the tech recruiting firm Speak With a Geek, 5,000 résumés with identical information were submitted to firms. When identifying details were removed from the résumés, 54 percent of the women received interview offers; when gendered names and other biographical information were given, only 5 percent of them did.

    Lurking beneath some of this sexist atmosphere is the phantasm of sociobiology. As this line of thinking goes, women are less suited to coding than men because biology better endows men with the qualities necessary to excel at programming. Many women who work in software face this line of reasoning all the time. Cate Huston, a software engineer at Google from 2011 to 2014, heard it from colleagues there when they pondered why such a low percentage of the company’s programmers were women. Peers would argue that Google hired only the best — that if women weren’t being hired, it was because they didn’t have enough innate logic or grit, she recalls.

    In the summer of 2017, a Google employee named James Damore suggested in an internal email that several qualities more commonly found in women — including higher rates of anxiety — explained why they weren’t thriving in a competitive world of coding; he cited the cognitive neuroscientist Simon Baron-Cohen, who theorizes that the male brain is more likely to be “systemizing,” compared with women’s “empathizing” brains. Google fired Damore, saying it could not employ someone who would argue that his female colleagues were inherently unsuited to the job. But on Google’s internal boards, other male employees backed up Damore, agreeing with his analysis. The assumption that the makeup of the coding work force reflects a pure meritocracy runs deep among many Silicon Valley men; for them, sociobiology offers a way to explain things, particularly for the type who prefers to believe that sexism in the workplace is not a big deal, or even doubts it really exists.

    But if biology were the reason so few women are in coding, it would be impossible to explain why women were so prominent in the early years of American programming, when the work could be, if anything, far harder than today’s programming. It was an uncharted new field, in which you had to do math in binary and hexadecimal formats, and there were no helpful internet forums, no Google to query, for assistance with your bug. It was just your brain in a jar, solving hellish problems.

    If biology limited women’s ability to code, then the ratio of women to men in programming ought to be similar in other countries. It isn’t. In India, roughly 40 percent of the students studying computer science and related fields are women. This is despite even greater barriers to becoming a female coder there; India has such rigid gender roles that female college students often have an 8 p.m. curfew, meaning they can’t work late in the computer lab, as the social scientist Roli Varma learned when she studied them in 2015. The Indian women had one big cultural advantage over their American peers, though: They were far more likely to be encouraged by their parents to go into the field, Varma says. What’s more, the women regarded coding as a safer job because it kept them indoors, lessening their exposure to street-level sexual harassment. It was, in other words, considered normal in India that women would code. The picture has been similar in Malaysia, where in 2001 — precisely when the share of American women in computer science had slid into a trough — women represented 52 percent of the undergraduate computer-science majors and 39 percent of the Ph.D. candidates at the University of Malaya in Kuala Lumpur.

    Today, when midcareer women decide that Silicon Valley’s culture is unlikely to change, many simply leave the industry. When Sue Gardner surveyed those 1,400 women in 2014, they told her the same story: In the early years, as junior coders, they looked past the ambient sexism they encountered. They loved programming and were ambitious and excited by their jobs. But over time, Gardner says, “they get ground down.” As they rose in the ranks, they found few, if any, mentors. Nearly two-thirds either experienced or witnessed harassment, she read in “The Athena Factor” (a 2008 study of women in tech); in Gardner’s survey, one-third reported that their managers were more friendly toward and gave more support to their male co-workers. It’s often assumed that having children is the moment when women are sidelined in tech careers, as in many others, but Gardner discovered that wasn’t often the breaking point for these women. They grew discouraged seeing men with no better or even lesser qualifications get superior opportunities and treatment.

    “What surprised me was that they felt, ‘I did all that work!’ They were angry,” Gardner says. “It wasn’t like they needed a helping hand or needed a little extra coaching. They were mad. They were not leaving because they couldn’t hack it. They were leaving because they were skilled professionals who had skills that were broadly in demand in the marketplace, and they had other options. So they’re like, ‘[expletive] it — I’ll go somewhere where I’m seen as valuable.’ ”

    The result is an industry that is drastically more male than it was decades ago, and far more so than the workplace at large. In 2018, according to data from the Bureau of Labor Statistics, about 26 percent of the workers in “computer and mathematical occupations” were women. The percentages for people of color are similarly low: Black employees were 8.4 percent, Latinos 7.5 percent. (The Census Bureau’s American Community Survey put black coders at only 4.7 percent in 2016.) In the more rarefied world of the top Silicon Valley tech firms, the numbers are even more austere: A 2017 analysis by Recode, a news site that covers the technology industry, revealed that 20 percent of Google’s technical employees were women, while only 1 percent were black and 3 percent were Hispanic. Facebook was nearly identical; the numbers at Twitter were 15 percent, 2 percent and 4 percent, respectively.

    The reversal has been profound. In the early days of coding, women flocked to programming because it offered more opportunity and reward for merit than fields like law did. Now it is software that has closed its doors.

    In the late 1990s, Allan Fisher decided that Carnegie Mellon would try to address the male-female imbalance in its computer-science program. Prompted by Jane Margolis’s findings, Fisher and his colleagues instituted several changes. One was the creation of classes that grouped students by experience: The kids who had been coding since youth would start on one track; the newcomers to coding would have a slightly different curriculum, allowing them more time to catch up. Carnegie Mellon also offered extra tutoring to all students, which was particularly useful for the novice coders. If Fisher could get them to stay through the first and second years, he knew, they would catch up to their peers.

    Components from four of the earliest electronic computers, held by Patsy Boyce Simmers, Gail Taylor, Millie Beck and Norma Stec, employees at the United States Army’s Ballistics Research Laboratory. Credit Science Source

    They also modified the courses in order to show how code has impacts in the real world, so a new student’s view of programming wouldn’t just be an endless vista of algorithms disconnected from any practical use. Fisher wanted students to glimpse, earlier on, what it was like to make software that works its way into people’s lives. Back in the ’90s, before social media and even before the internet had gone mainstream, the influence that code could have on daily life wasn’t so easy to see.

    Faculty members, too, adopted a different perspective. For years some had tacitly endorsed the idea that the students who came in already knowing code were born to it. Carnegie Mellon “rewarded the obsessive hacker,” Fisher told me. But the faculty now knew that their assumptions weren’t true; they had been confusing previous experience with raw aptitude. They still wanted to encourage those obsessive teenage coders, but they had come to understand that the neophytes were just as likely to bloom rapidly into remarkable talents and deserved as much support. “We had to broaden how faculty sees what a successful student looks like,” he says. The admissions process was adjusted, too; it no longer gave as much preference to students who had been teenage coders.

    No single policy changed things. “There’s really a virtuous cycle,” Fisher says. “If you make the program accommodate people with less experience, then people with less experience come in.” Faculty members became more used to seeing how green coders evolve into accomplished ones, and they learned how to teach that type.

    Carnegie Mellon’s efforts were remarkably successful. Only a few years after these changes, the percentage of women entering its computer-science program boomed, rising to 42 percent from 7 percent; graduation rates for women rose to nearly match those of the men. The school vaulted over the national average. Other schools concerned about the low number of female students began using approaches similar to Fisher’s. In 2006, Harvey Mudd College tinkered with its Introduction to Computer Science course, creating a track specifically for novices, and rebranded it as Creative Problem Solving in Science and Engineering Using Computational Approaches — which, the institution’s president, Maria Klawe, told me, “is actually a better description of what you’re actually doing when you’re coding.” By 2018, 54 percent of Harvey Mudd’s graduates who majored in computer science were women.

    A broader cultural shift has accompanied the schools’ efforts. In the last few years, women’s interest in coding has begun rapidly rising throughout the United States. In 2012, the percentage of female undergraduates who plan to major in computer science began to rise at rates not seen for 35 years [Computing Research News], since the decline in the mid-’80s, according to research by Linda Sax, an education professor at U.C.L.A. There has also been a boomlet of groups and organizations training and encouraging underrepresented cohorts to enter the field, like Black Girls Code and Code Newbie. Coding has come to be seen, in purely economic terms, as a bastion of well-paying and engaging work.

    In an age when Instagram and Snapchat and iPhones are part of the warp and weft of life’s daily fabric, potential coders worry less that the job will be isolated, antisocial and distant from reality. “Women who see themselves as creative or artistic are more likely to pursue computer science today than in the past,” says Sax, who has pored over decades of demographic data about the students in STEM fields. They’re still less likely to go into coding than other fields, but programming is increasingly on their horizon. This shift is abetted by the fact that it’s much easier to learn programming without getting a full degree, through free online coding schools, relatively cheaper “boot camps” or even meetup groups for newcomers — opportunities that have emerged only in the last decade.

    Changing the culture at schools is one thing. Most female veterans of code I’ve spoken to say that what is harder is shifting the culture of the industry at large, particularly the reflexive sexism and racism still deeply ingrained in Silicon Valley. Some, like Sue Gardner, sometimes wonder if it’s even ethical for her to encourage young women to go into tech. She fears they’ll pour out of computer-science programs in increasing numbers, arrive at their first coding job excited and thrive early on, but then gradually get beaten down by the industry. “The truth is, we can attract more and different people into the field, but they’re just going to hit that wall in midcareer, unless we change how things happen higher up,” she says.

    On a spring weekend in 2017, more than 700 coders and designers were given 24 hours to dream up and create a new product at a hackathon in New York hosted by TechCrunch, a news site devoted to technology and Silicon Valley. At lunchtime on Sunday, the teams presented their creations to a panel of industry judges, in a blizzard of frantic elevator pitches. There was Instagrammie, a robot system that would automatically recognize the mood of an elderly relative or a person with limited mobility; there was Waste Not, an app to reduce food waste. Most of the contestants were coders who worked at local high-tech firms or computer-science students at nearby universities.

    Despite women’s historical role in the vanguard of computer programming, some female veterans of code wonder if it’s even ethical to encourage young women to go into tech because of the reflexive sexism in the current culture of Silicon Valley. Credit Apic/Getty Images

    The winning team, though, was a trio of high school girls from New Jersey: Sowmya Patapati, Akshaya Dinesh and Amulya Balakrishnan. In only 24 hours, they created reVIVE, a virtual-reality app that tests children for signs of A.D.H.D. After the students were handed their winnings onstage — a trophy-size check for $5,000 — they flopped into chairs in a nearby room to recuperate. They had been coding almost nonstop since noon the day before and were bleary with exhaustion.

    “Lots of caffeine,” Balakrishnan, 17, said, laughing. She wore a blue T-shirt that read WHO HACK THE WORLD? GIRLS. The girls told me that they had impressed even themselves by how much they accomplished in 24 hours. “Our app really does streamline the process of detecting A.D.H.D.,” said Dinesh, who was also 17. “It usually takes six to nine months to diagnose, and thousands of dollars! We could do it digitally in a much faster way!”

    They all became interested in coding in high school, each of them with strong encouragement from immigrant parents. Balakrishnan’s parents worked in software and medicine; Dinesh’s parents came to the United States from India in 2000 and worked in information technology. Patapati immigrated from India as an infant with her young mother, who never went to college, and her father, an information-tech worker who was the first in his rural family to go to college.

    The young hackers got used to being the lone girl nerds at their schools, as Dinesh told me.

    “I tried so hard to get other girls interested in computer science, and it was like, the interest levels were just so low,” she says. “When I walked into my first hackathon, it was the most intimidating thing ever. I looked at a room of 80 kids: Five were girls, and I was probably the youngest person there.” But she kept at it, competing in 25 more hackathons, and her confidence grew. To break the isolation and meet more girls in coding, she attended events by organizations like #BuiltByGirls, which is where, a few days previously, she had met Patapati and Balakrishnan and where they decided to team up. To attend TechCrunch, Patapati, who was 16, and Balakrishnan skipped a junior prom and a friend’s birthday party. “Who needs a party when you can go to a hackathon?” Patapati said.

    Winning TechCrunch as a group of young women of color brought extra attention, not all of it positive. “I’ve gotten a lot of comments like: ‘Oh, you won the hackathon because you’re a girl! You’re a diversity pick,’ ” Balakrishnan said. After the prize was announced online, she recalled later, “there were quite a few engineers who commented, ‘Oh, it was a girl pick; obviously that’s why they won.’ ”

    Nearly two years later, Balakrishnan was taking a gap year to work on a heart-monitoring product she had invented, and she was in the running for $100,000 to develop it. She was applying to college to study computer science and, in her spare time, competing in a beauty pageant, inspired by Miss USA 2017, Kara McCullough, who was a nuclear scientist. “I realized that I could use pageantry as a platform to show more girls that they could embrace their femininity and be involved in a very technical, male-dominated field,” she says. Dinesh, in her final year of high school, had started an all-female hackathon that now takes place annually in New York. (“The vibe was definitely very different,” she says, more focused on training newcomers.)

    Patapati and Dinesh enrolled at Stanford last fall to study computer science; both are deeply interested in A.I. They’ve noticed the subtle tensions for women in the coding classes. Patapati, who founded a Women in A.I. group with an Apple tech lead, has watched as male colleagues ignore her raised hand in group discussions or repeat something she just said as if it were their idea. “I think sometimes it’s just a bias that people don’t even recognize that they have,” she says. “That’s been really upsetting.”

    Dinesh says “there’s absolutely a difference in confidence levels” between the male and female newcomers. The Stanford curriculum is so intense that even the relative veterans like her are scrambling: When we spoke recently, she had just spent “three all-nighters in a row” on a single project, for which students had to engineer a “print” command from scratch. At 18, she has few illusions about the road ahead. When she went to a blockchain conference, it was a sea of “middle-aged white and Asian men,” she says. “I’m never going to one again,” she adds with a laugh.

    “My dream is to work on autonomous driving at Tesla or Waymo or some company like that. Or if I see that there’s something missing, maybe I’ll start my own company.” She has begun moving in that direction already, having met one venture capitalist via #BuiltByGirls. “So now I know I can start reaching out to her, and I can start reaching out to other people that she might know,” she says.

    Will she look around, 20 years from now, to see that software has returned to its roots, with women everywhere? “I’m not really sure what will happen,” she admits. “But I do think it is absolutely on the upward climb.”

    Correction: Feb. 14, 2019
    An earlier version of this article misidentified the institution Ellen Spertus was affiliated with when she published a 1991 report on women’s experiences in programming classes. Spertus was at M.I.T. when she published the report, not Mills College, where she is currently a professor.

    Correction: Feb. 14, 2019
    An earlier version of this article misstated Akshaya Dinesh’s current age. She is 18, not 19.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 1:15 pm on January 25, 2019 Permalink | Reply
    Tags: NYT, Wish list of particle colliders

    From The New York Times: “Opinion: The Uncertain Future of Particle Physics” 

    New York Times

    From The New York Times

    Jan. 23, 2019
    Sabine Hossenfelder

    Ten years in, the Large Hadron Collider has failed to deliver the exciting discoveries that scientists promised.

    LHC

    CERN map


    CERN LHC Tunnel

    CERN LHC particles

    THE FOUR MAJOR PROJECT COLLABORATIONS

    ATLAS
    CERN ATLAS New

    ALICE

    CERN/ALICE Detector


    CMS
    CERN CMS New

    LHCb
    CERN LHCb New II

    The Large Hadron Collider is the world’s largest particle accelerator. It’s a 16-mile-long underground ring, located at CERN in Geneva, in which protons collide at almost the speed of light. With a $5 billion price tag and a $1 billion annual operation cost, the L.H.C. is the most expensive instrument ever built — and that’s even though it reuses the tunnel of an earlier collider.

    CERN Large Electron Positron Collider

    The L.H.C. has collected data since September 2008. Last month, the second experimental run was completed, and the collider will be shut down for the next two years for scheduled upgrades. With the L.H.C. on hiatus, particle physicists are already making plans to build an even larger collider. Last week, CERN unveiled plans for an accelerator that is larger and far more powerful than the L.H.C. — and would cost over $10 billion.

    CERN FCC Future Circular Collider map

    I used to be a particle physicist. For my Ph.D. thesis, I did L.H.C. predictions, and while I have stopped working in the field, I still believe that slamming particles into one another is the most promising route to understanding what matter is made of and how it holds together. But $10 billion is a hefty price tag. And I’m not sure it’s worth it.

    In 2012, experiments at the L.H.C. confirmed the discovery of the Higgs boson — a prediction that dates back to the 1960s — and it remains the only discovery made at the L.H.C.

    Peter Higgs

    CERN CMS Higgs Event

    CERN ATLAS Higgs Event

    Particle physicists are quick to emphasize that they have learned other things: For example, they now have better knowledge about the structure of the proton, and they’ve seen new (albeit unstable) composite particles. But let’s be honest: It’s disappointing.

    Before the L.H.C. started operation, particle physicists had more exciting predictions than that. They thought that other new particles would also appear near the energy at which the Higgs boson could be produced. They also thought that the L.H.C. would see evidence for new dimensions of space. They further hoped that this mammoth collider would deliver clues about the nature of dark matter (which astrophysicists think constitutes 85 percent of the matter in the universe) or about a unified force.

    The stories about new particles, dark matter and additional dimensions were repeated in countless media outlets from before the launch of the L.H.C. until a few years ago. What happened to those predictions? The simple answer is this: Those predictions were wrong — that much is now clear.

    The trouble is, a “prediction” in particle physics is today little more than guesswork. (In case you were wondering, yes, that’s exactly why I left the field.) In the past 30 years, particle physicists have produced thousands of theories whose mathematics they can design to “predict” pretty much anything. For example, in 2015 when a statistical fluctuation in the L.H.C. data looked like it might be a new particle, physicists produced more than 500 papers in eight months to explain what later turned out to be merely noise. The same has happened many other times for similar fluctuations, demonstrating how worthless those predictions are.
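
    To see how easily pure noise can masquerade as a signal, consider a toy bump hunt. The sketch below (Python; the bin count and background level are arbitrary assumptions for illustration, not figures from this article) scans many mass bins of background-only data and reports the largest excess that appears by chance alone.

        import numpy as np

        # Toy bump hunt: scan mass bins containing only Poisson background
        # (no new particle) and see how big an "excess" chance produces.
        rng = np.random.default_rng(7)
        expected_background = 100.0   # assumed expected events per mass bin
        n_bins = 1_000                # assumed number of mass bins scanned

        observed = rng.poisson(expected_background, size=n_bins)
        significances = (observed - expected_background) / np.sqrt(expected_background)

        print(f"Largest excess: {significances.max():.1f} sigma")
        print(f"Bins above 3 sigma by chance alone: {(significances > 3).sum()}")

    Scan enough bins and a roughly three-sigma “bump” is all but guaranteed somewhere; fluctuations of that kind are what those 500 papers were chasing.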

    To date, particle physicists have no reliable prediction that there should be anything new to find until about 15 orders of magnitude above the currently accessible energies. And the only reliable prediction they had for the L.H.C. was that of the Higgs boson. Unfortunately, particle physicists have not been very forthcoming with this information. Last year, Nigel Lockyer, the director of Fermilab, told the BBC, “From a simple calculation of the Higgs’ mass, there has to be new science.” This “simple calculation” is what predicted that the L.H.C. should already have seen new science.

    I recently came across a promotional video for the Future Circular Collider that physicists have proposed to build at CERN. This video, which is hosted on the CERN website, advertises the planned machine as a test for dark matter and as a probe for the origin of the universe. It is extremely misleading: Yes, it is possible that a new collider finds a particle that makes up dark matter, but there is no particular reason to think it will. And such a machine will not tell us anything about the origin of the universe. Paola Catapano, head of audiovisual productions at CERN, informed me that this video “is obviously addressed to politicians and not fellow physicists and uses the same arguments as those used to promote the L.H.C. in the ’90s.”

    But big science experiments are investments in our future. Decisions about what to fund should be based on facts, not on shiny advertising. For this, we need to know when a prediction is just a guess. And if particle physicists have only guesses, maybe we should wait until they have better reasons for why a larger collider might find something new.

    It is correct that some technological developments, like strong magnets, benefit from these particle colliders and that particle physics positively contributes to scientific education in general. These are worthy investments, but if that’s what you want to spend money on, you don’t also need to dig a tunnel.

    And there are other avenues to pursue. For example, the astrophysical observations pointing toward dark matter should be explored further; better understanding those observations would help us make more reliable predictions about whether a larger collider can produce the dark matter particle — if it even is a particle.

    There are also medium-scale experiments that tend to fall off the table because giant projects eat up money. One important medium-scale subject is the interface between the quantum realm and gravity, which is now accessible to experimental testing. Another place where discoveries could be waiting is in the foundations of quantum mechanics. These could have major technological impacts.

    Now that the L.H.C. is being upgraded and particle physics experiments at the detector are taking a break, it’s time for particle physicists to step back and reflect on the state of the field. It’s time for them to ask why none of the exciting predictions they promised have resulted in discoveries. Money will not solve this problem. And neither will a larger particle collider.

    See the full article here.

    See also From Science News: “Physicists aim to outdo the LHC with this wish list of particle colliders”


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 9:48 am on December 31, 2018 Permalink | Reply
    Tags: New Horizons, NYT

    From The New York Times: “NASA’s New Horizons Will Visit Ultima Thule on New Year’s Day” 

    New York Times

    From The New York Times

    Dec. 31, 2018
    Kenneth Chang

    The probe that visited Pluto will study a mysterious icy world just after midnight. Ultima Thule will be the most distant object ever visited by a spacecraft.

    We should get a clearer look at the Kuiper Belt object Ultima Thule when the New Horizons spacecraft, which took this composite image between August and mid-December, flies by on Jan. 1. Credit: NASA/Johns Hopkins Applied Physics Laboratory/Southwest Research Institute

    NASA’s New Horizons spacecraft, which flew past Pluto in 2015, will zip past another icy world nicknamed Ultima Thule on New Year’s Day, gathering information on what is believed to be a pristine fragment from the earliest days of the solar system.

    NASA New Horizons spacecraft

    It will be the most distant object ever visited by a spacecraft.

    At 12:33 a.m. Eastern time, New Horizons will pass within about 2,200 miles of Ultima Thule, speeding at 31,500 m.p.h.

    How do I watch the flyby?

    Though it is a NASA spacecraft, the New Horizons mission is operated by the Johns Hopkins Applied Physics Laboratory in Maryland. Coverage of the flyby will be broadcast on the lab’s website and YouTube channel as well as NASA TV. On Twitter, updates will appear on @NewHorizons2015, the account maintained by S. Alan Stern, the principal investigator for the mission, and on NASA’s @NASANewHorizons account.

    While the scientists will celebrate the moment of flyby as if it were New Year’s, they will have no idea how the mission is actually going at that point. The spacecraft, busy making its science observations, will not turn to send a message back to Earth until a few hours later. Then it will take six hours for that radio signal, traveling at the speed of light, to reach Earth.
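
    That six-hour delay is simple geometry. A minimal sketch in Python, assuming the roughly four-billion-mile distance to Ultima Thule noted below:

        # One-way light-travel time from Ultima Thule to Earth,
        # assuming a distance of about 4 billion miles.
        SPEED_OF_LIGHT_MI_S = 186_282   # speed of light in miles per second
        distance_miles = 4e9            # approximate distance at flyby

        delay_s = distance_miles / SPEED_OF_LIGHT_MI_S
        print(f"One-way light time: {delay_s / 3600:.1f} hours")   # ~6.0 hours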

    Tell me about this small frozen world

    Based on suggestions from the public, the New Horizons team chose a nickname for the world: Ultima Thule, which means “distant places beyond the known world.” Officially, it is 2014 MU69, a catalog designation assigned by the International Astronomical Union’s Minor Planet Center. The “2014” refers to the year it was discovered, the result of a careful scan of the night sky by the Hubble Space Telescope for targets that New Horizons might be able to fly by after its Pluto encounter.

    No telescope on Earth has been able to clearly spot MU69. Even sharp-eyed Hubble can make out only a dot of light. Scientists estimate that it is 12 to 22 miles wide, and that it is dark, reflecting about 10 percent of the light that hits it.

    Four billion miles from the sun, MU69 is a billion miles farther out than Pluto, part of the ring of icy worlds beyond Neptune known as the Kuiper belt. Its orbit, nearly circular, suggests that it has been undisturbed since the birth of the solar system 4.5 billion years ago.

    Why do planetary scientists care about this small thing 4 billion miles from the sun?

    Every time a spacecraft visits an asteroid or a comet, planetary scientists talk about how it is a precious time capsule from the solar system’s baby days when the planets were forming. That is true, but especially true for Ultima Thule.

    Asteroids around the solar system have collided with each other and broken apart. Comets partially vaporize each time they pass close to the sun. But Ultima Thule may have instead been in a deep freeze the whole time, perhaps essentially pristine since it formed 4.5 billion years ago.

    Will there be pictures of Ultima Thule?

    New Horizons has been taking pictures for months, but for most of that time Ultima Thule has been little more than a dot in any of these images.

    At a news conference on Tuesday morning after the flyby, the scientists expect to release a picture taken before the flyby. Ultima Thule is expected to be a mere six pixels wide in that picture — enough to get a rough idea of its shape but not much more.

    The first set of images captured by New Horizons during the flyby should be back on Earth by Tuesday evening, and those are to be shown at news conferences describing the science results on Wednesday and Thursday.

    But when the pictures come, they could be striking — in case you forgot what kind of pictures New Horizons took when it flew past Pluto, here are some highlights of its findings.

    Isn’t NASA closed?

    Yes, NASA is one of the agencies affected by the partial federal government shutdown, and most NASA employees are currently furloughed. However, missions in space, including New Horizons, are considered essential activities. (It would be a shame if NASA had to throw away spacecraft costing hundreds of millions of dollars.)

    NASA will not be issuing news releases, but the Johns Hopkins Applied Physics Laboratory public affairs staff will get the news out, and on Friday, NASA Administrator Jim Bridenstine indicated that the agency would continue providing information on New Horizons as well as Osiris-Rex, a mission that is exploring a near-Earth asteroid, Bennu.

    NASA OSIRIS-REx Spacecraft

    What happens after the flyby?

    Because New Horizons is so far away, its radio signal is weak, and the data will trickle back over the next 20 months. At the same time, it will make observations of other objects in the Kuiper belt to compare with Ultima Thule.

    The spacecraft has enough propellant left to possibly head to a third target, but that depends on whether there is anything close enough along its path. Astronomers, busy with Ultima Thule, have yet to start that new search.

    Beyond that, New Horizons will continue heading out of the solar system. Powered by a plutonium power source, it should take data and communicate with Earth for perhaps another 20 years. However, it is not moving quite as fast as the Voyager 1 and Voyager 2 spacecraft, which have both now entered interstellar space, so it is unclear whether New Horizons will make a similar crossing before its power runs out.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 11:06 am on December 25, 2018 Permalink | Reply
    Tags: NYT

    From The New York Times: “It’s Intermission for the Large Hadron Collider” 

    New York Times

    From The New York Times

    This is a special augmented-reality production of The New York Times. Please view the original full article to take advantage of the 360-degree images inside the LHC.

    DEC. 21, 2018
    Dennis Overbye

    The largest machine ever built is shutting down for two years of upgrades. Take an immersive tour of the collider and study the remnants of a Higgs particle in augmented reality.


    CERN Control Center

    MEYRIN, Switzerland — There is silence on the subatomic firing range.

    A quarter-century ago, the physicists of CERN, the European Center for Nuclear Research, bet their careers and their political capital on the biggest and most expensive science experiment ever built, the Large Hadron Collider.

    LHC

    CERN map


    CERN LHC Tunnel

    CERN LHC particles

    THE FOUR MAJOR PROJECT COLLABORATIONS

    ATLAS
    CERN ATLAS New

    ALICE

    CERN/ALICE Detector

    CMS
    CERN CMS New

    LHCb
    CERN LHCb New II

    The collider is a kind of microscope that works by flinging subatomic particles around a 17-mile electromagnetic racetrack beneath the French-Swiss countryside, smashing them together 600 million times a second and sifting through the debris for new particles and forces of nature. The instrument is also a time machine, providing a glimpse of the physics that prevailed in the early moments of the universe and laid the foundation for the cosmos as we see it today.

    The reward came in 2012 with the discovery of the Higgs boson, a long-sought particle that helps explain why there is mass, diversity and life in the cosmos.

    CERN CMS Higgs Event

    CERN ATLAS Higgs Event

    The discovery was celebrated with champagne and a Nobel prize.

    The collider will continue smashing particles and expectations for another 20 years. But first, an intermission. On Dec. 3, the particle beams stopped humming. The giant magnets that guide the whizzing protons sighed and released their grip. The underground detectors that ring the tunnel stood down from their watch.

    Over the next two years, during the first of what will be a series of shutdowns, engineers will upgrade the collider to make its beams more intense and its instruments more sensitive and discerning. And theoretical physicists will pause to make sense of the tantalizing, bewildering mysteries that the Large Hadron Collider has generated so far.

    When protons collide

    The collider gets its mojo from Einstein’s dictum that mass and energy are the same. The more energy that the collider can produce, the more massive are the particles created by the collisions. With every increase in the energy of their collider, CERN physicists are able to edge farther and farther back in time, closer to the physics of the Big Bang, when the universe was much hotter than today.

    Inside CERN’s subterranean ring, some 10,000 superconducting electromagnets, powered by a small city’s worth of electricity, guide two beams of protons in opposite directions around the tunnel at 99.99999 percent of the speed of light, or an energy of 7 trillion electron volts. Those protons make the 17-mile circuit 11,000 times a second. (In physics, mass and energy are both expressed in terms of units called electron volts. A single proton, the building block of ordinary atoms, weighs about a billion electron volts.)
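
    Those figures hang together. A minimal sketch, using the published ring circumference of 26,659 meters (the article’s 17 miles) and the proton rest energy, recovers the speed and lap rate from the beam energy:

        import math

        PROTON_REST_ENERGY_EV = 938.272e6   # proton rest energy, ~1 billion eV
        BEAM_ENERGY_EV = 7e12               # 7 trillion electron volts per proton
        CIRCUMFERENCE_M = 26_659            # published LHC circumference, meters
        C = 299_792_458                     # speed of light, meters per second

        gamma = BEAM_ENERGY_EV / PROTON_REST_ENERGY_EV   # Lorentz factor, ~7,460
        beta = math.sqrt(1 - 1 / gamma**2)               # fraction of light speed
        revs_per_second = beta * C / CIRCUMFERENCE_M     # laps around the ring

        print(f"Speed: {beta * 100:.7f}% of c")                   # ~99.9999991%
        print(f"Revolutions per second: {revs_per_second:,.0f}")  # ~11,245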

    The protons enter the collider as atoms in a puff of hydrogen gas squirted from a bottle. As the atoms travel, electrical fields strip them of electrons, leaving bare, positively charged protons. These are sped up by a series of increasingly larger and more energetic electromagnets, until they are ready to enter the main ring of the collider.

    When protons finally enter the main ring, they have been boosted into flying bombs of primordial energy, primed to smash apart — and recombine — when they strike their opposite numbers head-on, coming from the other direction.

    The protons circulate inside vacuum pipes – one running clockwise, the other counterclockwise – and these are surrounded by superconducting electromagnets strung together around the tunnel like sausages. To generate enough force to bend the speeding protons, the magnets must be uncommonly strong: 8.3 Tesla, or more than a hundred thousand times stronger than Earth’s magnetic field — and more than strong enough to wreck a fancy Swiss watch.

    Such a field in turn requires an electrical current of 12,000 amperes. That’s only feasible if the magnets are superconducting, meaning that electricity flows through them without resistance. For that to happen, the magnets must be supercold; they are bathed in 150 tons of superfluid helium at a temperature of 1.9 Kelvin, making the Large Hadron Collider literally one of the coldest places in the universe.
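
    The 8.3 Tesla figure follows from the standard bending-field relation B = p / (0.2998 × ρ), with momentum p in GeV/c, bending radius ρ in meters and B in tesla. A minimal sketch; the effective bending radius of about 2.8 kilometers is a published LHC figure (the dipoles occupy only part of the ring), not one given in the article:

        # Dipole field needed to keep a 7 TeV proton on the LHC's curve.
        momentum_gev = 7000        # at these energies, momentum ~ energy in GeV/c
        bending_radius_m = 2804    # effective dipole bending radius (assumption)

        field_tesla = momentum_gev / (0.2998 * bending_radius_m)
        print(f"Required dipole field: {field_tesla:.1f} T")   # ~8.3 T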

    If things go wrong down here, they can go very wrong. In 2008, as the collider was still being tuned up, the link between a pair of magnets exploded, delaying operations for almost two years.

    The energy stored in the magnetic fields is equivalent to a fully loaded jumbo jet going 500 miles per hour; if a magnet loses its cool and heats up, all that energy must go someplace. And the proton beam itself can cut through many feet of steel.
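
    The jumbo-jet comparison checks out on the back of an envelope. In the sketch below, the jet’s mass is an assumption (roughly a fully loaded Boeing 747); the article supplies only the speed:

        # Kinetic energy of an assumed 400-tonne jet at 500 mph, for
        # comparison with the energy stored in the collider's magnets.
        jet_mass_kg = 400_000       # assumed fully loaded jumbo jet
        speed_m_s = 500 * 0.44704   # 500 mph converted to meters per second

        kinetic_energy_j = 0.5 * jet_mass_kg * speed_m_s**2
        print(f"Kinetic energy: {kinetic_energy_j / 1e9:.0f} GJ")   # ~10 GJ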

    A tale of four detectors

    The beams cross at four points around the racetrack.

    At each juncture, gigantic detectors — underground mountains of electronics, cables, computers, pipes, magnets and even more magnets — have been erected. The two biggest and most expensive experiments, CMS (the Compact Muon Solenoid) and Atlas (A Toroidal L.H.C. Apparatus) sit, respectively, at the noon and 6 o’clock positions of the circular track.

    Wrapped around them, like the layers of an onion, are instruments designed to measure every last spark of energy or matter that might spew from the collision. Silicon detectors track the paths of lightweight, charged particles such as electrons. Scintillation crystals capture the energies of gamma rays. Chambers of electrified gas track more far-flung particles. And powerful magnets bend the paths of these particles so that their charges and masses can be determined.

    The proton beams cross 40 million times per second in each of the four detectors, resulting in about a billion actual collisions every second.

    What’s the antimatter?

    Why is there something instead of nothing in the universe?

    Answering that question is the mission of the detector known as LHCb, which sits at about 4 o’clock on the collider dial. The “b” stands for beauty — and for the B meson, a subatomic particle that is crucial to the experiment.

    When matter is created — in a collider, in the Big Bang — equal amounts of matter and its opposite, antimatter, should be formed, according to the laws of physics As We Know Them. When matter and antimatter meet, they annihilate each other, producing energy.

    By that logic, when matter and antimatter formed in the Big Bang, they should have canceled each other out, leaving behind an empty universe. But it’s not empty: We are here, and our antimatter is not.

    Why not? Physicists suspect that some subtle imbalance between matter and antimatter is responsible. The LHCb experiment looks for that imbalance in the behavior of B mesons, which are often sprayed from the proton collisions.

    B mesons have an exotic property: They flicker back and forth between being matter and antimatter. Sensors record their passage through the LHCb room, seeking differences between the particles and their antimatter twins. Any discrepancy between the two could be a clue to why matter flourished billions of years ago and antimatter perished.

    Turning back the cosmic clock

    At about 8 o’clock on the collider dial is Alice, another detector with a special purpose. It, too, is fixed on the distant past: the brief moment a couple of microseconds after the Big Bang, before the first protons and neutrons congealed out of a “primordial soup” of quarks and gluons.

    Alice’s job is to study tiny droplets of that distant past that are created when the collider bangs together lead ions instead of protons. Researchers expected this material, known in the lingo as a quark-gluon plasma, to behave like a gas, but it turns out to behave more like a liquid.

    Sifting the data

    The collider’s enormous detectors are like 100-megapixel cameras that take 40 million pictures a second. Most of the data from that deluge is immediately thrown away. Triggers, programmed to pick out events that physicists thought might be interesting, save only about a thousand collision events per second. Even so, an enormous pool of data winds up in the CERN computer banks.
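
    Put together, those rates show the scale of the winnowing. A minimal sketch, with an assumed size of about one megabyte per saved event (a figure not from the article):

        # How hard the trigger filters, using the rates quoted above.
        collisions_per_s = 1e9       # proton collisions per second
        events_kept_per_s = 1_000    # events surviving the trigger
        event_size_bytes = 1e6       # assumed raw size of one saved event

        rejection = collisions_per_s / events_kept_per_s
        disk_rate = events_kept_per_s * event_size_bytes
        print(f"Trigger keeps 1 collision in {rejection:,.0f}")       # 1 in 1,000,000
        print(f"Data written: ~{disk_rate / 1e9:.1f} GB per second")  # ~1.0 GB/s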

    CERN DATA Center

    According to the casino rules of modern quantum physics, anything that can happen will happen eventually. Before a single proton is fired through the collider, computers have calculated all the possible outcomes of a collision according to known physics. Any unexpected bump in the real data at some energy could be a signal of unknown physics, a new particle.

    That was how the Higgs was discovered, emerging from the statistical noise in the autumn of 2011. Only one of every 10 billion collisions creates a Higgs boson. The Higgs vanishes instantly and can’t be observed directly, but it decays into fragments that can be measured and identified.

    What eventually stood out from the data was evidence for a particle that weighs all by itself as much as an iodine atom: a flake of an invisible force field that permeates space like molasses, impeding motion and assigning mass to objects that pass through it.

    And so in 2012, after half a century and billions of dollars, thousands of physicists toasted over champagne. Peter Higgs, for whom the elusive boson was named, shared the Nobel prize with François Englert, who had independently predicted the particle’s existence.

    Peter Higgs

    François Englert

    An intermission underground

    The current shutdown is the first of a pair of billion-dollar upgrades intended to boost the productivity of the Large Hadron Collider tenfold by the end of the decade.

    The first shutdown will last for two years, until 2021; during that time, engineers will improve the series of smaller racetracks that speed up protons and inject them into the main collider. The collider then will run for two years and shut down again, in 2024, for two more years, so that engineers can install new magnets to intensify the proton beams and collisions.

    Reincarnated in 2026 as the High Luminosity L.H.C., the collider is scheduled to run for another decade, until 2035 or so, which means its career probing the edge of human knowledge is still beginning.

    Judging by the collider’s productivity, measured in terms of trillions of subatomic smashups, more than 95 percent of its scientific potential lies ahead.

    Both the Atlas and CMS experiments will receive major upgrades during the next two shutdowns, including new silicon trackers to replace the old ones burned out by radiation.

    To keep up with the increased collision rate, both Atlas and CMS have had to upgrade the finicky trigger systems that decide which collision events to keep and study. Currently, of a billion events per second, they can keep 1,500; the upgrade will raise that figure to 10,000.

    And what a flow of collisions it will be. Physicists measure the productivity, or luminosity, of their colliders in terms of collisions. It took about 3,000 trillion collisions to confirm the Higgs boson. As of the December shutdown, the collider had logged about 20,000 trillion collisions. But those were, and are, early days.

    By 2037, the Large Hadron Collider should have produced roughly 4 million trillion primordial fireballs, bristling with who knows what. The whole universe is still up for grabs.
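
    Combining these totals with the one-in-10-billion Higgs rate quoted earlier gives a feel for what that flow means. A minimal sketch:

        # The article's luminosity arithmetic, in round numbers.
        collisions_so_far = 20_000e12   # logged as of the December shutdown
        collisions_by_2037 = 4e18       # ~4 million trillion projected
        higgs_per_collision = 1 / 10e9  # one Higgs per 10 billion collisions

        share_done = collisions_so_far / collisions_by_2037
        print(f"Share of projected total already delivered: {share_done:.1%}")            # 0.5%
        print(f"Higgs bosons so far: ~{collisions_so_far * higgs_per_collision:,.0f}")    # ~2,000,000
        print(f"Higgs bosons by 2037: ~{collisions_by_2037 * higgs_per_collision:,.0f}")  # ~400,000,000

    That half-percent share squares with the estimate above that more than 95 percent of the collider’s scientific potential still lies ahead.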

    After the Higgs

    Discovering the Higgs was an auspicious start. But the champagne came with a mystery.

    Over the last century, physicists have learned to explain some of the grandest and subtlest phenomena in nature — the arc of a rainbow, the scent of a gardenia, the twitch of a cat’s whiskers — as a handful of elementary particles interacting through four basic forces, playing a game of catch with force-carrying particles called bosons according to a set of equations called the Standard Model.

    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    But why these particles and these forces? Why is the universe made of matter but not antimatter? What happens at the center of a black hole, or happened at the first instant of the Big Bang? If the Higgs boson determines the masses of particles, what determines the mass of the Higgs?

    Who, in other words, watches the watchman?

    The Standard Model, for all its brilliance and elegance, does not say. Particles that might answer these questions have not shown up yet in the collider. Fabiola Gianotti, the director-general of CERN, expressed surprise. “I would have expected new physics to manifest itself at the energy scale of the Large Hadron Collider,” she said.

    Some physicists have responded by speculating about multiple universes and other exotic phenomena. Some clues, Dr. Gianotti said, might come from studying the new particle on the block, the Higgs.

    “We physicists are happy when we understand things, but we are even happier when we don’t understand,” she said. “And today we know that we don’t understand everything. We know that we are missing something important and fundamental. And this is very exciting.”

    Colliders of tomorrow

    Humans soon must decide which machines, if any, will be built to augment or replace the Large Hadron Collider. That collider had a “killer app” of sorts: it was designed to achieve an energy at which, according to the prediction of the Standard Model, the Higgs or something like it would become evident and provide an explanation for particle masses.

    But the Standard Model doesn’t predict a new keystone particle in the next higher energy range. Luckily, nobody believes the Standard Model is the last word about the universe, but as the machines increase in energy, particle physicists will be shooting in the dark.

    For a long time, the leading candidate for Next Big Physics Machine has been the International Linear Collider, which would fire electrons and their antimatter opposites, positrons, at each other.

    ILC schematic, being planned for the Kitakami highland, in the Iwate prefecture of northern Japan

    The collisions would produce showers of Higgs bosons. The experiment would be built in Japan, if it is built at all, but Japan has yet to commit to hosting the project, which would require it to pay about half of the $5.5 billion cost; see https://sciencesprings.wordpress.com/2018/12/21/from-nature-via-ilc-plans-for-worlds-next-major-particle-collider-dealt-big-blow.

    In the meantime, Europe has convened meetings and workshops to decide on a plan for the future of particle physics there. “If there is no word from Japan by the end of the year, then the I.L.C. will not figure in the next five-year plan for Europe,” Lyn Evans, a CERN physicist who was in charge of building the Large Hadron Collider, said in an email.

    CERN has proposed its own version of a linear collider, the Compact Linear Collider, that could be scaled up gradually from Higgs bosons to higher energies. Also being considered is a humongous collider, 100 kilometers around, that would lie under Lake Geneva and would reach energies of 100 trillion electron volts — seven times the power of the Large Hadron Collider.

    Cern Compact Linear Collider

    CLC map

    CLC TWO-BEAM ACCELERATION TEST STAND

    And in November the Chinese Academy of Sciences released the design for a next-generation collider of similar size, called the Circular Electron Positron Collider.

    China Circular Electron Positron Collider (CEPC) map

    China Circular Electron-Positron collider depiction

    The machine could be the precursor for a still more powerful machine that has been dubbed the Great Collider. Politics and economics, as well as physics, will decide which, if any, of these machines will see a shovel.

    “If we want a new machine, nothing is possible before 2035,” Frederick Bordry, CERN’s director of accelerators, said of European plans. Building such a machine is a true human adventure, he said: “Twenty-five years to build and another 25 to operate.”

    Noting that he himself is 64, he added, “I’m working for the young people.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     