Tagged: The Atlantic Magazine Toggle Comment Threads | Keyboard Shortcuts

  • richardmitnick 11:25 am on September 2, 2016 Permalink | Reply
    Tags: Psychiatry, The Atlantic Magazine

    From The Atlantic: “How Artificial Intelligence Could Help Diagnose Mental Disorders” 

    The Atlantic Magazine

    Aug 23, 2016
    Joseph Frankel

    People convey meaning by what they say as well as how they say it: Tone, word choice, and the length of a phrase are all crucial cues to understanding what’s going on in someone’s mind. When a psychiatrist or psychologist examines a person, they listen for these signals to get a sense of their wellbeing, drawing on past experience to guide their judgment. Researchers are now applying that same approach, with the help of machine learning, to diagnose people with mental disorders.

    In 2015, a team of researchers developed an AI model, published in a Nature Partner Journal, that correctly predicted which members of a group of young people would develop psychosis—a major feature of schizophrenia—by analyzing transcripts of their speech. This model focused on tell-tale verbal tics of psychosis: short sentences; a confusing, frequent use of words like “this,” “that,” and “a”; and a muddled sense of meaning from one sentence to the next.
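The published model relied on latent semantic analysis and syntactic features, but the verbal tics listed above can be approximated with plain transcript statistics. A toy sketch in Python (illustrative only; the flagged-word list and the feature definitions here are assumptions, not the researchers' actual feature set):

```python
import re

# Words whose overuse the study flagged; an illustrative, not exhaustive, set
FLAGGED_WORDS = {"this", "that", "a"}

def speech_features(transcript: str) -> dict:
    """Compute rough proxies for the verbal tics described above."""
    sentences = [s for s in re.split(r"[.!?]+", transcript) if s.strip()]
    tokens = [re.findall(r"[a-z']+", s.lower()) for s in sentences]
    tokens = [t for t in tokens if t]
    n_words = sum(len(t) for t in tokens)
    # 1. Short sentences: average words per sentence
    mean_len = n_words / len(tokens)
    # 2. Frequent use of words like "this," "that," and "a"
    flagged_rate = sum(w in FLAGGED_WORDS for t in tokens for w in t) / n_words
    # 3. Muddled sentence-to-sentence meaning: word overlap (Jaccard)
    #    between adjacent sentences as a crude coherence proxy
    pairs = list(zip(tokens, tokens[1:]))
    coherence = (sum(len(set(a) & set(b)) / len(set(a) | set(b))
                     for a, b in pairs) / len(pairs)) if pairs else 1.0
    return {"mean_sentence_length": mean_len,
            "flagged_word_rate": flagged_rate,
            "coherence": coherence}
```

A real screener would feed features like these into a trained classifier rather than reading them off directly.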

    Now, Jim Schwoebel, an engineer and CEO of NeuroLex Diagnostics, wants to build on that work to make a tool for primary-care doctors to screen their patients for schizophrenia. NeuroLex’s product would take a recording from a patient during the appointment via a smartphone or other device (Schwoebel has a prototype Amazon Alexa app) mounted out of sight on a nearby wall.

    Adriane Ohanesian / Reuters

    Using the same model from the psychosis paper, the product would then search a transcript of the patient’s speech for linguistic clues. The AI would present its findings as a number—like a blood-pressure reading—that a psychiatrist could take into account when making a diagnosis. And as the algorithm is “trained” on more and more patients, that reading could better reflect a patient’s state of mind.

    In addition to the schizophrenia screener, an idea that earned Schwoebel an award from the American Psychiatric Association, NeuroLex is hoping to develop a tool for psychiatric patients who are already being treated in hospitals. Rather than trying to help diagnose a mental disorder from a single sample, the AI would examine a patient’s speech over time to track their progress.

    For Schwoebel, this work is personal: he thinks this approach may help solve problems his older brother faced in seeking treatment for schizophrenia. Before his first psychotic break, Schwoebel’s brother would send short, one-word responses, or make cryptic references to going “there” or “here”—worrisome abnormalities that “all made sense” after his brother’s first psychotic episode, he said.

    According to Schwoebel, it took over 10 primary-care appointments before his brother was referred to a psychiatrist and eventually received a diagnosis. After that, he was put on one medication that didn’t work for him, and then another. In the years it took to get Schwoebel’s brother diagnosed and on an effective regimen, he experienced three psychotic breaks. This led Schwoebel to wonder how, in cases that call for medication, a person could be put on the right prescription, at the right dose, faster.

    To find out, NeuroLex is planning a “pre-post study” on people who’ve been hospitalized for mental disorders “to see how their speech patterns change during a psychotic stay or a depressive stay in a hospital.” Ideally, the AI would analyze sample recordings from a person under a mental health provider’s care “to see which drugs are working the best” in order “to reduce the time in the hospital,” Schwoebel said.

    If a person’s speech shows fewer signs of depression or bipolar disorder after being given one medication, this tool could help show that it’s working. If there are no changes, the AI might suggest trying another medication sooner, sparing the patient undue suffering. And, once it’s gathered enough data, it could recommend a medication based on what worked for other people with similar speech profiles. Automated approaches to diagnosis have been anticipated in the greater field of medicine for decades: one company claims that its algorithm recognizes lung cancer with 50 percent more accuracy than human radiologists.

    The possibility of bolstering a mental health clinician’s judgment with a more “objective,” “quantitative” assessment appeals to the Massachusetts General Hospital psychiatrist Arshya Vahabzadeh, who has served as a mentor for a start-up accelerator Schwoebel cofounded. “Schizophrenia refers to a cluster of observable or elicitable symptoms” rather than a catchall diagnosis, he said. With a large enough data set, an AI might be able to split diagnoses like schizophrenia into sharper, more helpful categories based on the common patterns it perceives among patients. “I think the data will help us subtype some of these conditions in ways we couldn’t do before.”

    As with any medical intervention, AI aids “have to be researched and validated. That’s my big kind of asterisk,” he said, echoing a sentiment I heard from Schwoebel. And while the psychosis predictor study demonstrates that speech analysis can predict psychosis reasonably well, it’s still just one study. And no one has yet published a proof-of-concept for depression or bipolar disorder.

    Machine learning is a hot field, but it still has a ways to go—both in and outside of medicine. To take one example, Siri has struggled for years to handle questions and commands from Scottish users. For mental health care, small errors like these could be catastrophic. “If you tell me that a piece of technology is wrong 20 percent of the time”—or 80 percent accurate—“I’m not going to want to deploy it to a patient,” Vahabzadeh said.

    This risk becomes more disturbing when considering age, gender, ethnicity, race, or region. If an AI is trained on speech samples that are all from one demographic group, normal samples outside that group might result in false positives.

    “If you’re from a certain culture, you might speak softer and at a lower pitch,” which an AI “might interpret as depression when it’s not,” Schwoebel said.

    Still, Vahabzadeh believes technology like this could someday help clinicians treat more people, and treat them more efficiently. And that could be crucial, given the shortage of mental-health-care providers throughout the U.S., he says. “If humans aren’t going to be the cost-effective solution, we have to leverage tech in some way to extend and augment physicians’ reach.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 1:59 pm on August 3, 2016 Permalink | Reply
    Tags: The Atlantic Magazine

    From The Atlantic: “There’s Probably Way More Zika in the United States Than Has Been Counted” 


    Aug 3, 2016
    Adrienne LaFrance

    Mosquito larvae are seen in Guangzhou, China, at the world’s largest “mosquito factory.”

    New computer modeling suggests the virus has been underestimated by tens of thousands of cases.

    Try as they might, public-health officials can’t really track the Zika virus in real time. There is inevitably a lag between how a disease spreads and when the public finds out about it.

    Even in Miami, where new updates are being issued every weekday, there’s only so much officials know about how quickly and widely Zika is traveling through the population.

    Then there are the unknowns that are harder to pin down: How many cases of Zika are going uncounted? It turns out that number may be enormous.

    Researchers at Northeastern University say federal-health officials are likely vastly undercounting Zika in the United States. In a paper that’s still under review for journal publication, they describe computer modeling that suggests there were nearly 30,000 cases of travel-related Zika in the country in mid-June, about 25 times more cases than the 1,200 or so reported by the CDC at the time.
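The scale of the discrepancy is easiest to see as a ratio. A quick back-of-the-envelope check using the two figures quoted above:

```python
reported = 1_200    # cases the CDC had counted by mid-June 2016
estimated = 30_000  # travel-related cases suggested by the Northeastern model

multiplier = estimated / reported     # 25x undercount
ascertainment = reported / estimated  # only ~4% of cases reported
print(f"undercount: {multiplier:.0f}x, ascertainment: {ascertainment:.0%}")
```

Put differently, if the model is right, for every confirmed case roughly 24 more went uncounted.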

    Researchers found the undercounting occurred in at least nine states: Florida, California, Texas, Georgia, Illinois, North Carolina, Ohio, Indiana, and Oregon.

    “CDC is doing a great job, but it is really hard to detect cases,” said Alessandro Vespignani, one of the authors of the paper. The federal agency is faced with an exceedingly difficult task, in part because it is cobbling together data from various monitoring systems in different states and jurisdictions. The nature of the virus presents additional challenges, making it more complicated to track than other epidemics. “You have to ingest much more data and deal with another level of complexity as well as other sources of uncertainties,” Vespignani said.

    Because Zika is transmitted by mosquitoes (as well as spread between humans), researchers trying to model or predict its path have to take into consideration the presence of certain mosquito species, mosquito populations in different areas, that population’s variability with weather conditions, and so on. (Northeastern’s computer model does not take sexual transmission of Zika into consideration, even though it’s one of the ways the virus is transmitted.)
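Models of this kind typically track infection as coupled human and mosquito compartments, with transmission scaled by biting activity and vector abundance. Below is a deliberately minimal, hypothetical sketch in the Ross-Macdonald tradition; the parameter names and values are illustrative assumptions, not taken from the Northeastern model, which is far more detailed (spatially explicit, travel-driven, and weather-dependent):

```python
def simulate(days, biting_rate, p_human, p_vector, recovery, vector_death,
             ih0=1e-4, iv0=0.0, dt=0.1):
    """Minimal coupled host-vector model (Ross-Macdonald style).

    ih and iv are the infected fractions of the human and mosquito
    populations; all parameters are per-day rates or probabilities.
    """
    ih, iv = ih0, iv0
    for _ in range(int(days / dt)):
        # humans infected by infectious mosquito bites, recovering at `recovery`
        dih = biting_rate * p_human * iv * (1 - ih) - recovery * ih
        # mosquitoes infected by biting infected humans, dying at `vector_death`
        div = biting_rate * p_vector * ih * (1 - iv) - vector_death * iv
        ih += dih * dt
        iv += div * dt
    return ih, iv

# Illustrative parameters only: an outbreak grows from a tiny seed
ih, iv = simulate(days=90, biting_rate=0.5, p_human=0.5, p_vector=0.5,
                  recovery=1/7, vector_death=1/10)
```

Even this toy version shows the qualitative behavior modelers rely on: infection grows only when biting and transmission outweigh human recovery and mosquito death, which is why vector abundance and weather matter so much.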

    Vespignani and his colleagues also used their model to predict how Zika will continue to move through the Americas through the end of 2016, based on how it has spread globally since 2013. (They also took into account the rate of transmission of Dengue in various regions, since that virus has much in common with Zika.)

    The modeling suggests that while the Zika epidemic has already peaked in Brazil, the number of cases is still growing rapidly in Puerto Rico, and will continue to climb well into the fall. And while the researchers say their findings should be interpreted cautiously, given the complexity of the modeling, they believe their projections offer important indications of “the magnitude and timing” of the epidemic as it progresses.

    Zhang et al.

    There are other computer-modeled predictions that could be useful—the estimated number of cases of Zika-related microcephaly, a brain defect in which newborns have abnormally small heads, for example. But modeling such outcomes, especially when so much remains unknown about Zika, is difficult if not impossible without more robust clinical data. “Models can be only as good as the data they ingest,” Vespignani said.

    For the CDC, good data may be the central challenge in tracking Zika. Because the agency only counts confirmed cases of the disease, and because people who catch Zika are usually asymptomatic, there are almost certainly a significant number of people who have had the virus without knowing it.

    “Like the [Northeastern University] team, when we work on estimating components of the epidemic, we try to understand the dynamics of infection relative to the available information, always under the assumption that what we ‘see’ through surveillance is only the tip of the iceberg,” said Michael Johansson, a biologist in the CDC Division of Vector-Borne Diseases, in a statement provided to The Atlantic by a spokesman. “Many infections are asymptomatic, some are mild with symptoms that do not cause people to seek care, some cases are mistaken as other diseases, and then we get to the diagnostics which are also challenging.”

    “All of those components contribute to many fewer cases being reported than the number of infections that actually occurs,” Johansson said.

    What does all of this mean for people who just want to protect themselves from the virus? Zika should be taken as the serious threat to public health that officials have said it is. Though many cases of Zika are mild, scientists are just beginning to understand how devastating it can be—including among children and adults sickened by the disease, not just fetuses. In Utah, one man died from the virus. (And officials still don’t understand how a family member who cared for him contracted Zika.)

    The CDC has clear guidelines on how people—particularly pregnant women—can protect themselves from the virus. Until scientists learn more about how Zika spreads and how it might be stopped, it’s important to understand it could be much more widespread than it appears.

    See the full article here.

    YOU CAN HELP FIND A CURE FOR THE ZIKA VIRUS.

    There is a new project at World Community Grid [WCG] called OpenZika.
    Zika depiction. Image copyright John Liebler, www.ArtoftheCell.com

    Rutgers Open Zika

    WCG runs on your home computer or tablet on software from the Berkeley Open Infrastructure for Network Computing [BOINC]. Many other scientific projects run on BOINC software. Visit WCG or BOINC, download and install the software, then at WCG attach to the OpenZika project. You will be joining tens of thousands of other “crunchers” processing computational data and saving the scientists literally thousands of hours of work at no real cost to you.

    This project is directed by Dr. Alexander Perryman, a senior researcher in the Freundlich lab, with extensive training in developing and applying computational methods in drug discovery and in the biochemical mechanisms of multi-drug resistance in infectious diseases. He is a member of the Center for Emerging & Re-emerging Pathogens in the Department of Pharmacology, Physiology, and Neuroscience at the Rutgers University New Jersey Medical School. Previously, he was a Research Associate in Prof. Arthur J. Olson’s lab at The Scripps Research Institute (TSRI), where he ran the day-to-day operations of the FightAIDS@Home project, the largest computational drug-discovery project devoted to HIV/AIDS, which also runs on WCG. While in the Olson lab, he also designed, led, and ran the largest computational drug-discovery project ever performed against malaria, the GO Fight Against Malaria project, also on WCG.



     
  • richardmitnick 7:54 pm on May 31, 2016 Permalink | Reply
    Tags: The Atlantic Magazine

    From The Atlantic: “Galaxy Evolution and the Meaning of Life” 


    May 31, 2016
    Ann Finkbeiner

    Robert Gendler / Roberto Colombari / Hubble Legacy Archive / Subaru Telescope

    Sometimes it’s best to ignore the big stuff.

    A few weeks ago I was at a conference about galaxy evolution. In the titles of many talks was the puzzling phrase, “secular evolution.” Secular? As opposed to religious? So secular evolution is galaxy evolution that’s not in the context of religion? Surely not. I stopped listening to the talks and googled “secular.” It’s Latin, meaning “belonging to a certain age,” as opposed to “infinite.” Not helping. I opted for the extreme measure of waiting for the coffee break and asking an astronomer.

    “Secular evolution” in galaxies turns out to require a little context. Years ago when I started writing about the origin and evolution of the universe, “galaxy evolution” was a matter of connecting some pretty dicey dots. Cosmologists looked at nearby galaxies, at more distant galaxies, at the galaxies so far away you nearly couldn’t see them. And assuming that most distant = farthest back in time = youngest, then those populations of nearby galaxies were grownups, the more distant were adolescents, and the far-away, babies.

    ____________________________________________________________________________________________


    Secular evolution is defined as slow, steady evolution. In galaxies, such evolution is either the result of long-term interactions between the galaxy and its environment (such as gas accretion or galaxy harassment), or it is induced by internal processes such as the actions of spiral arms or bars. Secular evolution therefore plays an important part in the formation of disk galaxies, with both the disk and bulge potentially involved, but is probably relatively unimportant in the formation of elliptical galaxies.

    The most easily recognisable example of secular evolution in disk galaxies is the formation of stars in the spiral arms. This is induced by the action of the spiral structure on the disk of the galaxy. Although evidence for secular evolution in bulges is a little less clear, young stars have been found in the centres of many galaxies, including the Milky Way. One explanation is that gas has been funnelled into the galaxy centre (perhaps through the action of a bar) and a centrally concentrated burst of star formation has resulted. This is believed to be one of the mechanisms for creating starburst galaxies. Another secular evolution process associated with bars is the growth of bulges through kinematic disturbance. In this scenario, the galactic bar perturbs the central disk stars out of their regular orbits, either creating or expanding the bulge.

    The relative importance of secular evolution in the formation of spiral galaxies (compared to the primordial collapse or hierarchical merging processes) is still an area of active research.
    ____________________________________________________________________________________________

    Cosmologists arranged these populations into an evolution: Galaxies began as little blue messes, spun up into sparkly spirals, collided and merged into unchanging ellipticals. Galaxy evolution was interesting partly because it showed the universe growing up. The universe that formed those galaxies was aging with them.

    But that was populations of galaxies, not individual galaxies themselves—demographics, not myelination and hormones and bones losing calcium. So what’s secular?

    Slowly, as observing instruments improved, cosmologists could see what was changing in the galaxies themselves: Stars were born and died, galactic centers changed shapes, black holes flared and faded, gas got breathed in and out. So this is secular evolution: It means life changes that are local, done for individual necessity, unrelated to anything external; life without reference to the Big Context.

    These days I’m deeply into living secularly. And I have rules. I research stories, find their structure, work out the sentences, meet the deadlines. Dinner with a friend, drinks with another one, lunch, conversations that go nowhere but end in sweetness. Buy a mattress, weed the garden, lighten the dirt, and plant tomatoes. Agree to community service regardless of how boneheaded and boring. Be careful of who to invite over. Don’t react to this egregiously dumb election, in particular, don’t get mad about what Hillary is and has always been up against, regardless of how unpleasant it’s made her; and don’t consider the gendered implications of “unpleasant.” Ignore the social death-wish of wealth inequality. Remember that some questions don’t have answers. Remember that some problems are complex and intransigent and won’t be solved and will only evolve. Whatever the context—the universe, God, evolution, politics, society, life—let it go its own way. Stay away from meaninglessness. Don’t think about getting old. Leave death alone.

    See the full article here.


     
  • richardmitnick 8:11 pm on December 20, 2015 Permalink | Reply
    Tags: Hacking, The Atlantic Magazine

    From The Atlantic: “Pop Culture Is Finally Getting Hacking Right” 


    Dec 1, 2015
    Joe Marshall

    USA

    Movies and TV shows have long relied on lazy and unrealistic depictions of how cybersecurity works. That’s beginning to change.

    The idea of a drill-wielding hacker who runs a deep-web empire selling drugs to teens seems like a fantasy embodying the worst of digital technology. It’s also, in the spirit of CSI: Cyber, completely ridiculous. So it was no surprise when a recent episode of the CBS drama outed its villain as a video-game buff who lived at home with his mother. For a series whose principal draw is watching Patricia Arquette yell, “Find the malware!”, that sort of stereotypical characterization and lack of realism is to be expected.

    But CSI: Cyber is something of an anomaly when it comes to portraying cybersecurity on the big or small screen. Hollywood is putting more effort into creating realistic technical narratives and thoughtfully depicting programming culture, breaking new ground with shows like Mr. Robot, Halt and Catch Fire, and Silicon Valley, and films like Blackhat. It’s a smart move, in part because audiences now possess a more sophisticated understanding of such technology than they did in previous decades. Cyberattacks, such as the 2013 incident that affected tens of millions of Target customers, are a real threat, and Americans generally have little confidence that their personal records will remain private and secure. The most obvious promise of Hollywood investing in technically savvy fiction is that these works will fuel a grassroots understanding of digital culture, including topics such as adblockers and surveillance self-defense. But just as important is a film and TV industry that sees the artistic value in accurately capturing a subject that’s relevant to the entire world.

    In some ways, cyberthrillers are just a new kind of procedural—rough outlines of the technical worlds only a few inhabit. But unlike shows based on lawyers, doctors, or police officers, shows about programmers deal with especially timely material. Perry Mason, the TV detective from the ’50s and ’60s, would recognize the tactics of Detective Lennie Briscoe from Law & Order, but there’s no ’60s hacker counterpart to talk shop with Mr. Robot’s Elliot Alderson. It’s true that what you can hack has changed dramatically over the past 20 years: The amount of information is exploding, and expanding connectivity means people can program everything from refrigerators to cars. But beyond that, hacking itself looks pretty much the same, thanks to the largely unchanging appearance and utility of the command-line—a text-only interface favored by developers, hackers, and other programming types.

    Laurelai Storm / Github

    So why has it taken so long for television and film to adapt and accurately portray the most essential aspects of programming? The usual excuse from producers and set designers is that it’s ugly and translates poorly to the screen. As a result, the easiest way to portray code in a movie has long been to shoot a green screen pasted onto a computer display, then add technical nonsense in post-production. Faced with dramatizing arcane details that most viewers at the time wouldn’t understand, the overwhelming temptation for filmmakers was to amp up the visuals, even if it meant creating something utterly removed from the reality of programming. That’s what led to the trippy, Tron-like graphics in 1995’s Hackers, or Hugh Jackman bravely assembling a wire cube made out of smaller, more solid cubes in 2001’s Swordfish.

    A scene from Hackers (MGM)

    A scene from Swordfish (Warner Bros.)

    But more recent depictions of coding are much more naturalistic than previous CGI-powered exercises in geometry. Despite its many weaknesses, this year’s Blackhat does a commendable job of representing cybersecurity. A few scenes show malware reminiscent of this decompiled glimpse of Stuxnet—the cyber superweapon created as a joint effort by the U.S. and Israel. The snippets look similar because they’re both variants of C, a popular programming language commonly used in memory-intensive applications. In Blackhat, the malware’s target was the software used to manage the cooling towers of a Chinese nuclear power plant. In real-life, Stuxnet was used to target the software controlling Iranian centrifuges to systematically and covertly degrade the country’s nuclear enrichment efforts.

    An image of code used in Stuxnet (Github)

    Code shown in Blackhat (Universal)

    In other words, both targeted industrial machinery and monitoring software, and both seem to be written in a language compatible with those ends. Meaning that Hollywood producers took care to research what real-life malware might look like and how it’d likely be used, even if the average audience member wouldn’t know the difference. Compared to the sky-high visuals of navigating a virtual filesystem in Hackers, where early-CGI wizardry was thought the only way to retain audience attention, Blackhat’s commitment to the terminal and actual code is refreshing.

    Though it gets the visuals right, Blackhat highlights another common Hollywood misstep when it comes to portraying computer science on screen: It uses programming for heist-related ends. For many moviegoers, hacking is how you get all green lights for your getaway car (The Italian Job) or stick surveillance cameras in a loop (Ocean’s Eleven, The Score, Speed). While most older films frequently fall into this trap, at least one action hacker flick sought to explore how such technology could affect society more broadly, even if it fumbled the details. In 1995, The Net debuted as a cybersecurity-themed Sandra Bullock vehicle that cast one of America’s sweethearts into a Kafkaesque nightmare. As part of her persecution at the hands of the evil Gatekeeper corporation, Bullock’s identity is erased from a series of civil and corporate databases, turning her into a fugitive thanks to a forged criminal record. Technical gibberish aside, The Net was ahead of its time in tapping into the feeling of being powerless to contradict an entrenched digital bureaucracy.

    It’s taken a recent renaissance in scripted television to allow the space for storytellers to focus on programming as a culture, instead of a techy way to spruce up an action movie. And newer television shows have increasingly been able to capture that nuance without sacrificing mood and veracity. While design details like screens and terminal shots matter, the biggest challenge is writing a script that understands and cares about programming. Mr. Robot, which found critical success when it debuted on USA this summer, is perhaps the most accurate television show ever to depict cybersecurity. In particular, programmers have praised the show’s use of terminology, its faithful incorporation of actual security issues into the plot, and the way its protagonist uses real applications and tools. The HBO comedy series Silicon Valley, which was renewed for a third season, had a scene where a character wrote out the math behind a new compression algorithm. It turned out to be fleshed-out enough that a fan of the show actually recreated it. And even though a show like CSI: Cyber might regularly miss the mark, it has its bright spots, such as an episode about car hacking.

    There’s a more timeless reason for producers and writers to scrutinize technical detail: because it makes for good art. “We’re constantly making sure the verisimilitude of the show is as impervious as possible,” said Jonathan Lisco, the showrunner for AMC’s Halt and Catch Fire, a drama about the so-called Silicon Prairie of 1980s Texas. The actress Mackenzie Davis elaborated on the cachet such specificity could lend a show: “We need the groundswell of nerds to be like, ‘You have to watch this!’” The rise of software development as a profession means a bigger slice of the audience can now tell when a showrunner is phoning it in, and pillory the mistakes online. But it’s also no coincidence that Halt and Catch Fire is on the same network that was once home to that other stickler for accuracy—Mad Men.

    Rising technical literacy and a Golden Age of creative showrunners have resulted in a crop of shows that infuse an easy but granular technical understanding with top-notch storytelling. Coupling an authentic narrative with technical aplomb can allow even average viewers to intuitively understand high-level concepts that hold up under scrutiny. And even if audiences aren’t compelled to research on their own, the rough shape of a lesson can still seep through—like how cars are hackable, or the importance of guarding against phishing and financial fraud. But above all, more sophisticated representations of hacking make for better art. In an age of black mirrors, the soft glow of an open terminal has never radiated more promise.

    See the full article here.


     
  • richardmitnick 7:46 pm on December 20, 2015 Permalink | Reply
    Tags: The Atlantic Magazine

    From The Atlantic: “The Chilling Regularity of Mass Extinctions” 


    Nov 3, 2015
    Adrienne LaFrance

    An artist’s rendering of an asteroid or comet striking Earth. Andrea Danti / Shutterstock

    One thing we know for sure is that conditions on Earth were, shall we say, unpleasant for the dinosaurs at the moment of their demise. Alternate and overlapping theories suggest the great beasts were pelted with monster comets, drowned by mega-tsunamis, scorched with lava, starved by a landscape stripped of vegetation, blasted with the radiation of a dying supernova, cloaked in decades of darkness, and frozen in an ice age.

    Now, a pair of researchers have new evidence to support a link between cyclical comet showers and mass extinctions, including the one that they believe wiped out the dinosaurs 66 million years ago. Michael Rampino, a geologist at New York University, and Ken Caldeira, an atmospheric scientist at the Carnegie Institution for Science, traced 260 million years of mass extinctions and found a familiar pattern: Every 26 million years, there were huge impacts and major die-offs. Their work was accepted by the Monthly Notices of the Royal Astronomical Society in September.

    In recent decades, researchers using other methods have found evidence for a 26-million-year cycle of extinction on Earth, but the idea has remained controversial and unexplained. “I believe that our study, using revised dating of extinctions and craters, and a new method of spectral analysis, is strong evidence for the cycles,” Rampino told me.
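The paper's spectral analysis is more sophisticated, but the core question, whether event ages cluster at a trial period, can be illustrated with a simple Rayleigh-type statistic. The event ages below are invented for the demonstration and are not the paper's data:

```python
import math

def phase_clustering(ages_myr, period):
    """Rayleigh-style statistic: mean resultant length of event phases
    for a trial period. 1.0 = perfectly periodic, ~0 = no clustering."""
    phases = [2 * math.pi * (t % period) / period for t in ages_myr]
    c = sum(math.cos(p) for p in phases) / len(phases)
    s = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(c, s)

# Perfectly periodic (hypothetical) ages score exactly 1.0 ...
exact = [26 * k for k in range(1, 11)]
score_exact = phase_clustering(exact, 26)  # = 1.0
# ... and the same ages with a little jitter still score high at 26 Myr,
# but low at a period that does not fit
jittered = [25, 54, 79, 103, 131, 155, 184, 207, 235, 259]
score_jittered = phase_clustering(jittered, 26)
```

Sweeping the trial period and looking for peaks in a statistic like this, while correcting for dating uncertainty, is the essence of hunting for a 26-million-year cycle.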

    Other scientists who have researched mass extinctions are more measured about the latest findings. “I’m sort of agnostic [about the larger theory],” said Paul Renne, the director of the Berkeley Geochronology Center. “But I was really disappointed to see they used an age-database for the craters which is full of outdated information.”

    Renne is the author of another new study that focuses on the Chicxulub crater, the massive divot beneath the Yucatán Peninsula that was created by the same impact blamed for the extinction of the dinosaurs.

    Chicxulub crater. NatGeo illustration by Detlev van Ravenswaay, Science Source

    Renne and his colleagues believe that the comet or asteroid that blasted into Earth and made Chicxulub also set off a global chain-reaction of volcanic eruptions that accelerated the end of the dinosaurs. Volcanoes were, they believe, erupting continuously for millions of years. Long enough to make Hawaii’s Kilauea, which has been flowing since 1983, seem laughable. (“Kilauea is nothing,” Renne told me. “Kilauea is a flea.”)

    And while Renne is interested in the possibility that volcanism is tied to intervals of mass extinction, that possible connection doesn’t explain what kind of cycles might trigger the awakening of Earth’s most powerful magma systems on a global scale. That’s where theories about galactic periodicity come back into play.

    “One of the earliest proponents of a periodic record [of mass extinction] was by a guy named Rich Muller,” Renne said. “He proposed a kind of phenomenological periodicity in which they didn’t really have a mechanism.” In other words, Muller found the 26-million-year pattern of mass extinctions on Earth, but didn’t immediately know what drove the cycle.

    The latest findings from Rampino and Caldeira build on the idea that regular comet showers cause intervals of mass extinctions. The showers, the theory goes, are triggered by the movement of the sun and planets through the crowded mid-plane of our galaxy. As the sun crosses that region, it disrupts great clouds of space dust. Those clouds, in turn, throw off the orbit of comets, sending them careening toward Earth.

    In another theory, planetary scientists suggested that one region of the solar system in particular, known as the Oort comet cloud, plays a key role in mass extinctions.

    Kuiper Belt and Oort Cloud.

    The Oort cloud is a sprawling region at the border of our solar system that contains trillions of icy bodies. Muller put forth a popular hypothesis in the 1980s that said our sun has a sort of evil twin in the Oort cloud. This hypothetical star, he suggested, has an orbital cycle such that it would perturb its neighboring objects, and send 1 billion of them hurtling toward Earth every 26 million years. The star, a binary to the sun, was nicknamed Nemesis, and playfully referred to as the death star. “The binary star, or Nemesis theory, was an alternate to the Galactic-plane story,” Rampino told me. “But the star was looked for, but never found, so Nemesis theory is out of favor now.”

    “[Muller] doesn’t even believe that anymore,” Renne told me.

    If Rampino and Caldeira are correct, the next mass extinction may not be far off—in geologic terms, anyway. Our little corner of the solar system crossed the plane about 2 million years ago, and we are now moving up and through it. “In the Galactic theory, we are near the Galactic plane, and we have been in the danger zone for a couple of million years,” Rampino said. “We are still close to the plane, maybe 30 light years above the plane, [and] a light year is 6 trillion miles … We won’t come back across the plane for about another 30 million years.”

    And while scientists can’t be sure when the next major comet or asteroid impact on Earth will be, the one that is believed to have killed the dinosaurs still stands out as extraordinary, even by mass-extinction standards. The city-sized asteroid that created Chicxulub, for instance, released more energy than 1 billion nuclear bombs when it hit the Earth.

    “There hasn’t been an impact large enough to cause a major mass extinction since the impact 66 million years ago,” Rampino said. “That was a 10-kilometer [six-mile] diameter asteroid or comet. The largest impactor in the last 66 million years was only 5 kilometers [3 miles] in diameter, which only has one-tenth the energy, so it probably wouldn’t have taken out the dinosaurs. In fact, if a 10-kilometer-sized object had hit in the last 66 million years, we wouldn’t be here. Our ancestors probably would have been knocked out.”
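Rampino's "one-tenth the energy" figure follows from simple scaling: for impactors of the same density and speed, kinetic energy is proportional to mass, and mass grows with the cube of the diameter, so halving the diameter cuts the energy to one-eighth, roughly the one-tenth he quotes. A minimal check:

```python
def impact_energy_ratio(d_small_km, d_large_km):
    """Energy ratio of two impactors with equal density and speed:
    kinetic energy scales with mass, i.e. with diameter cubed."""
    return (d_small_km / d_large_km) ** 3

print(impact_energy_ratio(5, 10))  # 0.125, i.e. roughly "one-tenth"
```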

    The obvious next question, of course, is how do we prevent the terrible fate the dinosaurs suffered? “These events are so rare in geologic time that the odds of even our great-great-great-great-great-great-grandchildren witnessing them are really low,” Renne said. “The ultimate proof, which is observation, is not going to be available to us, unfortunately.” (Or fortunately, depending on your priorities.)

    In the meantime, scientists are actively scouring the skies, and calculating the orbits of monstrous comets and asteroids. “So far, none are on a collision course, but the work has just begun in earnest,” Rampino said. “Once we know one is coming, then there are several options to divert the object. (You don’t want to blow it up, that will just increase the numbers of impactors.) One possibility is to have a nuclear explosion off to one side of the comet or asteroid, pushing it just slightly off course, or possibly just hitting the object with a rapidly moving space-craft would provide enough of a nudge.”

See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 11:34 am on December 20, 2015 Permalink | Reply
    Tags: , Creationism, The Atlantic Magazine   

    From The Atlantic: “The Evolution of Teaching Creationism in Public Schools” 

    Atlantic Magazine

    The Atlantic Magazine

    12.20.15
    Eric Jaffe

    A new study shows that anti-evolution lessons have become more stealthily integrated into curricula.

    AP

    Some 90 years out from the Scopes Monkey Trial, and a full decade after the legal defeat of “intelligent design” in Kitzmiller v. Dover, the fight to teach creationism alongside evolution in American public schools has yet to go extinct. On the contrary, a new analysis in the journal Science suggests that such efforts have themselves evolved over time—adapting into a complex form of stealth creationism that’s steadily tougher to detect.

    Call it survival of the fittest policy.

    “It is one thing to say that two bills have some resemblances, and another thing to say that bill X was copied from bill Y with greater than 90 percent probability,” Nick Matzke, a researcher at the Australian National University and author of the new paper, tells CityLab via email. “I do think this research strengthens the case that all of these bills are of a piece—they are all ‘stealth creationism,’ and they all have either clear fundamentalist motivations, or are close copies of bills with such motivations.”

Matzke performed a close textual analysis of 67 anti-evolution education bills proposed in state legislatures since 2004. (Three U.S. states have signed them into law during this time: Mississippi, Louisiana, and Tennessee.) The result was a phylogenetic tree—in effect a developmental history—tracing these policies to two main legislative roots: so-called “academic freedom acts,” and “science education acts.”

This phylogenetic tree traces most of the 67 anti-evolution education policies proposed in U.S. states since 2004 to two main roots: one group of “academic freedom acts,” and another of “science education acts.” (Nicholas J. Matzke / via Science)
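Matzke used formal phylogenetic methods borrowed from evolutionary biology; the article doesn't reproduce them, but the core signal they exploit, that copied bills share long runs of near-identical text, can be illustrated with a toy pairwise-similarity measure. The bill snippets below are invented stand-ins, not the real legislation, and `SequenceMatcher` is a crude proxy for the actual analysis:

```python
from difflib import SequenceMatcher

# Toy stand-ins for bill texts (not the real bills)
bills = {
    "A": "teachers may help students critique the scientific strengths and weaknesses of evolution",
    "B": "teachers shall help students review the scientific strengths and weaknesses of evolution",
    "C": "school lunches shall include at least one serving of vegetables per day",
}

def similarity(x, y):
    """Ratio of matching characters between two texts, 0.0 to 1.0."""
    return SequenceMatcher(None, x, y).ratio()

pairs = {(a, b): similarity(bills[a], bills[b])
         for a in bills for b in bills if a < b}
# Copied bills score far higher than unrelated ones:
for pair, score in sorted(pairs.items(), key=lambda kv: -kv[1]):
    print(pair, round(score, 2))
```

Clustering bills by scores like these, highest similarity first, is what lets an analyst say "bill X was copied from bill Y with greater than 90 percent probability" rather than merely noting a resemblance.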

    Matzke’s analysis shows that academic-freedom acts were popular in 2004 and 2005 but have since been “almost completely replaced” with science-education acts, which emerged as the strategy of choice after the adoption of a 2006 anti-evolution policy in Ouachita Parish, Louisiana. (That policy’s lingering impact on creationist teaching was thoroughly exposed by Zack Kopplin in Slate earlier this year.) These acts tend to call for “critical analysis” of scientific topics that are supposedly controversial among experts. They often lump evolution alongside research areas like climate change and human cloning—an effort, argues Matzke, to skirt legal precedents that connect policies solely targeting evolution with unconstitutional religious motivations.

“Kitzmiller was mostly about policies that specifically mention intelligent design,” says Matzke, who uncovered an explicit link between creationism and intelligent design during that case while working for the U.S.-based National Center for Science Education. “If a policy encourages evolution bashing, and has the same sorts of sponsors and fundamentalist motivations, but doesn’t mention intelligent design or creationism, is it unconstitutional? If I were a judge, I would say ‘yes, obviously,’ but judges have all sorts of different philosophies and political biases.”

    Another marker of science-education acts is that they typically go out of their way to note that only scientific information, not religious doctrine, is protected by the policy. That’s a revealingly insecure stipulation, given that U.S. public schools are secular arenas by default.

A school bus collects students at Dover Area High School in 2005; the Pennsylvania town’s school system was the focus of the 2005 court ruling that teaching “intelligent design” as a scientific concept was unconstitutional. (AP / Carolyn Kaster)

    A review of six anti-evolution education bills proposed at the state level in 2015 shows many of these legislative tactics on display:

    Missouri. This act looks for ways “to assist teachers to find more effective ways to present the science curriculum where it addresses scientific controversies”—with “biological evolution” among them.
    Montana. This House bill pushes “critical thinking” in science class on the grounds that “truth in education about claims over scientific discoveries, including but not limited to biological evolution, the chemical origins of life, random mutation, natural selection, DNA, and fossil discoveries, can cause controversy.”
    South Dakota. State S.B. 114 gives teachers full freedom to help students “understand, analyze, critique, or review in an objective manner the scientific strengths and scientific weaknesses of existing scientific theories,” with “biological evolution” mentioned as one such theory alongside global warming.
    Oklahoma. This bill—creating a “Science Education Act”—urges teachers to help students develop “critical thinking skills” about unidentified “controversial issues” with the understanding that it “not be construed to promote any religious or non-religious doctrine.”
    Alabama. Much in line with the aforementioned legislative efforts, the sponsor of this bill, Representative Mack Butler, betrayed its intentions when he noted on Facebook that its aim was to “encourage debate if a student has a problem learning he came from a monkey rather than an intelligent design!”
    Indiana. S.B. 562 only mentions “human cloning” as a controversial scientific subject, but its mission to help students “understand, analyze, critique, and review in an objective manner the scientific strengths and weaknesses of existing conclusions,” with its caveat about protecting “only the teaching of scientific information,” tags it firmly within the science education act lineage.

    That none of these bills passed in 2015 isn’t the point, says Matzke. After all, several states have adopted such policies into law in recent years, placing millions of public-school children in the hands of educators who might promote creationist alternatives to evolution, either because they have religious motivations themselves or simply weak scientific backgrounds. That Matzke’s analysis links two such laws to science education act language—those in Louisiana and Tennessee—is evidence enough that anti-evolution policies can, indeed, adapt to new times.

    “Successful policies have a tendency to spread,” he says. “Every year, some states propose these policies, and often they are only barely defeated. And obviously, sometimes they pass, so hopefully this article will help raise awareness of the dangers of the ongoing situation.”

See the full article here.


     
  • richardmitnick 11:48 am on November 28, 2015 Permalink | Reply
    Tags: , , Science vs Religion, The Atlantic Magazine   

    From The Atlantic: “Scientific Faith Is Different From Religious Faith” 

    Atlantic Magazine

    The Atlantic Magazine

    Nov 24, 2015
    Paul Bloom

    If you want to annoy a scientist, say that science isn’t so different from religion. When Ben Carson was challenged about his claim that [Charles] Darwin was encouraged by the devil, he replied, “I’m not going to denigrate you because of your faith, and you shouldn’t denigrate me for mine.” When the literary theorist Stanley Fish chastised atheists such as Richard Dawkins, he wrote, “Science requires faith too before it can have reasons,” and described those who don’t accept evolution as belonging to “a different faith community.”

    Scientists are annoyed by these statements because they suggest that science and religion share a certain epistemological status. And, indeed, many humanists and theologians insist that there are multiple ways of knowing, and that religious narratives exist alongside scientific ones, and can even supersede them.

    It is true that scientists take certain things on faith. It is also true that religious narratives might speak to human needs that scientific theories can’t hope to satisfy.

    And yet, scientific practices—observation and experiment; the development of falsifiable hypotheses; the relentless questioning of established views—have proven uniquely powerful in revealing the surprising, underlying structure of the world we live in, including subatomic particles, the role of germs in the spread of disease, and the neural basis of mental life.

    Religion has no equivalent record of discovering hidden truths.

    So why do so many people believe otherwise? It turns out that while science and religion are as different as can be, folk science and folk religion share deep properties. Most of us carry in our heads a hodgepodge of scientific views and religious views, and they often feel the same—because they are learned, understood, and mentally encoded in similar ways.

In the first article that I ever published for The Atlantic, I argued that many religious beliefs arise from universal modes of thought that have evolved for reasoning about the social world. We are sensitive to signs of agency, which explains the animism that grounds the original religions of the world. We are naturally prone to infer intelligent design when we see complex structure, which makes creationism more appealing than natural selection. We are intuitive dualists, and so the idea of an immaterial soul just makes sense—or at least more sense than the notion that our minds are the products of our physical brains.

    I’ve continued to develop this theory with my students at Yale, doing experiments with children and atheists and adults across a range of cultures, and I still think that it is correct. But I’ve also come to see how incomplete this perspective is.

    There are many religious views that are not the product of common-sense ways of seeing the world. Consider the story of Adam and Eve, or the virgin birth of Christ, or Muhammad ascending to heaven on a winged horse. These are not the product of innate biases. They are learned, and, more surprisingly, they are learned in a special way.

    To come to accept such religious narratives is not like learning that grass is green or that stoves can be hot; it is not like picking up stereotypes or customs or social rules. Instead, these narratives are acquired through the testimony of others, from parents or peers or religious authorities. Accepting them requires a leap of faith, but not a theological leap of faith. Rather, a leap in the mundane sense that you must trust the people who are testifying to their truth.

    Many religious narratives are believed without even being understood. People will often assert religious claims with confidence—there exists a God, he listens to my prayers, I will go to Heaven when I die—but with little understanding, or even interest, in the details. The sociologist Alan Wolfe observes that “evangelical believers are sometimes hard pressed to explain exactly what, doctrinally speaking, their faith is,” and goes on to note that “These are people who believe, often passionately, in God, even if they cannot tell others all that much about the God in which they believe.”

People defer to authorities not just on the truth of religious beliefs, but on their meaning as well. In a recent article, the philosopher Neil Van Leeuwen calls these sorts of mental states “credences,” and he notes that they have a moral component. We believe that we should accept them, and that others—at least those who belong to our family and community—should accept them as well.

    None of this is special to religion. Researchers have studied those who have strong opinions about political issues and found that they often literally don’t know what they are talking about. Many people who take positions on cap and trade, for instance, have no idea what cap and trade is. Similarly, many of those who will insist that America spends too much, or too little, on foreign aid, often don’t know how much actually is spent, as either an absolute amount or proportion of GDP. These political positions are also credences, and one who holds them is just like someone who insists that the Ten Commandments should be the bedrock of morality, but can’t list more than three or four of them.

    ___________________________________________________________

    It’s better to get a cancer diagnosis from a radiologist than from a Ouija Board.
    ___________________________________________________________

    Many scientific views endorsed by non-specialists are credences as well. Some people reading this will say they believe in natural selection, but not all will be able to explain how natural selection works. (As an example, how does this theory explain the evolution of the eye?) It turns out that those who assert the truth of natural selection are often unable to define it, or, worse, have it confused with some long-rejected pre-Darwinian notion that animals naturally improve over time.

    There are exceptions, of course. There are those who can talk your ear off about cap and trade, and can delve into the minutiae of selfish gene theory and group selection. And there are people of faith who can justify their views with powerful arguments.

    But much of what’s in our heads are credences, not beliefs we can justify—and there’s nothing wrong with this. Life is too brief; there is too much to know and not enough time. We need epistemological shortcuts.

    Given my day job, I know something about psychology and associated sciences, but if you press me on the details of climate change, or the evidence about vaccines and autism, I’m at a loss. I believe that global warming is a serious problem and that vaccines do not cause autism, but this is not because I have studied these issues myself.

    It is because I trust the scientists.

    Most of those who insist that the Earth is 6000 years old and that global warming is a liberal fraud and that vaccines destroy children’s brains would also be at a loss to defend these views. Like me, they defer, just to different authorities.

    This equivalence might lead to a relativist conclusion—you have your faith; I have mine. You believe weird things on faith (virgin birth, winged horse); I believe weird things on faith (invisible particles, Big Bang), and neither of us fully understands what we’re really talking about. But there is a critical difference. Some sorts of deference are better than others.

    It’s better to get a cancer diagnosis from a radiologist than from a Ouija Board. It’s better to learn about the age of the universe from an astrophysicist than from a Rabbi. The New England Journal of Medicine is a more reliable source about vaccines than the actress Jenny McCarthy. These preferences are not ideological. We’re not talking about Fox News versus The Nation. They are rational, because the methods of science are demonstrably superior at getting at truths about the natural world.

I don’t want to fetishize science. Sociologists and philosophers deserve a lot of credit for reminding us that scientific practice is permeated by groupthink, bias, and financial, political, and personal motivations. The physicist Richard Feynman once wrote that the essence of science was “bending over backwards to prove ourselves wrong.” But he was talking about the collective cultural activity of science, not scientists as individuals, most of whom prefer to be proven right, and who are highly biased to see the evidence in whatever light most favors their preferred theory.

    But science as an institution behaves differently than particular scientists. Science establishes conditions where rational argument is able to flourish, where ideas can be tested against the world, and where individuals can work together to surpass their individual limitations. Science is not just one “faith community” among many. It has earned its epistemological stripes. And when the stakes are high, as they are with climate change and vaccines, we should appreciate its special status.

See the full article here.


     
  • richardmitnick 6:29 am on March 13, 2013 Permalink | Reply
    Tags: , , The Atlantic Magazine, The End of Big Science   

    In the Atlantic Magazine: “The Sequester Is Going to Devastate U.S. Science Research for Decades” 

    Atlantic Magazine

    This is copyright protected material, so just enough exploration to get you interested.
    Mar 12 2013

By Paul Alivisatos, director of Lawrence Berkeley National Laboratory; Eric D. Isaacs, director of Argonne National Laboratory; and Thom Mason, director of Oak Ridge National Laboratory.

    “Cutting the meager amount the federal government spends on basic science would do little to meet short-term fiscal goals while incurring huge costs in the future.

    Reuters/The Atlantic

Less than one percent of the federal budget goes to fund basic science research — $30.2 billion out of the total of $3.8 trillion President Obama requested in fiscal year 2012. By slashing that fraction even further, the government will achieve short-term savings in the millions this year, but the resulting gaps in the innovation pipeline could cost billions of dollars and hurt the national economy for decades to come.

    As directors of the Department of Energy’s National Laboratories, we have a responsibility both to taxpayers and to the thousands of talented and committed men and women who work in our labs.
Instead, this drop in funding will force us to cancel all new programs and research initiatives, probably for at least two years. This sudden halt on new starts will freeze American science in place while the rest of the world races forward…”

    That should be enough to pique your interest. See the full article here.

     
    • jaksichja 8:29 pm on March 14, 2013 Permalink | Reply

      Reblogged this on The Silent Astronomer and commented:
      This is a very nice post–I had received a heads-up e-mail from the APS asking to call my two Senators to roll back the House’s acquiescence of sequestration. Very timely. Thanks!

      Like
