February 2015 Archives

Sat Feb 28 21:36:48 EST 2015

Items of Interest

Various web links I found to be of interest recently:

  • A Skeptic's View of Pharmaceutical Progress

    To obtain a balanced view of pharmaceutical progress (or lack thereof), we need to step back, define a few terms and concepts, and make explicit certain assumptions.

    There is also no doubt that some companies have flagrantly covered up negative data. In some cases, after being "caught" the companies paid hundreds of millions of dollars in fines or, in one recent case, settled with harmed patients for $5 billion (Singer 2009).
    Almost everyone outside the industry feels an excessive amount of money is spent on misleading advertising--especially for drugs like those in table 5 that would not sell themselves. Also, the use of ghost writers and excessive payments to thought leaders, florid conflicts of interest, and payments to practicing physicians to encourage specific drug use clearly occur (see table 1). These practices should be outlawed (Steinbrook 2009).
    Finally, scientifically worthless seeding studies (i.e., studies that do not test a hypothesis but are meant to familiarize physicians with the drug with the intent of increasing sales) may be on the wane, as is publishing only positive data and encouraging biased talks and literature. The press, academicians, journals, and public have wisely cracked down and lampooned such practices endlessly.
    However, I submit that incredible good has been done by the drugs and vaccines in tables 3 and 4 (and many others not mentioned because of space limitations, like erythropoietin for certain types of anemia).

    See the full article for more of the positives (based on facts). Overall, I think it's a good, skeptical, nuanced analysis.

  • Fluoridation
    Three-part series by Discover magazine blogger George Johnson.

    "These results do not allow us to make any judgment regarding possible levels of risk at levels of exposure typical for water fluoridation in the U.S. On the other hand, neither can it be concluded that no risk is present."
    Which is the problem with all of these issues: you can never prove a negative. And that opens the door for truthiness with numbers. Scienciness.

  • Should Unprovable Physics Be Considered Philosophy?

    In some large part, science is powerful not because of ideas but because of how it treats ideas. Science asks, prove it. The distinction is what separates science from philosophy: falsifiable claims and experimentation.
    . . . String theory and the multiverse are concepts that by definition defy experimentation, and yet a small movement within cosmology is attempting to make the case that they should be exempt. At stake, according to Ellis and Silk, is the integrity of science itself.
    . . .
    The scientific high ground is at stake, with an ocean of pseudoscientists ready to flood the landscape, taking the public with them. The answer, according to the current paper, lies in a simple question: What observational or experimental evidence is there that would convince a theorist that their theory is wrong? If there is none, then the theory is not a scientific theory.

  • The Key to Science (and Life) Is Being Wrong

    In order to recognize wrongness, scientists must maintain some level of detachment from their cherished theories and be open to the ideas of others in their respective fields.
    ... Wrongness is something we all secretly or openly dread. According to self-described "Wrongologist" Kathryn Schulz, in the abstract we all understand that we're fallible, but on the personal level we leave little to no room for being wrong.

  • The Dark Science Of Interrogation

    How to find out anything from anyone

    Hundreds of studies have shown that interrogators would be just as well off flipping a coin

  • The Value of Violence

    Ginsberg's book is a direct challenge to the optimism of the celebrated cognitive psychologist Steven Pinker, whose 2011 book, The Better Angels of Our Nature, argued that violence is playing a diminishing role in human affairs. Ginsberg counters that violence is essential both to transformational change and to the preservation of political and social order.

    Also see Ginsberg's article Why Violence Works in The Chronicle of Higher Education.

  • The Preference for Potential

    Paper from Stanford University and Harvard Business School

    When people seek to impress others, they often do so by highlighting individual achievements. Despite the intuitive appeal of this strategy, we demonstrate that people often prefer potential rather than achievement when evaluating others. Indeed, compared with references to achievement (e.g., "this person has won an award for his work"), references to potential (e.g., "this person could win an award for his work") appear to stimulate greater interest and processing, which can translate into more favorable reactions. This tendency creates a phenomenon whereby the potential to be good at something can be preferred over actually being good at that very same thing. We document this preference for potential in laboratory and field experiments, using targets ranging from athletes to comedians to graduate school applicants and measures ranging from salary allocations to online ad clicks to admission decisions.


Posted by mjm | Permanent link | Comments

Fri Feb 20 14:23:45 EST 2015

Artificial Intelligence

Several well-known smart people have been voicing fears about artificial intelligence being a threat to humanity. For example,

But I fail to see any mention of a timeline. Is it decades, centuries, or millennia? Predictions without a date attached are meaningless, since they can never be proven wrong. I disagree with the above, so it's nice to see that plenty of others do as well.

  • No, the robots are not going to rise up and kill you

    David W. Buchanan, a member of IBM's Watson "Jeopardy!" system team

    Science fiction is partly responsible for these fears. A common trope works as follows: Step 1: Humans create AI to perform some unpleasant or difficult task. Step 2: The AI becomes conscious. Step 3: The AI decides to kill us all. As science fiction, such stories can be great fun. As science fact, the narrative is suspect, especially around Step 2, which assumes that by synthesizing intelligence, we will somehow automatically, or accidentally, create consciousness. I call this the consciousness fallacy. It seems plausible at first, but the evidence doesn't support it. And if it is false, it means we should look at AI very differently.

  • An Open Letter To Everyone Tricked Into Fearing Artificial Intelligence

    Erik Sofge: Don't believe the hype about artificial intelligence, or the horror

    Forget about the risk that machines pose to us in the decades ahead. The more pertinent question, in 2015, is whether anyone is going to protect mankind from its willfully ignorant journalists.
    ...
    Here's the letter at its most ominous, which is to say, not ominous at all:

        "Because of the great potential of AI, it is important to research
        how to reap its benefits while avoiding potential pitfalls."

    ...
    To use the CNET and BBC stories as examples, neither includes quotes or clarifications from the researchers who helped put together either the letter or its companion research document.
    ...
    The truth is, there are researchers within the AI community who are extremely concerned about the question of artificial superintelligence, which is why FLI included a section in the letter's companion document about those fears. But it's also true that these researchers are in the extreme minority.

  • Scientists say AI fears unfounded, could hinder tech advances

    Recent alarms over artificial intelligence research raise eyebrows at AI conference

    "We're in control of what we program," Bresina said, noting it was his own opinion and not an official NASA statement. "I'm not worried about the danger of AI... I don't think we're that close at all. We can't program something that learns like a child learns even -- yet. The advances we have are more engineering things. Engineering tools aren't dangerous. We're solving engineering problems."

  • No need to panic -- artificial intelligence has yet to create a doomsday machine

    A malevolent AI will have to outwit not only raw human brainpower but the combination of humans and whatever loyal AI-tech we are able to command -- a combination that will best either one on its own.

  • Out of control AI will not kill us, believes Microsoft Research chief

    Eric Horvitz's position contrasts with that of several other leading thinkers.

    A Microsoft Research chief has said he thinks artificial intelligence systems could achieve consciousness, but has played down the threat to human life.


Posted by mjm | Permanent link | Comments