May 2015 Archives
Fri May 29 15:08:37 EDT 2015
Items of Interest
Various web links I found to be of interest recently:
-
Can States Boost Growth By Cutting Top Individual Tax Rates?
Howard Gleckman
A new paper by my Tax Policy Center colleagues Bill Gale, Aaron Krupkin, and Kim Rueben concludes the answer is "no" to both questions. In the cautious language of academic research: "Our results are inconsistent with the view that cuts in top state income tax rates will automatically or necessarily generate growth."
But Bill, Aaron, and Kim also have a warning for those who assert that cutting state taxes is good for growth or raising them is bad: All taxes are not alike. It turns out that while individual income taxes don't matter much at all, and corporate taxes may actually boost growth a bit, higher property taxes do seem linked to slower growth, though even that relationship seems to change over time.
-
How 'Mathiness' Made Me Jaded About Economics
Noah Smith
But the way math is used in macroeconomics isn't the same as in the hard sciences. This isn't something that most non-economists realize, so I think I had better explain.
In physics, if you write down an equation, you expect the variables to correspond to real things that you can measure and predict. For example, if you write down an equation for the path of a cannonball, you would expect that equation to let you know how to aim your cannon in order to actually hit something. This close correspondence between math and reality is what allowed us to land spacecraft on the moon. It also allowed engineers to build your computer, your car and most of the things you use.
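The cannonball example is exactly that direct: in the idealized drag-free case, the range follows from launch speed and angle alone, so the equation really does tell you how to aim. A small Python sketch of my own (not from Smith's piece), assuming level ground and no air resistance:

    import math

    def projectile_range(speed_mps: float, angle_deg: float, g: float = 9.81) -> float:
        """Horizontal range of an ideal, drag-free projectile fired over level ground."""
        theta = math.radians(angle_deg)
        return speed_mps ** 2 * math.sin(2 * theta) / g

    # Aiming the cannon: with a muzzle speed of 100 m/s, which elevation reaches ~800 m?
    for angle in (20, 25, 30, 45):
        print(f"{angle:2d} deg -> {projectile_range(100, angle):6.1f} m")
    # 45 deg gives the maximum range (~1019 m); about 26 deg (or its complement, 64 deg) lands near 800 m.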
... But macroeconomics, which looks at the broad economy, is different. Most of the equations in the models aren't supported by evidence.
-
The Vindication of Edward Snowden
A federal appeals court has ruled that one of the NSA programs he exposed was illegal.
Telling the public about the phone dragnet didn't expose a legitimate state secret. It exposed a violation of the constitutional order. For many years, the executive branch carried out a hugely consequential policy change that the legislature never approved. Tens of millions of innocent U.S. citizens were thus subject to invasions of privacy that no law authorized. And the NSA's unlawful behavior would've continued, unknown to the public and unreviewed by Article III courts, but for Snowden's leak, which caused the ACLU to challenge the illegal NSA program.
-
Japanese hotel launches 'crying rooms'
A hotel in Tokyo is offering rooms designed to allow female guests to "cry heartily" in private
The crying rooms are the latest in a series of unusual hotels and cafes available in Japan.
-
Vin Scelsa Leaves the Airwaves May 2, 2015
Vin Scelsa concluded nearly fifty years on New York's airwaves
"Idiot's Delight" was a wonderful anachronism: unscripted, idiosyncratic, and unashamedly out of step with contemporary listening habits.
-
Why you should really start doing more things alone
"The reason is we think we won't have fun because we're worried about what other people will think," said Ratner. "We end up staying at home instead of going out to do stuff because we're afraid others will think they're a loser."
But other people, as it turns out, actually aren't thinking about us quite as judgmentally or intensely as we tend to anticipate. Not nearly, in fact. There's a long line of research that shows how consistently and regularly we overestimate others' interest in our affairs. The phenomenon is so well known that there is even a name for it in psychology: the spotlight effect. A 2000 study conducted by Thomas Gilovich found that people regularly adjust their actions to account for the perspective of others, even though their actions effectively go unnoticed. Many other researchers have since confirmed the pattern of egocentric thinking that skews how we act.
-
GMO Quarterly Letter 1Q 2015
U.S. Secular Growth: Donkey or Racehorse? -- Jeremy Grantham
Mainstream economics continues to represent our economic system as made up of capital, labor, and a perpetual motion machine. It apparently does not need resources, finite or otherwise. Mainstream economics is generous in its assumptions. Just as it assumes market efficiency and perpetually rational economic players, feeling no compulsion to reconcile the data of an inconvenient real world, so it also assumes away any long-term resource problems. "It's just a question of price." Yes, but one day just a price that a workable economy simply can't afford!
Tue May 19 14:06:13 EDT 2015
Problems with Science
I am a big fan of science and so am happy to see that current problems with scientific research are being tackled.
-
The Trouble With Scientists
How one psychologist is tackling human biases in science.
Given that science has uncovered a dizzying variety of cognitive biases, the relative neglect of their consequences within science itself is peculiar. "I was aware of biases in humans at large," says Hartgerink, "but when I first 'learned' that they also apply to scientists, I was somewhat amazed, even though it is so obvious."
One of the reasons the science literature gets skewed is that journals are much more likely to publish positive than negative results: It's easier to say something is true than to say it's wrong. Journal referees might be inclined to reject negative results as too boring, and researchers currently get little credit or status, from funders or departments, from such findings. "If you do 20 experiments, one of them is likely to have a publishable result," Oransky and Marcus write. "But only publishing that result doesn't make your findings valid. In fact it's quite the opposite."
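The "1 in 20" arithmetic in that passage is easy to check: under the usual p < 0.05 convention, a batch of 20 experiments with no real effects still has about a 1 - 0.95^20, or roughly 64%, chance of producing at least one "significant" result. A rough simulation of my own (not from the article), using ordinary two-sample t-tests on pure noise:

    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)

    def batch_has_hit(n_experiments=20, n_per_group=30, alpha=0.05):
        """True if any null experiment in the batch crosses p < alpha."""
        for _ in range(n_experiments):
            a = rng.normal(size=n_per_group)  # control group, no real effect
            b = rng.normal(size=n_per_group)  # treatment group, no real effect
            if ttest_ind(a, b).pvalue < alpha:
                return True
        return False

    batches = 2000
    hits = sum(batch_has_hit() for _ in range(batches))
    print(f"Batches with at least one 'publishable' result: {hits / batches:.0%}")  # roughly 64%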
... Surprisingly, Nosek thinks that one of the most effective solutions to cognitive bias in science could come from the discipline that has weathered some of the heaviest criticism recently for its error-prone and self-deluding ways: pharmacology. It is precisely because these problems are so manifest in the pharmaceutical industry that this community is, in Nosek's view, way ahead of the rest of science in dealing with them. For example, because of the known tendency of drug companies and their collaborators to report positive results of trials and to soft-pedal negative ones, it is now a legal requirement in the United States for all clinical trials to be entered in a registry before they begin. This obliges the researchers to report the results, whatever they say.
... The idea, says Nosek, is that researchers "write down in advance what their study is for and what they think will happen." Then when they do their experiments, they agree to be bound to analyzing the results strictly within the confines of that original plan. It sounds utterly elementary, like the kind of thing we teach children about how to do science. And indeed it is--but it is rarely what happens. Instead, as Fiedler testifies, the analysis gets made on the basis of all kinds of unstated and usually unconscious assumptions about what would or wouldn't be seen. Nosek says that researchers who have used the OSF (Open Science Framework) have often been amazed at how, by the time they come to look at their results, the project has diverged from the original aims they'd stated.
-
Retraction Watch
Tracking retractions as a window into the scientific process.
So why write a blog on retractions?
First, science takes justifiable pride in the fact that it is self-correcting -- most of the time. Usually, that just means more or better data, not fraud or mistakes that would require a retraction. But when a retraction is necessary, how long does that self-correction take? The Wakefield retraction, for example, was issued 12 years after the original study, and six years after serious questions had been raised publicly by journalist Brian Deer. Retractions are therefore a window into the scientific process.
Second, retractions are not often well-publicized. Sure, there are the high-profile cases such as Reuben's and Wakefield's. But most retractions live in obscurity in Medline and other databases. That means those who funded the retracted research -- often taxpayers -- aren't particularly likely to find out about them. Nor are investors always likely to hear about retractions on basic science papers whose findings may have formed the basis for companies into which they pour dollars. So we hope this blog will form an informal repository for the retractions we find, and might even spur the creation of a retraction database such as the one called for here by K.M. Korpela.
Third, they're often the clues to great stories about fraud or other malfeasance, as Adam learned when he chased down the Reuben story. The reverse can also be true. The Cancer Letter's exposé of Potti and his fake Rhodes Scholarship is what led his co-authors to remind The Lancet Oncology of their concerns, and then the editors to issue their expression of concern. And they can even lead to lawsuits for damaged reputations. If highlighting retractions will give journalists more tools to uncover fraud and misuse of funds, we're happy to help. And if those stories are appropriate for our respective news outlets, you'll only read about them on Retraction Watch once we've covered them there.
Finally, we're interested in whether journals are consistent. How long do they wait before printing a retraction? What requires one? How much of a public announcement, if any, do they make? Does a journal with a low rate of retractions have a better peer review and editing process, or is it just sweeping more mistakes under the rug?
-
First results from psychology's largest reproducibility test
Crowd-sourced effort raises nuanced questions about what counts as replication.
An ambitious effort to replicate 100 research findings in psychology ended last week -- and the data look worrying. Results posted online on 24 April, which have not yet been peer-reviewed, suggest that key findings from only 39 of the published studies could be reproduced.
But the situation is more nuanced than the top-line numbers suggest. Of the 61 non-replicated studies, scientists classed 24 as producing findings at least "moderately similar" to those of the original experiments, even though they did not meet pre-established criteria, such as statistical significance, that would count as a successful replication.
-
When All Models Are Wrong
More stringent quality criteria are needed for models used at the science/policy interface, and here is a checklist to aid in the responsible development and use of models.
Against this background of declining trust and increasing problems with the reliability of scientific knowledge in the public sphere, the dangers for science become most evident when models (abstracts of more complex real-world problems, generally rendered in mathematical terms) are used as policy tools. Evidence of poor modeling practice and of negative consequences for society abounds.
... The situation is equally serious in the field of environmental regulatory science. Orrin Pilkey and Linda Pilkey-Jarvis, in a stimulating small volume titled Useless Arithmetic: Why Environmental Scientists Can't Predict the Future, offer a particularly accessible series of horror stories about model misuse and consequent policy failure. They suggest, for example, that the global change modeling community should publicly recognize that the effort to quantify the future at a scale that would be useful for policy is an academic exercise. They call modeling counterproductive in that it offers the illusion of accurate predictions about climate and sea level decades and even centuries in the future. Pilkey and Pilkey-Jarvis argue that given the huge time scales, decisionmakers (and society) would be much better off without such predictions, because the accuracy and value of the predictions themselves end up being at the center of policy debates, and distract from the need and capacity to deal with the problem despite ongoing uncertainties.
... In this light, we wish to revisit statistician George E. P. Box's 1987 observation that "all models are wrong but some are useful." We want to propose a key implication of Box's aphorism for science policy: that stringent criteria of transparency must be adopted when models are used as a basis for policy assessments. Failure to open up the black box of modeling is likely to lead only to greater erosion of the credibility and legitimacy of science as a tool for improved policymaking. In this effort, we will follow The Economist's recommendations and provide a checklist, in the form of specific rules for achieving this transparency.
-
Science's Biggest Fail
Scott Adams
What is science's biggest fail of all time? I nominate everything about diet and fitness.
... Science is an amazing thing. But it has a credibility issue that it earned. Should we fix the credibility situation by brainwashing skeptical citizens to believe in science despite its spotty track record, or is society's current level of skepticism healthier than it looks? Maybe science is what needs to improve, not the citizens.
-
Unreliable research: Trouble at the lab
Scientists like to think of science as self-correcting. To an alarming degree, it is not. The Economist, Oct 19th 2013
Dr Kahneman and a growing number of his colleagues fear that a lot of this priming research is poorly founded. Over the past few years various researchers have made systematic attempts to replicate some of the more widely cited priming experiments. Many of these replications have failed.
In 2005 John Ioannidis, an epidemiologist from Stanford University, caused a stir with a paper showing why, as a matter of statistical logic, the idea that only one such paper in 20 gives a false-positive result was hugely optimistic. Instead, he argued, "most published research findings are probably false."
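Ioannidis' point is mostly Bayesian arithmetic: how many "significant" findings are true depends on the prior odds that a tested hypothesis is real and on study power, not just on the 5% false-positive threshold. A back-of-the-envelope sketch of my own (illustrative numbers, not from his paper):

    def ppv(prior: float, power: float = 0.8, alpha: float = 0.05) -> float:
        """Share of 'significant' findings that reflect real effects,
        given the prior probability that a tested hypothesis is true."""
        true_pos = power * prior
        false_pos = alpha * (1 - prior)
        return true_pos / (true_pos + false_pos)

    # If only a small fraction of tested hypotheses are actually true,
    # even well-powered studies yield many false 'discoveries'.
    for prior in (0.5, 0.1, 0.01):
        print(f"prior = {prior:4.2f}  ->  PPV = {ppv(prior):.2f}")
    # prior = 0.50 -> 0.94; prior = 0.10 -> 0.64; prior = 0.01 -> 0.14
    # Add low power or analytic bias and most published positives can indeed be false.
-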
Defending Science
The Project on Scientific Knowledge and Public Policy
The Project on Scientific Knowledge and Public Policy (SKAPP, 2002-2008) examined the nature of science and how it is used and misused in government decision-making and legal proceedings. Through empirical research, conversations among scholars, and publications, SKAPP aimed to enhance understanding of how knowledge is generated and interpreted. SKAPP promoted transparent decision-making, based on the best available science, to protect public health.
How and why science works may be difficult for non-scientists to understand. The aura around science and scientists - reflecting the power of scientific understanding and its complexity - creates opportunities for misunderstanding and misuse of scientific evidence. Indeed, failure on the part of decision-makers to understand the norms of science may lead to inaccurate conclusions and inappropriate applications of scientific results.