Posts By gg

Ethanol and sleep

Background
Ethanol is an evolutionarily conserved neuromodulating agent, as effective in mammals as it is in invertebrates. The fruit fly Drosophila melanogaster responds to ethanol with all the stereotypical signs also observed in humans, including euphoria, sedation, habituation and addiction (for a review see [1]). Genetic predisposition in humans is estimated to account for about 50% of the risk of developing ethanol addiction, a major medical and social problem in modern society. For all these reasons, Drosophila has been used successfully in the past decades to investigate the genetic and molecular components of ethanol's effects in the brain.

Hypothesis Student will Investigate
The student will investigate how ethanol affects the sleep / wake cycle of Drosophila and, conversely, how the sleep / wake cycle affects the behavioural and molecular responses to ethanol. Some of the questions the student will address are: is the sedation induced by high concentrations of ethanol similar to sleep, with all the restorative effects associated with it? What are the effects of chronic ethanol consumption on the sleep / wake cycle? In flies, the response to ethanol has been shown to be under the partial control of genes regulating the circadian clock and synaptic output in the brain [2], and the same is true for sleep [3]: what is the biological relevance of this observation?

Effects of ethanol on Drosophila locomotion. (a) Representation of the locomotor velocity of wild-type flies during exposure to a moderate dose of ethanol (the exposure period is shown by the grey horizontal bar). (b) Computer-generated traces of the locomotor behaviour of a group of 20 flies before and during exposure to ethanol vapour. Each panel corresponds to a 10-s time period recorded at the times indicated in (a). Reproduced from [1].

Techniques Student will Use
The student will perform behavioural experiments to investigate the physiological responses to ethanol administration: this will include assaying sleep, anaesthesia, sedation and motility, as well as learning and memory by means of Drosophila learning paradigms. The student will also perform anatomical dissections and molecular analyses exploring how gene expression in the brain changes upon ethanol administration.

References and recommended readings

  1. Drugs, flies and videotape: the effects of ethanol and cocaine on Drosophila locomotion.
    Curr Opin Neurobiol. 2002 Dec;12(6):639-45. (pdf)
  2. arouser reveals a role for synapse number in the regulation of ethanol sensitivity.
    Neuron. 2011 Jun 9;70(5):979-90. (pdf)
  3. Widespread changes in synaptic markers as a function of sleep and wakefulness in Drosophila.
    Science. 2009 Apr 3;324(5923):109-12. (pdf)

Sleep and learning: a genetic approach. The allnighter gene.

Background

Sleep is a vital activity whose function still remains mysterious despite centuries of scientific research. All animals that have been tested so far, from nematodes to humans, display the fundamental characteristics of sleep and require it. In Drosophila, as in humans, sleep deprivation leads to a remarkable decrease in intellectual performance, learning and memory; chronic sleep restriction also causes widespread metabolic changes and eventually leads to unexplained death.

My laboratory investigates the many functions of sleep, using the fruit fly Drosophila melanogaster as its main model organism. In particular, current research aims at elucidating the connections between sleep and synaptic plasticity, learning and neuronal homoeostasis. In previous work we provided evidence of how sleep may function as a mechanism to maintain proper homoeostasis of synaptic strength and connections in Drosophila (see [1] for a recent review). We are now extending that line of work, employing a rich selection of multidisciplinary techniques ranging from genetic manipulation of Drosophila (with transgenes and RNAi) to computer-assisted analysis of behaviour to measure intellectual performance, including odour recognition and the ability to court and mate.

The project

Following a genome-wide screen for short-sleeping mutant flies, we identified a novel gene that we called allnighter. allnighter mutant flies are viable but sleep considerably less than wild-type controls and show general symptoms (such as tense “eagle” wings) that strongly suggest an underlying problem with neuronal excitability.

The project aims at extending the characterization of this gene and of others belonging to the same family. Some of the questions that you will try to answer are: where is the gene expressed, and at what stages of development? Does expression change with sleep or experience? Is the enzymatic activity of allnighter required for its function? How do allnighter flies perform when challenged with tasks measuring their learning and memory capabilities?

Techniques. Working with Drosophila

An MRes rotation in Drosophila offers the unique opportunity to investigate a biological problem in vivo and yet follow the development of a relatively complicated project from the beginning (e.g. genetic manipulation of a new animal) to the end (e.g. behavioural testing of the new phenotype). Your daily work will most likely encompass basic techniques of molecular biology (DNA cloning, PCR etc.), genetics (crossing flies and following up the progeny) and behavioural neuroscience (analysis of sleep, sleep deprivation, analysis of learning and memory performance).

Drosophila and neurobiology.

For decades, Drosophila has been one of the most powerful animal models for genetic dissection and manipulation, and the outstanding contributions that flies have made to developmental biology and genetics were celebrated twice with Nobel Prizes (1933, 1995), and countless times in our textbooks. Recently, more and more laboratories have started pairing the incredible genetic tools built over the past century with new and exciting neuronal techniques, leading to a Drosophila neuronal renaissance. From circuit formation to circuit function, Drosophila offers the complexity of an animal that can learn, memorize and socialize, and yet the accessibility of a brain of about 250 thousand neurons. Here I link a few entertaining and informative videos, with the aim of communicating the excitement this field is experiencing right now.

  • Gero Miesenboeck (Oxford). Engineering the brain (18 minutes TED talk)
  • Michael Dickinson (Caltech). Towards an integrated view of brain function (28 minutes video)
  • Bjoern Brembs (Berlin). The Drosophila Flight Simulator (3 minutes video)
  • Charalambos Kyriacou (Leicester). An interview on the use of Drosophila for studying circadian biology and behaviour (14 minutes video)

Getting in touch.

My office is in room 743 of the Huxley Building, in the South Kensington Campus.
Email and phone number are listed here. I’ll be happy to meet you and show you the lab, just drop me an email.

References and sample readings

  1. Synaptic plasticity in sleep: learning, homeostasis and disease.
    Trends Neurosci. 2011 Sep;34(9):452-63. (pdf)
  2. Waking experience affects sleep need in Drosophila.
    Science. 2006 Sep 22;313(5794):1775-81. (pdf)
  3. Widespread changes in synaptic markers as a function of sleep and wakefulness in Drosophila.
    Science. 2009 Apr 3;324(5923):109-12. (pdf)

Was will das Postdoc? (What does a postdoc want?)

Introduction.

A couple of months ago, an interesting article by Jennifer Rohn in Nature prompted an explosive discussion on the interweb about the career prospects of post-doctoral researchers. Jennifer’s point in a nutshell was “we should give postdocs the chance to keep being postdocs forever and ever if they so wish”, and it was met with overwhelming approval, at least judging by the comments on the original piece.

I, for one, would not really like to be a postdoc forever and, after a, cough cough, probably excessive rant, I made my points clear on these pages. Truth be told, my post was sparked by the surprise of seeing so many postdocs showing enthusiasm at the idea of being stuck in their limbo forever. Am I the only weird one? Everyone claims their postdoc was the best time of their scientific life; why didn’t I feel like that when it was my turn? Why do I enjoy being completely independent so much more instead? Part of the answer is, I am sure, that yes, I am weird indeed. The other part comes from the results of the poll I then decided to put up in the following weeks.

Methods.

The survey was conducted with great scientific rigour pretty much at random. I put up a web page and asked for feedback on my blog post. Except for an initial round on Facebook and the like, I didn’t advertise it much on social sites myself, because I felt that way I would have more easily reached people somehow close to my views and would have somehow biased the sample. Instead, I asked the participants to spread it around and gave them four weeks to do so.

Results.

After about a month, I came back and found a little more than 100 responses. The exact picture of the responses is available on the page here (I suggest you open the results page in a new window and scroll along as you read my comments, figure by figure).

84% of respondents were in either life science or physics. For some reason I think physics is over-represented here and life science probably under-represented. This may be due to physicists being better at Twitter.

For how long have you been a postdoc? A very nicely skewed distribution. Apparently the median length of a single postdoc in the USA is 2.2 years, which matches nicely what I got. I cannot help but cringe when looking at the first and last bars of the graph: those who don’t know what is going to happen to them, and those who also don’t know what is going to happen to them, but in a different way.

Most people are at their first postdoc, some at their second. Not much else to comment here.
67% of respondents want (or dream) to continue their career in academia. This matches exactly what was found by the National Academy of Sciences in the USA. Dear God, I didn’t know I was so good at making surveys!

First surprising result: only 6% have no clue what to do with their life. I am sure if we were to ask PhD students a similar question we would get a different picture. There are two possible explanations of why almost 70% of postdocs want to pursue a research career in academia: the first is that they really still love it, no matter what they go through; the second is that they have invested so much time and effort and money and personal relationships in it that they cannot admit to themselves that maybe it was the wrong choice. The latter is exactly what the Theory of Cognitive Dissonance predicts. Namely,

inconsistency among beliefs and behaviors will cause an uncomfortable psychological tension. This will lead people to change their beliefs to fit their actual behavior, rather than the other way around, as popular wisdom may suggest.

So, we are all a bunch of stubborn delusionals. Are we?

Where do you see your career progressing? Here things get a bit greyer. Suddenly one third of respondents are not sure they’ll manage to land an academic position, and most seem to be starting to grow doubts. This is particularly relevant if you consider that most respondents are still within their first three years of postdoc.

And now is where things get a bit more negative indeed.

Most people think they are not well informed about alternative careers outside the academic path, and only 10% of respondents seem to have no concerns. About 70% of folks are quite worried indeed.

The next two questions are about the environment in the lab: a slim majority think their PI doesn’t really care much about their career progression, but it’s really not as bad as one may think.

Most postdocs, on the other hand, are enjoying a lot of freedom: more than 60% report experiencing complete or almost complete academic freedom.

Then here comes the result that makes me think I am not so weird after all! 70% or so of respondents would be quite ready, or ready, to be an assistant professor starting next Monday! (OK, let’s say Tuesday, given that Monday is a bank holiday.) This is astonishing considering, again, that most people have fewer than three years of postdoctoral experience. I personally value experience in this job very, very little, and I keep saying that either you are ready from the start to be a scientist on your own or you will never really be. “From the start” can be anything between 5 and 25 years of age.

Yet, I do realize that I may be a bit extreme in my views, given that most people recognized their postdoc was in fact useful in preparing for the next step.

Now, the next question is again an experiment in social psychology. The question is: do you feel more or less suited to being a researcher than your peers? An overwhelming majority think they are better than average, confirming yet again what scholars call Illusory Superiority. Those with Impostor Syndrome were probably all Darwinianly selected out during their PhD.

The majority of respondents would not mind a job in research. Good to hear.

Then comes the question that started it all. Would you enjoy being a postdoc for life? Would you enjoy being an independent postdoc? Most people would rather be an Independent Postdoc (75%) than a Postdoc for Life (42%). Ah! I knew it!

Finally, the last two questions: people are quite divided on whether funding agencies should fund fewer postdocs but with higher salaries. In fact, only 42% think it would be right to do so. Awww, I am moved. What a socialist, altruistic bunch postdocs are! And, if that was not enough, an overwhelming majority think it is a bad idea to be protectionist about the job, and do not think that their country should make it difficult for foreigners to get a postdoc position. I am so proud of you guys!

Discussion.

The first thing to notice is that I may have an alternative career in designing surveys. The second thing: we confirmed at least two important theories of social psychology; too bad I am too late to publish them. (Note: the Freudian title of this post is dedicated to my staggering successes in the field of psychology indeed.)

Third: if you look into how to redistribute money, take into account what postdocs want. They are scientists and they want to be treated as such. In short: they think they are better than their peers, they want freedom and they don’t care that much about money after all. Their priority is to keep doing research, whether in academia or industry, whether independently or as postdocs.

I gave my recipe for “what must change in Science and how” already, no need to repeat myself. Glad to see I wasn’t speaking only for myself!

 

Patches for Nautilus “move to trash” bug

Warning, this post contains a geek rant.

If you use the nautilus or nautilus-elementary file manager (the default file manager in any GNOME-based Linux distro, including Ubuntu), you are probably aware of the annoying bug with file deletion.

Like any other file manager, nautilus allows you to delete your files using keyboard shortcuts: permanently (hit <Shift-Delete>) or temporarily, by moving them to the trashbin (hit <Delete> on your keyboard). Removing files is always a critical action, so every other file manager makes sure you don’t do it accidentally: the MacOS file manager, Finder, requires you to hit the key combination <AppleKey+Delete>, which is difficult to perform by mistake. Microsoft Explorer, Konqueror, Thunar and many others ask you to confirm with a dialog box that you really want to trash files.

Unfortunately nautilus lacks this safeguard: if you, your toddler or your cat accidentally hit the Delete key while a file or folder is selected, it goes into the trashbin without warning. If you are not looking at the screen while this happens, the item is gone for good. Obviously, this flaw was pointed out a long time ago. Users started asking for a fix back in 2004 (that is, seven years ago!) and lots of people wanted it fixed: see for instance here, here, here, here, here, here

Surprisingly, the reactions of the GNOME developers to this problem were of two kinds: “I don’t think this is a real problem”[¹] or “I don’t think you are proposing the perfect solution”[²]. Back in 2009, I accidentally lost some files and wrote a patch to fix this bug. The patch simply gave users the option to activate a warning dialog if they wanted to. I figured “people who want the dialog will enable it and be happy; people who don’t will leave it alone and keep discussing what is really, truly the best solution for the next seven years”. Believe it or not, the problem still exists, so I thought of raising the issue once again (this time, I also proposed a patch to change Delete into Control-Delete).
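The logic of that first patch is simple enough to sketch in a few lines. This is a toy illustration in Python, not the actual C/GTK+ patch; the `move_to_trash` function, its parameters and the `confirm` hook are hypothetical names of mine. The point is only the shape of the fix: trashing goes through an optional, user-enabled confirmation step before anything is moved.

```python
import os
import shutil

def move_to_trash(path, trash_dir, confirm=None):
    """Move `path` into `trash_dir`, but only after the optional
    `confirm` callback approves -- mirroring the opt-in warning
    dialog the patch adds to nautilus."""
    if confirm is not None and not confirm(path):
        return False  # user declined: the file is left untouched
    os.makedirs(trash_dir, exist_ok=True)
    shutil.move(path, os.path.join(trash_dir, os.path.basename(path)))
    return True
```

With the dialog disabled (`confirm=None`) the behaviour is exactly the old one; with it enabled, an accidental <Delete> can still be cancelled before anything moves.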

Guess what: even after seven years and hundreds of people begging for a fix, we are stuck with the same attitude:

This is a real problem, but I don’t think the solution is a windows-like alert dialog. […] An animation with the file becoming red and/or flying to the trash would be a nice addition.

Or maybe a small cluebar with an embedded undo button would already be enough. I like how Google does it in its webapps.

What if deleted files were visible as some ghost-like-icon in the directory they used to be? And it could be possible to turn on/off the visibility of deleted files? And you can have your animation then as well; of an icon that dies.

I think your use case is a real concern, and something we should fix indeed, but as others said in this thread, I don’t think a confirmation dialog is how we want this to be implemented, especially when it carries a new preference with it.

Personality i like my delete and it would felt awkward if the delete didn’t delete anything.

We would rather keep the hole than have a solution we don’t like. Little does it matter that every other file manager actually uses that solution, or that lots of people want to see the thing fixed.

This attitude is amazingly complicated for my simple brain to understand. For me, getting things done means finding the meeting point between the optimal solution and the best outcome. If my car gets a flat tire on the way, I will accept any new tire a rescuer gives me; I won’t sit for seven years waiting for one that really matches the other three. And I like to think this is not the true Linux philosophy either.

Anyway, here you can download the patches to fix this issue.

I am using the second one on nautilus-elementary (which also sports a very convenient Undo feature).

Edit 1 April 2011. Much to my pleasure, the patch has now been accepted and, from the next version on, <Control-Delete> will be the shortcut to send stuff to the trash. No more accidental deletions! Open Source wins again.

Edit June 2011. If you arrived at this page because you freaked out upon finding the new nautilus behaviour, this is how to get back to the old key combo.

Notes:

1. You are all familiar with the “how many people does it take to change a lightbulb?” jokes.
The one about software developers goes like this:

Q: How many developers does it take to change a lightbulb?
A: The lightbulb works fine on the system in my office. NOT REPRO.

2. The one about C++ programmers goes like this:

Q: How many C++ programmers does it take to change a lightbulb?

A: You’re still thinking procedurally. A properly-designed lightbulb object would inherit a change method from a generic lightbulb class, so all you’d have to do is send it a bulb.change message.

 

Postdoc, love or hate? A survey.

Following last week’s discussions about the tough life of a postdoc, I’ve realized more data are needed before making general assumptions about what postdocs want and need. Jennifer Rohn’s post had an overwhelming response from sympathizing postdocs who would love to have a “postdoc for life” position, and I didn’t find this surprising. What came a bit unexpectedly to me, though, is that the other voice was hardly heard.

I think the problem has deeper issues that will have to be solved by completely changing the way we define a laboratory.

For the sake of informed discussion, I am setting up a survey aimed at all the postdocs out there. You’ll find it here: http://thepostdoctrap.gilest.ro

I am not doing this just because I care about the issue: I have been invited to a meeting organized by the postdocs of the MPI-CBG in late May, and I’d love to give those guys some numbers on the issue. So, please, take the survey and come back in a couple of months for the results.

I am a postdoc and I think I just realized I have been screwed for years

It seems that in the past two weeks someone has started going around lifting big stones in the luxurious and exotic garden of science, finding the obvious grime underneath. To be more precise, the topic being discussed is: “I am a postdoc and I think I just realized I have been screwed for years“.

A couple of weeks ago, a friend of mine blogged about his decision to leave academia after yet another nervous breakdown. I leave it to his words to describe what it means to realize in your early thirties that your childhood dream won’t become a reality, because the job market is broken and you can’t cope with the stress. To be honest, while I sympathize with him, I find his rant extreme; what is more important than discussing anecdotal experiences, though, is the huge number of comments that post received, not only on the blog but also on social discussion websites. Literally hundreds of comments from people who went through similar experiences, culminating in the epiphany that finding a job in academia is freaking difficult.

This discussion is not new, of course. Occasionally people in academia feel the urge to let postdocs and PhD students know that this is a very risky road. See Jonathan Katz’s opinion from back in 2005, for instance.

Why am I (a tenured professor of physics) trying to discourage you from following a career path which was successful for me? Because times have changed (I received my Ph.D. in 1973, and tenure in 1976). […] American universities train roughly twice as many Ph.D.s as there are jobs for them. When something, or someone, is a glut on the market, the price drops. In the case of Ph.D. scientists, the reduction in price takes the form of many years spent in “holding pattern” postdoctoral jobs. Permanent jobs don’t pay much less than they used to, but instead of obtaining a real job two years after the Ph.D. (as was typical 25 years ago) most young scientists spend five, ten, or more years as postdocs. They have no prospect of permanent employment and often must obtain a new postdoctoral position and move every two years.

Still pretty current, isn’t it? Although these arguments do emerge now and then, they do so far less often than they should¹. Why? The main reason is that PIs have really nothing to gain from changing the current situation: as it is now, they find the field overcrowded with postdocs who cannot do anything but stay in the lab, hoping to get more papers than their competitors, waiting for the unlucky ones to drop out and reduce the competition. That means it’s easy for PIs to get postdocs for cheap and keep them in the lab as long as possible.

Of course there could be an even better scenario for PIs: postdocs who never leave the lab! Let’s face it: having so many postdocs to choose from is nice, but many of them aren’t actually that good, and it also takes time for them to acquire certain skills. So why not give them the chance to stay for 20 years in the same lab? This is exactly what Jennifer Rohn was advocating in Nature last week. I think in her editorial Jennifer actually rightly identifies the problem:

The system needs only one replacement per lab-head position, but over the course of a 30–40-year career, a typical biologist will train dozens of suitable candidates for the position. The academic opportunities for a mature postdoc some ten years after completing his or her PhD are few and far between.

But she fails to provide the right solution:

An alternative career structure within science that professionalizes mature postdocs would be better. Permanent research staff positions could be generated and filled with talented and experienced postdocs who do not want to, or cannot, lead a research team — a job that, after all, requires a different skill set. Every academic lab could employ a few of these staff along with a reduced number of trainees. Although the permanent staff would cost more, there would be fewer needed: a researcher with 10–20 years experience is probably at least twice as efficient as a green trainee.

I cannot even begin to say how full of rage this attitude makes me. This position is so despicable to me! Postdoc positions exist, in the first place, because they provide a buffer for all those who would like to get a professorship but cannot, due to the limited market. Any economist would tell you that the solution is not to transform this market into something even more static but to increase mobility, for Newton’s sake! Sure, some postdocs may realize too late that they don’t really want to be independent and would gladly keep doing what they are doing for some more time: this is what positions in industry are for², and this is what lab technician positions are for. No need to invent new names for those jobs.

So, here I propose an alternative solution: what about giving postdocs the chance to be independent, without necessarily being bound to running a four-person lab to start with, and without the need to hold a tenured position? What about redistributing resources so that current PIs have smaller labs and one or two more people somewhere else get the chance to start their own career? Isn’t this fairer?

I wrote about this before, so I won’t repeat myself: in short, the big lab model is not sustainable anymore and it is not fair!

The problem, Jennifer, is not that postdocs want to stay longer in the lab: the problem is that they want out!

Notes

1: a recurrent question in the new Open Science society is “should scientists be blogging?“. My answer is yes, definitely (in fact, that’s what I am doing), but I don’t expect them to blog about their opinion of the latest paper in their field. I don’t think that is so useful, actually. I’d rather have them talk about their daily life as scientists and speak freely and loudly about controversial issues.

2: My wife is one of them: she realized she didn’t want to have anything to do with academia anymore and moved to industry, where she got a salary more than twice what she was earning at the university doing pretty much the same job, without worrying about fellowships and competition. She has never been so happy at work.

Of scientists and entrepreneurs.

Prepare to jump. Jump.

As my trusty 25 readers will know, a few months ago I made the big career jump and moved from the bench side of science to the desk side, becoming what is called a Principal Investigator (PI). As a matter of fact, nothing really seems to have changed so far: I hold a research fellow position at Imperial College, meaning that I am a one-man lab: I still have to plan and execute my experiments, still have to write my papers and see them through, still have to organize my future employment – all exactly as I was doing before.

Me, in my lab. Feb 2011

However, starting your own lab is still a formal act of standing on your own legs and, as such, one must be prepared to encounter new challenges. Unfortunately, no one ever really prepared me for this: as PhD students and postdocs we spend a great deal of time learning skills that will not necessarily help with the next steps, and when the moment comes to be really independent, a lot of people feel lost in translation. This may bring frustration to the new PIs (who find themselves completely unprepared for the new role) and to their students (who find themselves led by someone who is completely unprepared for their role). I have seen this happen countless times.

Scared by the idea of ending up like this, I actually started thinking about how things would evolve quite some time ago. It’s easy: you just take inspiration from the PIs around you. You start with all those who work in the same institute or department, for instance, and you try to figure out what they do right and what they do wrong, and learn by Bayesian inference: I like that, I don’t like this, I want to be like that, I don’t want to be like this. If you are more of a textbook person, you can also get yourself one of those “How to be a successful PI” guidebooks; they are particularly popular in the USA and some people find them helpful. Did that too; found it a bit dumb.

Look around.

Finally, there is a third strategy you may want to follow, and that is: find inspiration and stories of success in people who are doing things completely different from what you do. The rationale of this strategy lies in the assumption that certain people will be good at what they do, no matter what that is. They have special skills that make them successful, whether they are running a research lab, a law firm or a construction business. A good gymnasium (in the Greek sense of the word) in which to get in touch with such people is the entrepreneurial world. There are several analogies between being the founder of a, let’s say, computer startup and being a newly appointed PI. Here are some examples off the top of my head:

  • both need to believe in themselves and in what they do, more than anybody else around them
  • both need to convince people that what they want to do is worth the investment, whether that is millions in venture capital or breadcrumbs of research grant money
  • both have to choose very carefully the people they will work with
  • both have to find their niche in a very competitive market or else, if they would rather go after the big competitors, they need to make sure their product is better in quality and/or appeal
  • both need to innovate and always be ahead of competition
  • they both chose their career because they enjoy being their own bosses (or at least they had better)
  • both need to learn how to overcome difficult times by themselves (“loop to point 1” is one solution)
  • et cetera

If you are not yet convinced, read this essay by angel investor Paul Graham titled “What we look for in founders“. If I were to substitute the word “founder” with “scientist”, you would not even notice.

These are the reasons why, a couple of years ago, I started following the main community of startup founders on the web, Hacker News. It’s a social community composed of people with a knack for entrepreneurship – some of them extremely successful (read $$$ in their world). Most of them are computer geeks, which is good for my purposes, as they are yet another category of people who share a lot with scientists, namely: socially inept people who’d love to improve their relationship skills but dedicate way too much time to work.

So the question now is: what did I learn from them? To begin with, I reinforced my prejudice: that scientists and entrepreneurs have a lot in common and that certain people would be successful in anything they did. This is a crucial starting point, because you’ll find that there is far more information on how to be a successful entrepreneur than on how to be a successful academic – I still don’t have a good explanation of why that is, actually. The moment you accept it, your sample of cases grows exponentially and you have much more material for your inference-based learning: I am no longer limited to taking inspiration from other scientists, but also from successful companies.

This is actually not so obvious to most people. For instance, every now and then a new research institute is born with the great ambition of being the next big thing. They decide to follow the path of the institutes that succeeded in the past, assuming there is something magic in their recipe, and because the sample set is limited they always end up naming the same names: LMB, CSHL, EMBL, Carnegie… Why does nobody take Google as an example? Or Apple? Or IBM? I am actually deeply convinced that if Google were to create a Google Research Institute, they would be amazingly successful. They have already made exciting breakthroughs in (published!) research with Google Flu Trends or the Google Books Project. If they were to philanthropically extend their research interests to other fields, they would leave a lot of people in their dust (I’d kill to work at a Google Research Institute, by the way. Wink wink.).

Six examples of relevant things I learned by looking at the entrepreneurial world.

1. Speaking of Google, I find their policy of encouraging people to spend 20% of their time on something completely unrelated to their project extremely smart. Quoting Wikipedia:

As a motivation technique, Google uses a policy often called Innovation Time Off, where Google engineers are encouraged to spend 20% of their work time on projects that interest them. Some of Google’s newer services, such as Gmail, Google News, Orkut, and AdSense originated from these independent endeavors.[177] In a talk at Stanford University, Marissa Mayer, Google’s Vice President of Search Products and User Experience, showed that half of all new product launches at the time had originated from the Innovation Time Off.[178]

The irony here, actually, is that I am willing to bet my pants that this idea was in fact borrowed from academia: or rather, from how things should be in academia but no longer are.

2. Freedom is the main reason why I chose the academic path, and I find people who know how to appreciate freedom (and make it fruitful) very inspirational. See for instance this essay by music entrepreneur Derek Sivers on “Is there such a thing as too much freedom?” or his “Delegate or die“.

3. On a different note, I appreciate tips on how to deal with hiring people. See for instance “How to reject a job candidate without being an asshole“. I wish more people would follow this example. Virtually no one in academia will ever tell you why you didn’t get their job, even though it’s every scientist’s duty to give direct, straight feedback on other people’s work (it is in fact the very essence of peer review!). I was on the job market last year for a tenure-track position and it was a very tough year in terms of competition. The worst ever, apparently. Each open position had at least 100 or 200 applicants, of which half a dozen on average were then called for an interview. I had a very high success rate in terms of interview selection, being invited to something like 15 places out of 50 applications sent. Many of them happened to be the best places in the world. Many of them didn’t work out, and NONE of them offered any kind of feedback to the interviewed applicants. NONE of them actually took the time to say “this is what didn’t convince us about your interview”. What a shame.

4. I am not the kind of scientist who aims to spend his entire career on one little aspect of something; I enjoy taking new roads (talking about freedom again, I guess). So companies like Amazon or Apple, constantly changing their focus, are a great inspiration.

5. Startup founders know two unwritten rules: “Execution is more important than the idea” and “someone else is probably working on the same thing you are”. Read about the Facebook story to grasp what I am talking about. It is also well summarized here (forget point 3 though; that doesn’t apply to science, I believe).

6. Finally, as someone who starts with a tiny budget and has a passion for frugality, I found the concept of ramen profitability very interesting: think big, but start small. That’s exactly what I am doing right now.

What has changed in science and what must change.

I frequently have discussions about funding in Science (who doesn’t?) but I realized I never really formalized my ideas about it. It makes sense to do that here. A caveat before I start is that everything I write about here concerns the field of bio/medical sciences for those are the ones I know. Other fields may work in different ways. YMMV.

First of all, I think it is worth noticing that this is an extremely hot topic, yet not really controversial among scientists. No matter whom you talk to, not only does everyone agree that the status quo is completely inadequate, but there also seems to be a consensus on what kind of things need to be done and how. In particular, everyone agrees that

  1. more funding is needed
  2. the current ways of distributing funding and measuring performance are less than optimal

When everybody agrees on what has to be done but things are not changing, it means the problem is bigger than you’d think. In this post I will try to dig deeper into those two points, uncovering aspects which, in my opinion, are even more fundamental and controversial.

Do we really need more funding?

The short answer is yes, but the long answer is no. Here is the paradox explained. Science has changed dramatically in the past 100, 50 (or even 10) years, mainly because it advances at a speedier pace than anything else in human history, and simply enough we were (and are) not ready for this. This is not entirely our fault since, by definition, every huge scientific breakthrough comes as an utter surprise and we cannot help but be unprepared for its consequences¹. We did adapt to some of the changes, but we did it badly and we did not do it for all too many of the aspects we had to. In short, everyone is aware of the revolution science has undergone in the past decades, yet no one has ever heard of a similar revolution in the way science is done.

A clear example of something we didn’t change but should have is the fundamental structure of universities. In fact, that hasn’t changed much in the past 1000 years, if you think about it. Universities still juggle teaching and research, and it is still mainly the same people who do both. This is a huge mistake. Everybody knows those things have nothing in common and there is no reason whatsoever for them to lie under the same roof. Most skilled researchers are awful teachers and vice versa, and we really have no reason to assume it should not be this way. A few institutions in the world concentrate fully on research or only on teaching, but this should not be the exception; it should be the rule. Separating teaching and research should be the first step toward really being able to understand the problems and allocate resources.

Tenure must also be completely reconsidered. Tenure was initially introduced as a way to guarantee academic freedom of speech and action. It was an incentive for thoughtful people to take a position on controversial issues and raise new ones. It does not serve this role anymore: you will get sacked if you claim something too controversial (see Lawrence Summers’ case) and your lab will not receive money if you are working on something too exotic or heretical. Now, I am not saying this is a good or a bad thing. I am just observing that the original meaning of tenure is gone. Freedom of speech is something that should be guaranteed to everyone, not just academics, through constitutional laws, and freedom of research is not guaranteed by tenure anyway, because you don’t get money to do research from your university; you just get your salary. It’s not 1600 anymore, folks.

Who is benefiting from tenure nowadays? Mainly people who have no other means of paying their own salary, that is, researchers who are inactive or poorly productive and feel no obligation to change because they will get their salary at the end of the month anyway. This is the majority of academics, not only in certain less developed countries – like Italy, sigh – but pretty much everywhere. Even in the US or UK or Germany, many departments are filled with people who publish badly or scarcely. Maybe they were good once, or maybe it was easier in their time to get a job. Who pays for their privilege? The younger generation, of course.

Postdoc number keeps growing. Academic positions do not².

The number of people entering science grows every year², especially in the life sciences. The number of academic positions and the extent of funding are far from sufficient to cover current needs. In fact, about 1-2 in 10 postdocs will manage to find a job as professor, and among those who do, funding success rates are again 20-30% in a good year. In short, even if we were to increase the scientific budget fivefold tomorrow morning, that would still not be enough. This means that even though it would sure be nice to have more money, it’s utopian to think this alone will help. Indeed, we need to revolutionize everything, really. People who have tenure should not count on it anymore and should be ready to leave their job to somebody else. There is no other way, sorry.

Do we really need better forms of scientific measurement?

No. We need completely new forms of scientific measurement. And we need to change the structure of the lab. Your average successful lab is composed of 10 to 30 members, most of them PhD students or postdocs. They are the ones who do the work, without a doubt. In many cases, they are the ones who do the entire work not only without their boss, but even despite the boss. This extreme eventuality is not the rule, of course, but the problem is: there is no way to tell it apart! The principal investigator, as they say in the USA, or the group leader, as it is called with less hypocrisy in Europe, will spend all of their time writing grants to get funded, speaking at conferences about work they didn’t do, and writing or merely signing papers. Of course leading a group takes some rare skills, but those are not the skills of a scientist; they are the skills of a manager. The system as it is does not reward good scientists; it rewards good managers. You can exploit the creativity of the people working for you and be successful enough to keep receiving money and be recognized as a leader, but you are feeding a rotten process. Labs keep growing in size because postdocs don’t have a chance to start their own lab and because their boss uses their work to keep getting the money their postdocs should be getting instead. This is an evil loop.

This is a problem that scientometrics cannot really solve, because it’s difficult enough to grasp the importance of a certain discovery, let alone the actual intellectual contribution behind it. It would help to rank laboratories not just by the number of good publications, but by the ratio between good papers and number of lab members. If you have 1 good paper every second year and you work alone, you should be funded more than someone who has 4 high-profile publications every year but runs a group of 30 people.
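As a toy illustration of that ratio-based ranking (the `productivity_score` helper and all the numbers are invented for the example):

```python
# Toy sketch of the ranking proposed above: normalize good papers
# per year by head count instead of counting publications alone.

def productivity_score(good_papers_per_year: float, lab_members: int) -> float:
    """Good papers per year, divided by the number of lab members."""
    return good_papers_per_year / lab_members

solo_lab = productivity_score(0.5, 1)   # 1 good paper every second year, alone
big_lab = productivity_score(4, 30)     # 4 good papers a year, 30 people

# The solo lab scores higher (0.5 vs ~0.13) and, in this scheme,
# would deserve more funding.
print(solo_lab, big_lab)
```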

Some funding agencies, like HHMI, MRC and recently the Wellcome Trust, decided to sidestep the scientometric problem and fund groups independently of their research interest: they say “if you prove to be exceptionally good, we give you loads of money and trust your judgement”. While this is a commendable approach, I would love to see how those labs would rank when you account for the number of people: a well-funded lab will attract the best students and postdocs, and good lab members make a lab well funded. Here you go with an evil loop again.

In gg-land, the imaginary nation I am supreme emperor of, you can have a big lab but you must really prove you deserve it. Also, there are no postdocs as we know them. Labs have students who learn what it means to do science. After those 3-5 years, either you are ready to take the leap and do your stuff by yourself or you’ll never be ready anyway. Don’t kid yourself. Creativity is not something you gain with experience; if anything, it’s the other way around: the older you get, the less creative you’ll be.

Some good places either had a tradition (LMB, Bell Labs) or have the ambition (Janelia) of keeping groups small and doing science the way it should be done. Again, this should not be the exception. It should be the rule. I salute with extreme interest the proliferation of junior research fellowships, also known as independent postdoc positions. They are not just my model of how you do a postdoc. In fact, they are my model of how you do science tout court. Another fun thing about doing science with fewer resources is that you really have to think more than twice about what you need and spend your money more wisely. Think of the difference between buying your own bike or building one from scratch. You may start pedaling sooner if you buy one, but only in the second case will you have a chance to build a bike that runs faster and better. In the long run, you may well win the race (of course you should never reinvent the wheel; it’s OK to buy those).

Of course, the big advantage of having many small labs over a few big ones is that you get to fund different approaches too. As our grandmothers used to say: it’s not good to keep all your eggs in the same basket. As happens in evolution, you have to diversify in science too³.

What can we (scientists) do? The bad news is, I don’t think these are problems that can be solved by scientists. You cannot expect unproductive tenure holders to give up their jobs. You cannot expect a young group leader to say no to tenure, now that they are almost there. You cannot expect a big lab to agree to reduce the number of people. Sure, all of them complain that they spend their time writing grants and cannot do the thing they love most – experiments! – anymore, because they are too busy. If you were to give them the chance to go back to the bench, they would prove as useless as an undergrad. They are not scientists anymore; they are managers. These are problems that only funding agencies can solve, pushed by those who have no choice other than to ask for a revolution, i.e. the younger generation.

Notes:

1. Surprise is, I believe, the major difference between science and technology. The man on the moon is technology, and we didn’t get there by surprise. Penicillin is science, and it came out of the blue, pretty much.

2. Figure is taken from Mervis, Science 2000. More recent data on the NSF website, here.

3. See Michael Nielsen’s post about this basic concept of life.

Update:

Both Massimo Sandal and Bjoern Brembs wrote posts in reply to this, raising some interesting points. My replies are in their blogs as comments.

Lots of smoke, hardly any gun. Do climatologists falsify data?

One of climate change denialists’ favorite arguments concerns the fact that weather station temperature data cannot always be used raw. Sometimes they need to be adjusted. Adjustments are necessary in order to compensate for changes that happened over time, either to the station itself or to the way data were collected: if the weather station gets a new shelter or gets relocated, for instance, we have to account for that and adjust the new values; if the time of day at which we read a certain temperature has changed from morning to afternoon, we have to adjust for that too. Adjustments and homogenisation are necessary in order to be able to compare or pool together data coming from different stations or different times.

Some denialists have problems understanding the very need for adjustments – and they seem rather scared by the word itself. Others, like Willis Eschenbach at Watts Up With That, fully understand the concept but still look at it as a somewhat fishy procedure. The denialists’ bottom line is that adjustments do interfere with readings, and if they are biased in one direction they may actually create a warming that doesn’t exist: either by accident or as a result of fraud.

To prove this argument they recurrently show this or that probe to have weird adjustment values, and if they find a warming adjustment they often conclude that the data are bad – and possibly the people too. Now, let’s forget for a moment that warming measurements go way beyond meteorological surface temperatures. Let’s forget satellite measurements, and let’s forget that data are collected by dozens of meteorological organizations and processed into several datasets. Let’s pretend, for the sake of argument, that scientists really are trying to “heat up” the measurements in order to make the planet appear warmer than it is.

How do you prove that? Not by looking at single probes, of course, but at the big picture, trying to figure out whether adjustments are used as a way to correct errors or whether they are actually a way to introduce a bias. In science, error is good; bias is bad. If a bias is being introduced, we should expect the majority of probes to have a warming adjustment. If the error correction is genuine, on the other hand, you’d expect a roughly normal distribution centered on zero.

So, let’s have a look. I took the GHCN dataset available here and compared all the adjusted data (v2.mean_adj) to their raw counterpart (v2.mean). The GHCN raw dataset consists of more than 13000 station records, but of these only about half (6737) pass the initial quality control and end up in the final (adjusted) dataset. I calculated the difference for each pair of raw vs adjusted data and quantified the adjustment as the trend of warming or cooling in degC per decade. I obtained in this way a set of 6533 adjustments (that is, 97% of the total – a couple of hundred were lost along the way due to the quality of the readings). Did I find the smoking gun? Nope.
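The core of that computation can be sketched in a few lines of Python. This is a minimal reconstruction, not the original script: the GHCN v2 file parsing is omitted, and the `adjustment_trend` helper (a made-up name) assumes we already have paired raw/adjusted annual series for one station.

```python
import numpy as np

def adjustment_trend(raw, adjusted, years):
    """Fit a linear trend to (adjusted - raw) and return it in degC/decade.

    raw, adjusted: annual mean temperatures for one station (same length);
    years: the corresponding years; NaN marks missing readings.
    """
    diff = np.asarray(adjusted, float) - np.asarray(raw, float)
    ok = ~np.isnan(diff)
    if ok.sum() < 2:
        return np.nan  # not enough overlap to fit a trend
    slope = np.polyfit(np.asarray(years, float)[ok], diff[ok], 1)[0]
    return slope * 10.0  # degC/year -> degC/decade

# Toy station: a +0.5 degC step adjustment applied from 1950 onward
rng = np.random.default_rng(0)
years = np.arange(1900, 2000)
raw = 10 + 0.1 * rng.standard_normal(years.size)
adjusted = raw + np.where(years >= 1950, 0.5, 0.0)
print(round(adjustment_trend(raw, adjusted, years), 3))  # ~0.075 degC/decade
```

Running something like this over all ~6500 station pairs and histogramming the results is what produces the distribution in the figure.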

Distribution of adjustment bias in the GHCN/CRU dataset

Not surprisingly, the distribution of adjustment trends² is a quasi-normal³ distribution with a peak pretty much around 0 (0 is the median adjustment and 0.017 C/decade is the average adjustment – the planet-warming trend in the last century has been about 0.2 C/decade). In other words, most adjustments hardly modify the readings, and the warming and cooling adjustments end up compensating for each other¹,⁵. I am sure this is no big surprise. The point of this analysis is not to check the good faith of the people handling the data: that is not under scrutiny (and not because I trust the scientists but because I trust the scientific method).
The point is actually to show the denialists that going probe after probe, cherry-picking those with a “weird” adjustment, is a waste of time. Please stop the nonsense.

Edit December 13.
Following the interesting input in the comments, I added a few notes to clarify what I did. I also feel I should explain better what we can learn from all this, so I am adding a new paragraph here (in fact, it’s just a comment promoted to paragraph).

How do you evaluate whether adjustments are a good thing?

To start, you have to think about why you want to adjust data in the first place. The goal of the adjustments is to modify your readings so that they can easily be compared (a) within a probe and (b) across probes. In other words: you do it because you want to (a) be able to compare the measures you take today with the ones you took 10 years ago at the same spot, and (b) be able to compare the measures you take with the ones your next-door neighbor is taking.

So, in short, you do want your adjustment to significantly modify your data – this is the whole point of it! Now, how do you make sure you do it properly? If I were in charge of the adjustment I would do two things. 1) Find another dataset – one that possibly doesn’t need adjustments at all – to compare my stuff with: it doesn’t have to cover the entire period; it just has to overlap enough to be used as a test for my system. The satellite measurements are good for this. If we see that our adjusted data agree well with the satellite measurements from 1980 to 2000, then we can be pretty confident that our way of adjusting data is going to be good before 1980 as well. There are limits, but it’s pretty damn good. Alternatively, you can use a dataset from a completely different source. If the two datasets arise from different stations, go through different processing, and yet yield the same results, you can go home happy.
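That first check boils down to a consistency test over the overlap period. Here is a hedged sketch with toy data (neither series is real satellite or station data, and `overlap_agreement` is a made-up helper):

```python
import numpy as np

def overlap_agreement(adjusted, independent):
    """Compare two annual anomaly series over their common period.

    Returns (correlation, trend difference in degC/decade). A high
    correlation and a small trend difference suggest the adjustment
    scheme behaves well where an independent record can check it.
    """
    a = np.asarray(adjusted, float)
    b = np.asarray(independent, float)
    t = np.arange(a.size, dtype=float)
    corr = np.corrcoef(a, b)[0, 1]
    trend_diff = (np.polyfit(t, a, 1)[0] - np.polyfit(t, b, 1)[0]) * 10.0
    return corr, trend_diff

# Toy 1980-2000 overlap: same underlying warming, independent noise
rng = np.random.default_rng(1)
t = np.arange(21)
surface = 0.02 * t + 0.05 * rng.standard_normal(t.size)
satellite = 0.02 * t + 0.05 * rng.standard_normal(t.size)
corr, dtrend = overlap_agreement(surface, satellite)
print(round(corr, 2), round(dtrend, 2))
```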

2) Another way of doing it is to remember that a mathematical adjustment is just a trick to overcome a lack of information on our side. We can take a random sample of probes and do a statistical adjustment. Then go back and look at the history of each station. For instance: our statistical adjustment tells us that a certain probe needs to be shifted +1 in 1941, but of course it will not tell us why. So we go back to the metadata and we find that in 1941 there was a major change in the history of our weather station – for instance, war and the subsequent relocation of the probe. Bingo! It means our statistical tools were very good at reconstructing the actual events of history. Another strong argument that our adjustments are doing a good job.
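The statistical part of that second check can be sketched with a toy single-breakpoint detector. This is a stand-in for real homogenisation algorithms, which are considerably more sophisticated; `find_step` and the station history are invented for illustration.

```python
import numpy as np

def find_step(series, years):
    """Locate the single most likely step change in a temperature series.

    Tries every candidate breakpoint and keeps the one minimizing the
    residual sum of squares of a two-level (before/after) fit.
    Returns (year of the step, size of the step).
    """
    series = np.asarray(series, float)
    best_year, best_step, best_rss = None, 0.0, np.inf
    for i in range(2, series.size - 2):
        before, after = series[:i], series[i:]
        rss = ((before - before.mean()) ** 2).sum() + \
              ((after - after.mean()) ** 2).sum()
        if rss < best_rss:
            best_year = years[i]
            best_step = after.mean() - before.mean()
            best_rss = rss
    return best_year, best_step

# Toy probe relocated in 1941: readings drop by 1 degC from then on
rng = np.random.default_rng(0)
years = np.arange(1900, 1980)
temps = 12 + 0.3 * rng.standard_normal(years.size)
temps[years >= 1941] -= 1.0
year, step = find_step(temps, years)
print(year, round(step, 1))  # a break detected around 1941, step close to -1.0
```

Cross-checking the detected year against the station metadata (the 1941 relocation) is exactly the kind of validation described above.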

Did we do any of those things here? Nope. Neither I, nor you, nor Willis Eschenbach, nor anyone else on this page actually tested whether the adjustments were good! Not even remotely so.
What did we do? We tried to answer a different question, namely: are these adjustments “suspicious”? Do we have enough information to think that scientists are cooking the data? How did we test this?

Willis picked a random probe and decided that the adjustments he saw were suspicious. End of story. If you think about it, his entire post is built around figure 8, which is simply a plot of the difference between adjusted data and raw data. There is no value whatsoever in doing that. I am sorry to be blunt about Willis like this – but that is what he did and I cannot hide it. No information at all.

What did I do? I just went a step back and asked myself: is there actually a reason in the first place to think that scientists are cooking the data? I did what is called a unilaterally informative experiment. Experiments can be bilaterally informative, when you learn something no matter what the outcome of the experiment is (these are the best); unilaterally informative, when you learn something only if you get a specific outcome and otherwise cannot draw conclusions; or not informative at all.
My test was to look for a bias in the dataset. If I had found that the adjustments introduce a strong bias, then I would know that maybe scientists were cooking the data. I could not be sure about it, though, because (remember!) the whole point of doing adjustments is to change the data in the first place! It is possible that most stations suffer from the same flaws and therefore need adjustments going in the same direction. That is why, had my experiment led to a biased outcome, it would not have been informative.
On the other hand, I found instead that the adjustments hardly change the value of the readings at all, and that means I can be pretty positive that scientists are not cooking the data. This is why my experiment was unilaterally informative. I was lucky.

This is not a perfect experiment, though, because, as someone pointed out, there could be a caveat. In former times the distribution of probes was not as dense as it is today, and since the global temperature is calculated by doing spatial averages, you may overrepresent warming or cooling adjustments in a few areas while still maintaining a pretty symmetrical distribution overall. So, to test this you would have to check the distribution not for the entire sample, as I did, but grid cell by grid cell. (I am not going to do this because I believe it is a waste of time, but if someone wants to, be my guest.)

Finding the right relationship between the experiment you are doing and the claim you make is crucial in science.

Notes.
1) Nick Stokes, in this comment, posts R code to do exactly the same thing, confirming the result.

2) What I consider here is the trend of the adjustment, not the average of the adjustment. Considering the average would be methodologically wrong. This graph and this graph both have an average adjustment of 0, yet the first one has a trend of 0 (and does not produce warming) while the second one has a trend of 0.4C/decade and produces 0.4C/decade of warming. If we were to consider the average, we would erroneously place the latter graph in the wrong category.

3) Not mathematically normal as pointed out by dt in the comments – don’t do parametric statistics on it.

4) The Python scripts used for this quick-and-dirty analysis can be downloaded as a tar.gz here or a zip here.

5) RealClimate.org found something very similar, but with a more elegant approach and on a different dataset. Again, their goal (like mine) is not to add pieces of scientific evidence to the discussion, because these tests are simple and nice but, let’s face it, quite trivial. The goal is really to show the blogosphere what kind of analysis should be done in order to properly address this kind of issue, if one really wants to.

The (global) warming of the blogosphere and the scientific method.

This post is also published on nFA. Please head there for comments.

Preface: to understand the corrections I attempt in this post, you should first read the post in which Aldo neatly summarizes some of the points around which blogosphere denialism of AGW revolves.

The Hockey Stick.

The hockey stick is one of the fixed points of the denialists, i.e. that particularly active group on the blogosphere and in certain media which denies that climate change exists or that it can be attributed to human activity. Why are the denialists so interested in these graphs? One of the reasons is that they believe, as Aldo writes, that:

The [hockey stick] graphs are the scientific basis of the Kyoto protocol.

This is not quite true. The Kyoto protocol was born on 11 December 1997, on the basis of the first IPCC reports, which date back to 1990 and 1995 (the IPCC is the supra-governmental scientific body commissioned by the United Nations). The first and most famous hockey stick graph, by Michael Mann and colleagues, appeared in the literature the following year, in 1998, and therefore only entered the IPCC with the third report, in 2001. The evidence that first led to the creation of the IPCC and then made the case for the Kyoto protocol was already ample well before the hockey stick appeared.

The main factor that led to the IPCC and Kyoto was the observation that the concentration of greenhouse gases had increased over the last century; there is no doubt whatsoever that the greenhouse effect warms the planet: this has been textbook physics for at least 150 years (the greenhouse effect was discovered by Joseph Fourier in 1824, and the link between the greenhouse effect and anthropogenic warming was first introduced by Svante Arrhenius in the 1890s).

So why does the hockey stick graph get all this attention among denialists? Probably because it is very easy for the public to grasp: it certainly makes a striking visual, and the media have used it extensively as a symbol of AGW. Al Gore himself makes heavy use of it in the documentary “An Inconvenient Truth”, during the famous crane scene.

Having clarified that, scientifically, the support for AGW goes well beyond the hockey stick graph, I think it is important to understand what the graph’s message is. Mann’s original paper is titled “Global-scale temperature patterns and climate forcing over the past six centuries”, i.e. from 1400 to 2000, as can be seen in figure 1b of the original work. Why only 1400? Because, as you can imagine, reconstructing the temperature of the globe back in time is not that simple, and the further back you go, the larger the error and the approximation become. The gist of that work, though, is that today’s temperature is certainly the highest of the last six centuries. Note that after the issue was framed in this context, several groups worked on paleoclimatological reconstruction, resorting to data, methods, and statistical and experimental approaches completely different from Mann’s original 1998 one.

For instance, today we have hockey stick graphs based on the retreat lines of glaciers:

Oerleman et al. Science 2005. “Extracting a Climate Signal from 169 Glacier Records”

based on historical ground temperature records (boreholes):

Pollack et al. Science 1998. Climate change record in subsurface temperatures: a global perspective

based on dendrochronology, i.e. the ability to measure temperature by “reading” tree rings (see dark orange and dark blue):

Osborn et al. Science 2006. The Spatial Extent of 20th-Century Warmth in the Context of the Past 1200 Years

Other methods use corals, algae, the logbooks of the great navigators, and so on.

Naturally, all these graphs, obtained independently by different groups, overlap nicely with the hockey stick graph of CO2 concentration calculated from ice cores at the poles.

IPCC Report 2007.

I think it is clear that all these independent measurements reinforce one another (2) and must therefore be read as a global picture.

That said, what is the strong point of these analyses, and what is the weak one? The strong point is that an increase in temperature in the last century compared to the previous ones is truly incontrovertible. The weak point is that it is hard to define “previous ones”, because the further back you go, the more variability there is. It is nonetheless a topic worth exploring, and for this reason other studies have been conducted that try to extend the readings as far back as possible. Aldo posts an example in his article (figure 2, taken from the IPCC report) showing readings obtained with different methods (each color is a different paper).

Aldo uses that graph to repeat a recurring point of the denialists, namely that climate changes are natural and cyclical. He claims that the graph

clearly shows a different pattern from that of the previous figure, with an increase in temperatures in the years after the year 1000.

In reality this is not true, and you can see it with the naked eye (if javascript works for you, hover the mouse over the next figure and move it away to see the overlap):

and in particular there is no big difference in the so-called Medieval Warm Period.

Even ignoring the colder measurements, the warmest readings (red and light blue lines) barely touch and pass the dashed reference line at ordinate 0 in the year 1000. The current temperature (black line, measured with thermometers) sits at ordinate 0.5 (note that these are not degrees but a measure of temperature anomaly). So there is no cyclicity, and on the basis of the data what Aldo reports is not at all justified, namely that

current temperatures are back to where they were in 1200.

They are not. Unless you take only the upper error margins as good, but I see no reason to do so.

To wrap up this part, there is one thing worth underlining: the 20th-century warming is noteworthy mainly for one reason, namely that while the trends of past centuries can all be explained fairly well by natural factors alone, the 20th-century warming can only be explained by adding the anthropogenic variable (4).

Let us come, then, to the alleged technical criticisms.

As Aldo says, Mann’s first paper on the hockey stick graph was criticized in 2003 by McKitrick (an economist at the University of Guelph, Ontario) and McIntyre (a former mining-industry employee, now a blogger). McIntyre is particularly well known to the denialist crowd because he runs a blog and a web forum (climateaudit.org) from which many of the attacks on climatologists are launched. M&M’s criticisms of Mann’s paper (published in a non-peer-reviewed journal in 2003 and here in 2004) concerned alleged statistical errors and were soon rebutted, first by the authors (here and then here), then by other independent studies (here and here).

In hindsight, the rebuttals, welcome as they were, would not even have been necessary, because over the years the hockey stick has become an increasingly shared piece of evidence, reproduced by at least a dozen other groups, in a completely independent way, using uncorrelated measurements (I gave examples of some at the beginning of this post).

McIntyre and McKitrick did not lose their verve, though, and carried on with their work as denialists. On the blog.

Infatti quando Aldo dice che

a famous article by a member of the [CRU] group, Keith Briffa, had been subjected to severe criticism

he is again referring to McIntyre and McKitrick, and to a post on their blog that tries to dismantle a 2006 paper by Briffa in Science based on dendrochronological measurements (temperature extrapolated from tree rings). Honestly, we are talking about a post on a denialist blog, and the matter would not deserve much attention here on nFA, but since Aldo calls it "severe criticism", it needs clarifying. McIntyre decides, on his blog, that the trees used by Briffa were arbitrarily selected and prefers to replace them with others:

As a sensitivity test, I constructed a variation on the CRU data set, removing the 12 selected cores and replacing them with the 34 cores from the Schweingruber Yamal sample.

The Schweingruber Yamal sample is a sample nobody uses because it has not yet been characterized. The ridiculous thing is that the result of McIntyre's new analysis is that the hockey stick flattens out completely (here, red line vs black line), thereby contradicting the only data that even a paranoid would hesitate to doubt, namely the instrumental data:

Instrumental records began around 1850. McIntyre's "severe criticism" is not even compatible with the thermometer.

The email leak.

Now to the alleged starting point: denialists break into the CRU mail server and steal email messages from 1996 to the present. They then release about a thousand of them, readable here. Naturally the denialist blogosphere explodes and drags along a good part of the traditional media. A list of the most scandalous emails is drawn up; many of these are emails in which CRU scientists speak with a certain rancor about the denialists. One can debate whether it is more or less elegant to call someone like McIntyre a "jerk" in a private conversation (I would do so without hesitation). It does not strike me as a sign of fraud. Other emails are clearly jocular (for example, in one a researcher says something like "global warming my foot, it's freezing cold today"). A review of the juiciest emails is discussed here and here by some of the protagonists (especially in the comments). I do not think this is the place to discuss them one by one.

Should we change our minds?

I would say definitely not. When things are explained, rather than merely reported, they take on a whole different complexion. This is why I advise anyone with a genuine interest in the matter to go deeper directly at the source. Unfortunately, AGW is one of the topics the press handles least professionally: faced with a practically universal scientific consensus, we find the press and the public split (especially in the US, and fortunately less so in Europe).

On nFA we constantly publish posts critical of the press (the "journalism" tag is the second largest by number of articles), and nobody is surprised by frequent ideological stances on economic matters. Why, then, should we decide on AGW to trust the press more than the scientific community?

The denialists are unable to produce material that withstands the scrutiny of the scientific community, and nearly all the criticism comes from the blogosphere. Most of these criticisms are simply ridiculous (I hope the examples in this post help make that clear), yet they have an enormous grip on the public and the media. The debate thus acquires two levels: one, scientific, which is in fact quite contentious on certain details (a current example, from what I read, concerns the controversy over what role El Niño will play in the medium term: more or less torrential rain?) but is completely ignored. The other, originating in the blogosphere, gains exaggerated attention and ends up misleading even people who, on other subjects, stand out for their healthy skepticism.

Notes:

  1. Signed but not ratified, though. Most countries ratified only after 2001. To date, 187 countries have ratified Kyoto, 8 have not taken a position, and only one, the US, has decided not to ratify (from here).

  2. E’ un po’ un esempio di cosa veramente vuol dire consenso, come cercavo di spiegare in questo commento nell’altra discussione.

  3. To tell the whole truth, even the designation "warm period" is far from accepted, and the IPCC 2007 report specifies that "current evidence does not support globally synchronous periods of anomalous cold or warmth over this time frame, and the conventional terms of 'Little Ice Age' and 'Medieval Warm Period' appear to have limited utility in describing trends in hemispheric or global mean temperature changes in past centuries".

  4. To close this part, I recommend a review, in English, very accessible to everyone (here).