
What is wrong with scientific publishing and how to fix it.

Randy Schekman’s recent decision to boycott the so-called glam mags Cell, Nature & Science (CNS) made me realize that I never expressed on this blog my view on the problems with scientific publishing. Here it comes. First, one consideration: there are two distinct problems that have nothing to do with each other. One is the #OA issue, the other is the procedural issue. My solution addresses both, but for the sake of the argument let’s start with the latter:

  1. Peer review is not working fairly. A semi-random selection of two or three reviewers is too unrepresentative of an entire field, and more often than not papers will be poorly reviewed.
  2. As a result of 1, the same journal will end up publishing papers ranging anywhere on the quality scale, from fantastic to disastrous.
  3. As a result of 2, the IF of the journal cannot be used as a proxy for the quality of single papers, nor even of the average paper, because the distribution is too skewed (the famous 80/20 problem).
  4. As a result of 3, there is no statistical correlation between a paper being published in a high-IF journal and its actual value, and this is a problem because it is commonly accepted that there should be one.
  5. As a result of 4, careers and grants are decided on the basis of a faulty proxy.
  6. As a result of 5, postdocs tend to wait years in the lab hoping to get that one CNS paper that will help them get the job – and, for the same reason, there are obviously strong incentives to publish fraudulent data.

Ok, so let’s assume tomorrow morning CNS cease to exist. They close down. How does this solve the issue? It doesn’t.

CNS are not damaging Science. They are simply sitting at the very top of the ladder of scientific publishing and they receive more attention than any other journal. Remove them from the top and we have just moved the problem a bit down the ladder, to whatever journal comes next. Some people criticise CNS for being great pushers of the IF system; “to start”, they say, “CNS could help by making public the citation data of single papers and not just the journal-wide aggregate IF”. This would be an interesting move (scientists love all kinds of data) but it would not solve the problem. The quality of papers would still be skewed, and the citation count of a single paper is not necessarily representative of its value, because bad papers, fake papers and sexy papers can end up being heavily cited anyway. Also, it takes time for papers to accumulate citations.

So what is the solution? The solution is to abolish pre-publication peer review as we know it. Just publish anything and get an optional peer review as a service (PRaaS) if you think your colleagues may help you get a better paper out. This could create peer-reviewing companies on the free market, and scientists would get paid for professional peer review. When you are ready to submit, you send the paper to a public repository. The repository has no editing service and no printing fees. It’s free and Open Access because costs are minimal anyway. What happens to journals in this model? They still exist but their role is now different. Nature, Cell and Science no longer deal with the editorial process. Instead, they constantly look through the pool of papers published on the repository and they pick and highlight the ones they think are the best, similarly to how a music or videogame magazine would pick and review for you the latest releases on the market. They still do their video abstracts, their podcasts, their interviews with the authors, their news and views. They still sell copies, but ONLY if they actually add value.

This system solves so many problems:

  1. The random lottery of the peer-review process is no more
  2. Nobody will tell you how you have to format your paper or what words you can use in your discussion
  3. Everything that gets published is automatically OA
  4. There are no publication fees
  5. There is still opportunity for making money, only this time in a fair way: scientists make money when they enrol for peer review as a service; journals continue to exist.
  6. Only genuinely useful journals continue to exist: the thousands of parasitic journals that now exist just because they are easy to publish with will perish.

Now, this is my solution. Comments are welcome.

The alternative is: I can publish 46 papers in CNS, win the Nobel prize using those papers, become editor of a journal (eLife) that does the very same thing CNS do, and then go to the Guardian with my j’accuse.

The Japanese Taxis

Someone says I have a peculiar tendency to dissect all of my experiences and place them in labelled boxes for the sake of understanding. It’s possibly true, and it is in the same spirit that I found myself dissecting Japan during my recent visit there. I was invited to teach at a Summer School for Master’s students in Biology and Computing at the Tokyo Institute of Technology and I decided to spend a few more days sightseeing. So for those who ask what I found most striking about Japan, my answer is going to be “Taxis”.

Japanese Taxis

Taxis like this are everywhere. Why are they special? Look at the car. I am not sure what model it may be, but one can easily bet this car was produced and sold sometime in the 80s, and then somehow time stopped and it never got old. It is perfectly polished, no wear and tear, not a single sign of ageing internally or externally. The car is fitted with improbable technological wonders, straight out of a sci-fi movie from the 80s: absolutely emblematic is a spring-operated mechanism that opens and closes the rear door. This taxi represents Japan for me. This is a country that lived through a huge economic boom from after the war all the way to the late 80s, with annual growth that surpassed any historical record. Then, suddenly, in 1991, it met an equally dramatic economic crisis with the explosion of a giant bubble, and everything stopped. Economists call the following years The Lost Decade because not much moved after the burst and, let me tell you, it’s perfectly visible. To this date, Japan still hasn’t really recovered from the bubble.

The reaction of the Japanese banks to the ’91 bubble was somewhat similar to what we have observed in the rest of the world after the 2008 crisis: quantitative easing, continuous bailouts and a rush to save banks. Bailouts were so common that Japan in the 90s was said to be “a loser’s heaven”. And yet, society responded very differently: the unemployment rate did not really skyrocket as it is doing now pretty much everywhere else; it remains one of the lowest worldwide, and what happened instead was a huge deflation. My naive gut feeling cannot help but put these two things together, but I don’t know enough to claim with certainty that one is really a consequence of the other, so go look for an answer somewhere else, and let me know if I was right, please. Anyhow, what is interesting is that people kept their jobs – albeit with decreasing salaries – and that meant society didn’t really collapse but simply froze. And that is why travelling in Japan is like travelling back in time to the early 90s: everything, from the furniture in hotel rooms to cars and even clothing and fashion, stopped in 1991. Granted, it’s a technologically advanced 1991, filled with the wonders of the time. Remember the dashboard of the DeLorean from Back to the Future (1985)? Yep, that is what Japan looks like. Why did things not get old? Why does the taxi above still look mint? A Japanese friend gave me the answer: Japan has a tradition of conservatism as opposed to consumerism. In Japan, a tool that is important for your life or job is a tool to which you must dedicate extreme care. Thus, taxi drivers polish and clean and care for and caress their cars as the samurai took care of their swords. Paraphrasing the first shogun Tokugawa Ieyasu: “the taxi is the soul of the taxi driver”1. Below, some pictures from the trip.

1. Well, the original quote would be “The sword is the soul of the samurai”. This goes a bit off topic, but during one of my jetlagged nights I found this video on the history of samurai swords quite interesting.

Can your lab book do this?

When I was a student I used to be a disaster at keeping lab books. Possibly because they weren’t terribly useful to me, since back then I had an encyclopedic memory for experimental details, or possibly because I never was much of a paper guy. As I grew older my memory started to shrink (oh god, did it shrink!), I started transforming data into manuscripts, and as a consequence I began to appreciate the convenience of going back 6 months in time and recovering raw data. Being a computer freak, I decided to give up on the paper lab book (I was truly hopeless) and turned to digital archiving instead. As they say, to each their own! Digital archiving really did it for me and changed my productivity enormously. One of the key factors, to be honest, was the very early adoption of sync tools like Dropbox that would let me work on my stuff from home or the office without any hassle.

As soon as I started having students, though, I realized that I needed a different system to share data and results with the lab. After a bit of experimentation that led nowhere, I can now finally say I found the perfect sharing tool within the lab: a blog content manager promoted to shared lab book (here). This is what it looks like:

A screenshot of our lab book in action

This required some tweaking but I can now say it works just perfectly. If you think about it, a blog is nothing less than a b(ook) log, so what better instrument to keep a lab book log? Each student gets their own account as soon as they join the lab, and day after day they write down successes and frustrations, attaching raw data, figures, spreadsheets, tables and links. Here are some of the rules and guidelines they need to follow. Not only can I go there daily and read about their results on my way home or after dinner, but I can quickly recall things with a click of the mouse. Also, as a bonus, all data are backed up daily to the Amazon cloud and each single page can be printed as PDF or on paper if needed. As you can see in the red squares in the picture above, I can browse data by student, by day, by project name or by experiment. That means that if I click on the name of a project I get all the experiments associated with it, no matter who did them. If I click on an experimental tag (for instance PCR) I get all the PCRs run by all the people in the lab.

Except for the protocols, all contents are set to be seen only by members of the lab. However, inspired by this paper, I decided that each project will be flagged as public as soon as its results are published.

Was will das Postdoc? (What does a postdoc want?)

Introduction.

A couple of months ago, an interesting article by Jennifer Rohn in Nature prompted an explosive discussion on the interweb about the career prospects of postdoctoral researchers. Jennifer’s point in a nutshell was “we should give postdocs the chance to keep being postdocs forever and ever if they so wish”, and it was met with overwhelming approval, at least judging by the comments on the original piece.

I, for one, would not really like to be a postdoc forever and, after a, cough cough, probably excessive rant, I made my points clear on these pages. To say it all, my post was sparked by the surprise of seeing so many postdocs showing enthusiasm at the idea of being stuck in their limbo forever. Am I the only weird one? Everyone claims their postdoc was the best time of their scientific experience; why didn’t I feel like that when it was my turn? Why do I enjoy being completely independent so much more instead? Part of the answer is, I am sure, that yes, I am weird indeed. The other part comes from the results of the poll I decided to put up in the following weeks.

Methods.

The survey was conducted with great scientific rigour pretty much in a random way. Except for an initial round on Facebook and the like, I didn’t advertise it much on social sites myself, because I felt that way I would mostly have reached people somehow close to my views and would have biased the sample. Instead I asked the participants to spread it around and gave them 4 weeks to do so.

Results.

After about a month, I came back and found a little more than 100 responses. The exact picture of the responses is available on the page here (I suggest you open the results page in a new window and scroll along as you read my comments, figure by figure).

84% of respondents were either in life science or physics. For some reason I think physics is heavily over-represented and life science is probably under-represented. This may be due to physicists being better at Twitter.

For how long have you been a postdoc? A very nicely skewed distribution. Apparently the median length of a single postdoc experience in the USA is 2.2 years, which matches nicely what I got. I cannot help but cringe when looking at the first and last bars of the graph: those who don’t know what is going to happen to them, and those who also don’t know what is going to happen to them, but in a different way.

Most people are at their first postdoc. Some at their second. Not much else to comment on here.
67% of respondents want (or dream) to continue their career in academia. This matches exactly what was found by the National Academy of Sciences in the USA. Dear God, I didn’t know I was so good at making surveys!

First surprising result: only 6% have no clue what to do with their life. I am sure that if we were to ask PhD students a similar question we would get a different picture. There are two possible explanations for why almost 70% of postdocs want to pursue research in academia: the first one is that they really still love it, no matter what they go through; the second one is that they have invested (one is tempted to say wasted) so much time and effort and money and personal relationships in it that they cannot admit to themselves that maybe it was the wrong choice. The latter is exactly what the Theory of Cognitive Dissonance predicts. Namely,

inconsistency among beliefs and behaviors will cause an uncomfortable psychological tension. This will lead people to change their beliefs to fit their actual behavior, rather than the other way around, as popular wisdom may suggest.

So, we are all a bunch of stubborn delusionals. Are we?

Where do you see your career progressing? Here things get a bit more grey in colour. Suddenly 1/3 of respondents are not sure they’ll manage to land an academic position, and most seem to start growing doubts. This is particularly relevant if you consider that most people are still within their first 3 years of postdoc.

And now is where things get a bit more negative indeed.

Most people think they are not well informed about alternative careers outside the academic path, and only 10% of respondents seem to have no concerns. About 70% of folks are quite worried indeed.

The next two questions are about the environment in the lab: a slim majority think their PI doesn’t really care much about their career progression, but it’s really not as bad as one may think.

Most postdocs, on the other hand, enjoy a lot of freedom: more than 60% seem to be experiencing complete or almost complete academic freedom.

Then here comes the result that makes me think I am not so weird after all! 70% or so of respondents would be quite ready or ready to be an assistant professor starting next Monday! (OK, let’s say Tuesday, given that Monday is a bank holiday.) This is astonishing considering, again, that most people have < 3 years of postdoctoral experience. I personally put very little value on experience in this job and I keep saying that either you are ready from the start to be a scientist by yourself or you will never really be. “From the start” can be anything between 5 and 25 years of age.

Yet, I do realize that I may be a bit extreme in my view, given that most people recognized their postdoc was in fact useful preparation for the next step.

Now, the next question is again an experiment in social psychology. The question is: do you feel more or less suited to being a researcher than your peers? An overwhelming majority think they are better than average, confirming yet again what scholars call Illusory Superiority. Those with Impostor Syndrome were probably all Darwinianly selected out during the PhD.

A majority of respondents would not mind a job in research. Good to hear.

Then comes the question that started it all. Would you enjoy being a postdoc for life? Would you enjoy being an independent postdoc? Most people would rather be an Independent Postdoc (75%) than a Postdoc for life (42%). Ah! I knew it!

Finally, the last two questions: people are quite divided on whether funding agencies should fund fewer postdocs but with higher salaries. In fact, only 42% think it would be right to do so. Awww, I am moved. What a socialist, altruistic bunch postdocs are! And, if that was not enough, an overwhelming majority think it is a bad idea to be protectionist about the job and do not think that their country should make it difficult for foreign people to get a postdoc position. I am so proud of you guys!

Discussion.

The first thing to notice is that I may have an alternative career in designing surveys. The second thing: we confirmed at least two important theories of social psychology; too bad I am too late to publish them. (Note: the Freudian title of this post is dedicated to my staggering successes in the field of psychology indeed.)

Third: if you look into how to redistribute money, take into account what postdocs want. They are scientists and they want to be treated as such. In short: they think they are better than their peers, they want freedom, and they don’t care that much about money after all. Their priority is to keep doing research, whether in academia or industry, whether independently or as postdocs.

I gave my recipe for “what must change in Science and how” already, no need to repeat myself. Glad to see I wasn’t speaking only for myself!

 

Patches for Nautilus “move to trash” bug

Warning, this post contains a geek rant.

If you use the nautilus or nautilus-elementary file manager (the default file manager in any GNOME-based Linux distro, including Ubuntu), you are probably aware of the annoying bug with file deletion.

Like any other file manager, nautilus allows you to delete your files using keyboard shortcuts: permanently (hit <Shift-Delete>) or temporarily, by moving them to the trashbin (hit <Delete> on your keyboard). Removing files is always a critical action, so every other file manager makes sure that you don’t do it accidentally: the file manager in MacOS, Finder, requires you to hit the key combination <AppleKey+Delete>, which is difficult to perform by mistake. Microsoft Explorer, Konqueror, Thunar and many others will ask you to confirm that you really want to trash files with a dialog box.

Unfortunately nautilus lacks this safeguard: if you, your toddler or your cat accidentally hit the Delete key while a file or folder is selected, it goes into the trashbin without warning. If you are not looking at the screen while this happens, the item is well and truly gone. Obviously, this flaw was pointed out a long time ago. Users started asking for a fix as early as 2004 (that is seven years ago!) and lots of people wanted to get it fixed: see for instance here, here, here, here, here, here.

Surprisingly, the reactions of the GNOME developers to this problem were of two kinds: “I don’t think this is a real problem”[¹] or “I don’t think you are proposing the perfect solution”[²]. Back in 2009, I accidentally lost some files and wrote a patch to fix this bug. The patch simply gave the user the option to activate a warning dialog if they wanted to. I figured “people who want the dialog will enable it and be happy, people who don’t will leave it alone and keep discussing what is really truly the best solution for the next 7 years”. Believe it or not, the problem still exists, so I thought of raising the issue once again (this time, I also proposed a patch to change Delete into Control-Delete).

Guess what: even after 7 years and hundreds of people begging for a fix, we are stuck with the same attitude:

This is a real problem, but I don’t think the solution is a windows-like alert dialog. […] An animation with the file becoming red and/or flying to the trash would be a nice addition.

Or maybe a small cluebar with an embedded undo button would already be enough. I like how Google does it in its webapps.

What if deleted files were visible as some ghost-like-icon in the directory they used to be? And it could be possible to turn on/off the visibility of deleted files? And you can have your animation then as well; of an icon that dies.

I think your use case is a real concern, and something we should fix indeed, but as others said in this thread, I don’t think a confirmation dialog is how we want this to be implemented, especially when it carries a new preference with it.

Personality i like my delete and it would felt awkward if the delete didn’t delete anything.

We would rather keep the hole than have a solution we don’t like. Little does it matter if every other file browser is actually using that solution or if lots of people want to see the thing fixed.

This attitude is amazingly complicated for my simple brain to understand. For me, getting things done means finding the meeting point between the optimal solution and the best achievable outcome. If my car gets a flat tire on the way, I will accept any new tire a rescuer gives me, and I won’t sit there for 7 years waiting for one that really matches the other three. And I like to think this is not the true Linux philosophy either.

Anyway, here you can download the patches to fix this issue.

I am using the second one on nautilus-elementary (which also sports a very convenient Undo feature).
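For those who just want the gist without reading C, below is a minimal Python sketch of the behaviour the first patch adds: an opt-in confirmation before anything is moved to the trash. This is only an illustration of the idea – the actual patches are written in C against the nautilus code base, and the trash path and preference name here are made up for the example.

    import os
    import shutil

    # Typical FreeDesktop trash location; a faithful implementation would also
    # write the matching .trashinfo metadata, which this sketch skips.
    TRASH_DIR = os.path.expanduser("~/.local/share/Trash/files")
    CONFIRM_TRASH = True  # stand-in for the opt-in preference the patch introduces

    def move_to_trash(paths, confirm=CONFIRM_TRASH):
        """Move the given files to the trash, asking first when confirmation is enabled."""
        if confirm:
            answer = input("Move %d item(s) to the trash? [y/N] " % len(paths))
            if answer.strip().lower() != "y":
                return False
        os.makedirs(TRASH_DIR, exist_ok=True)
        for path in paths:
            shutil.move(path, TRASH_DIR)
        return True

People who want the question enable it; people who don’t leave the preference off and lose nothing – which was the whole point of making it optional.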

Edit 1 April 2011. Much to my pleasure, the patch has now been accepted and, from the next version on, <Control+Delete> will be the shortcut to send stuff to the trash. No more accidental deletions! Open Source wins again.

Edit June 2011. If you arrived at this page because you freaked out upon finding the new nautilus behaviour, this is how to get back to the old key combo.

Notes:

1. You are all familiar with the “how many people does it take to change a light bulb?” jokes.
The one about software developers goes like this:

Q: How many developers does it take to change a lightbulb?
A: The lightbulb works fine on the system in my office. NOT REPRO.

2. The one about C++ programmers goes like this:

Q: How many C++ programmers does it take to change a lightbulb?

A: You’re still thinking procedurally. A properly-designed lightbulb object would inherit a change method from a generic lightbulb class, so all you’d have to do is send it a bulb.change message.

 

Postdoc, love or hate? A survey.

Following last week’s discussions about the tough life of a postdoc, I’ve realized more data are needed before making general assumptions about what postdocs want and need. Jennifer Rohn’s post had an overwhelming response from sympathizing postdocs who would love to have a “postdoc for life” position, and I didn’t find this surprising. What came as a bit of a surprise to me, though, is that the other voice was hardly heard.

I think the problem has deeper roots and will have to be solved by completely changing the way we define a laboratory.

For the sake of smarter discussion, I am setting up a survey aimed at all postdocs out there. You’ll find it here: http://thepostdoctrap.gilest.ro

I am not doing this just because I care about the issue: I have been invited to a meeting organized by the postdocs of the MPI-CBG in late May and I’d love to give those guys some numbers. So, please, take the survey and come back in a couple of months for the results.

I am a postdoc and I think I just realized I have been screwed for years

It seems that in the past two weeks someone has started going around lifting big stones in the luxurious and exotic garden of science, finding the obvious grossness underneath. To be more precise, the topic being discussed here is: “I am a postdoc and I think I just realized I have been screwed for years”.

A couple of weeks ago, a friend of mine blogged about his decision to leave academia after yet another nervous breakdown. I leave it to his words to describe what it means to realize in your early thirties that your childhood dream won’t become a reality because the job market is broken and you can’t cope with that stress. To be honest, while I sympathize with him,  I find his rant extreme, but what is more important than discussing anecdotal experiences is actually the huge number of comments that post had, not only on the blog but also on social discussion websites. Literally hundreds of comments from people who went through similar experiences, culminating with the epiphany that finding a job in academia is freaking difficult.

This discussion is not new, of course. Occasionally people from academia feel the urge to let postdocs and PhD students know that this is a very risky road. See Jonathan Katz’s opinion from back in 2005, for instance.

Why am I (a tenured professor of physics) trying to discourage you from following a career path which was successful for me? Because times have changed (I received my Ph.D. in 1973, and tenure in 1976). […] American universities train roughly twice as many Ph.D.s as there are jobs for them. When something, or someone, is a glut on the market, the price drops. In the case of Ph.D. scientists, the reduction in price takes the form of many years spent in “holding pattern” postdoctoral jobs. Permanent jobs don’t pay much less than they used to, but instead of obtaining a real job two years after the Ph.D. (as was typical 25 years ago) most young scientists spend five, ten, or more years as postdocs. They have no prospect of permanent employment and often must obtain a new postdoctoral position and move every two years.

Pretty topical still, isn’t it? Although these arguments do emerge now and then, they do so way less often than they should¹. Why? The main reason is that PIs have really nothing to gain from changing the current situation: as it is now, they find the field overcrowded with postdocs who cannot do anything else but stay in the lab, hoping to get more papers than their competitors, waiting for the unlucky ones to drop out and reduce competition. That means it’s easy for PIs to get postdocs for cheap and keep them in the lab as long as possible.

Of course there could be an even better scenario for PIs: postdocs who never leave the lab! Let’s face it: having so many postdocs to choose from is nice, but many of them aren’t actually that good, and it also takes time for them to acquire certain skills. So why not give them the chance to stay 20 years in the same lab? This is exactly what Jennifer Rohn was advocating in Nature last week. I think in her editorial Jennifer actually rightly identifies the problem:

The system needs only one replacement per lab-head position, but over the course of a 30–40-year career, a typical biologist will train dozens of suitable candidates for the position. The academic opportunities for a mature postdoc some ten years after completing his or her PhD are few and far between.

But she fails to provide the right solution:

An alternative career structure within science that professionalizes mature postdocs would be better. Permanent research staff positions could be generated and filled with talented and experienced postdocs who do not want to, or cannot, lead a research team — a job that, after all, requires a different skill set. Every academic lab could employ a few of these staff along with a reduced number of trainees. Although the permanent staff would cost more, there would be fewer needed: a researcher with 10–20 years experience is probably at least twice as efficient as a green trainee.

I cannot even begin to say how full of rage this attitude makes me. This position is so despicable to me! Postdoc positions exist, in the first place, because they provide a buffer for all those who would like to get a professor job but cannot, due to the limited market. Any economist would tell you that the solution is not to transform this market into something even more static but to increase mobility, for Newton’s sake! Sure, some postdocs may realize too late that they don’t really want to be independent and they would gladly keep doing what they are doing for some more time: this is what positions in industry are for², and this is what a lab tech position is for. No need to invent new names for those jobs.

So, here I propose an alternative solution: what about giving postdocs the chance to be independent, without necessarily being bound to running a 4-person lab to start with, and without the need to hold a tenured position? What about redistributing resources so that current PIs have smaller labs, and 1 or 2 more people somewhere else get the chance to start their own careers? Isn’t this fairer?

I wrote about this before, so I won’t repeat myself: in short, the big lab model is not sustainable anymore and it is not fair!

The problem, Jennifer, is not that postdocs want to stay longer in the lab: the problem is that they want out!

Notes

1: a recurrent question in the new Open Science community is “should scientists be blogging?”. My answer is yes, definitely (in fact, that’s what I am doing), but I don’t expect them to blog about their opinion on the latest paper in their field. I don’t think that is so useful, actually. I’d rather have them talk about their daily life as scientists and speak freely and loudly about controversial issues.

2: My wife is one of them: she realized she didn’t want to have anything to do with academia anymore and she moved to industry, where she got a salary that was more than twice the one she was getting at the university for doing pretty much the same job, without worrying about fellowships and competition. She has never been happier at work, either.

Of scientists and entrepreneurs.

Prepare to jump. Jump.

As my trusty 25 readers would know, a few months ago I made the big career jump and moved from being on the bench side of science to the desk side, becoming what is called a Principal Investigator (PI). As a matter of fact nothing really seems to have changed so far: I hold a research fellow position at Imperial College, meaning that I am a one-man lab: I still have to plan and execute my experiments, still have to write my papers and deal with them, still have to organize my future employment – all exactly as I was doing before.

Me, in my lab. Feb 2011

However, starting your own lab is still a formalization of walking on your own legs and, as such, one must be prepared to encounter new challenges. Unfortunately no one ever really prepared me for this: we spend a great deal of time as PhD students and postdocs learning skills that won’t necessarily help with the next steps, and when the moment comes to be really independent, a lot of people feel lost in translation. This can bring frustration to the PI (who finds themselves completely unprepared for the new role) and to their students (who find themselves led by someone who is completely unprepared for that role). I have seen this happen countless times.

Scared by the idea of ending up like this, I actually started thinking about how things would evolve quite some time ago. It’s easy: you just take inspiration from the PIs around you. You start with all those who work in the same institute or department, for instance, and you try to figure out what they do right and what they do wrong, and learn by Bayesian inference: I like that, I don’t like this, I want to be like that, I don’t want to be like this. If you are more of a textbook person, you can also get yourself one of those “How to be a successful PI” guidebooks; they are particularly popular in the USA and some people find them helpful. Did that too, found it a bit dumb.

Look around.

Finally, there is a third strategy you may want to follow, and that is: find inspiration and stories of success in people who are doing things completely different from what you do. The rationale of this strategy lies in the assumption that certain people will be good at what they do, no matter what that is. They have special skills that make them successful, whether they are running a research lab, a law firm or a construction business. A good gymnasium (in the Greek sense of the word) in which to get in touch with such people is the entrepreneurial world. There are several analogies between being the founder of a, let’s say, computer startup and being a newly appointed PI. Here are some examples off the top of my head:

  • both need to believe in themselves and in what they do, more than anybody else around them
  • both need to convince people that what they want to do is worth their investment, whether that is millions in venture capital or breadcrumbs of research grant money
  • both have to choose very carefully the people they will work with
  • both have to find their niche in a very competitive market or else, if they would rather go after the big competition, they need to make sure their product is better in quality and/or appeal
  • both need to innovate and always be ahead of competition
  • they both chose their career because they enjoy being their own bosses (or at least they had better)
  • both need to learn how to overcome difficult times by themselves (“loop to point 1” is one solution)
  • et cetera

If you are not yet convinced about this, read this essay by angel investor Paul Graham titled “What we look for in founders“. If I were to substitute the word “founder” with “scientist”, you would not even notice.

These are the reasons why a couple of years ago I started following the main community of startup founders on the web, Hacker News. It’s a social community composed of people with a knack for entrepreneurship – some of them extremely successful (read $$$ in their world). Most of them are computer geeks, which is good for my purposes, as they are yet another category of people who share a lot with scientists, namely: social inepts who’d love to improve their relationship skills but dedicate way too much time to work.

So the question now is: what did I learn from them? To begin with, I reinforced my prejudice: that scientists and entrepreneurs have a lot in common and that certain people would be successful in anything they do. This is a crucial starting point because you’ll find that there is way more information on how to be a successful entrepreneur than on how to be a successful academic – I still don’t have a good explanation for why that is, actually. The moment you accept that, your sample of cases grows exponentially and you have much more material for your inference-based learning. I am no longer limited to taking inspiration from other scientists, but also from successful companies. This is actually not so obvious to most people. For instance, every now and then a new research institute is born with the great ambition of being the next big thing. They decide to follow the path of those institutes that succeeded in the past, assuming there is something magic in their recipe, and because the sample set is limited they always end up naming the same names: LMB, CSHL, EMBL, Carnegie… Why does nobody take Google as an example? Or Apple? Or IBM? I am actually deeply convinced that if Google were to create a Google Research Institute, they would be amazingly successful. They have already made exciting breakthroughs in (published!) research with Google Flu Trends or the Google Books Project. If they were to philanthropically extend their research interests to other fields, they would make a lot of people bite the dust (I’d kill to work at a Google Research Institute, by the way. Wink wink.).

Some examples of relevant things I learned by looking at the entrepreneurial world.

1. Talking about Google, I find their policy of incentivizing people to work 20% of their time on something completely unrelated to their project extremely smart. Quoting Wikipedia:

As a motivation technique, Google uses a policy often called Innovation Time Off, where Google engineers are encouraged to spend 20% of their work time on projects that interest them. Some of Google’s newer services, such as Gmail, Google News, Orkut, and AdSense originated from these independent endeavors.[177] In a talk at Stanford University, Marissa Mayer, Google’s Vice President of Search Products and User Experience, showed that half of all new product launches at the time had originated from the Innovation Time Off.[178]

The irony behind this, actually, is that I am willing to bet my pants that this idea was in fact borrowed from academia: or rather, from how academia should be but no longer is.

2. Freedom is the main reason why I chose the academic path and I find people who know how to appreciate freedom (and make it fruitful) very inspirational. See for instance this essay by music entrepreneur Derek Sivers on “Is there such a thing as too much freedom?” or his “Delegate or die“.

3. On a different note, I appreciate tips on how to deal with hiring people. See for instance “How to reject a job candidate without being an asshole“. I wish more people would follow this example. Virtually no one in academia will ever tell you why you didn’t get the job, even though it’s every scientist’s duty to give direct, straight feedback about other people’s work (it is in fact the very essence of peer reviewing!). I was on the job market last year for a tenure-track position and it was a very tough year in terms of competition. The worst ever, apparently. Each open position had at least 100 or 200 applicants, of which half a dozen on average were then called for interview. I had a very high success rate in terms of interview selection, being invited to something like 15 places out of the 50 applications I sent. Many of them happened to be the best places in the world. Many of them didn’t work out, and NONE of them offered any kind of feedback to the interviewed applicants. NONE of them actually took the time to say “this is what didn’t convince us about your interview”. What a shame.

4. I am not the kind of scientist who aims to spend his entire career on one little aspect of something; I enjoy taking new roads (talking about freedom again, I guess). So companies like Amazon or Apple, constantly changing their focus, are a great inspiration.

5. Startup founders know two unwritten rules: “execution is more important than the idea” and “someone else is probably working on the same thing you are”. Read about the Facebook story to grasp what I am talking about. Here it is also well summarized (forget point 3, though; that doesn’t apply to science, I believe).

6. Finally, as someone who starts with a tiny budget and who has a passion for frugality, I found the concept of ramen profitability very interesting: think big, but start small. That’s exactly what I am doing right now.

What has changed in science and what must change.

I frequently have discussions about funding in Science (who doesn’t?) but I realized I never really formalized my ideas about it. It makes sense to do that here. A caveat before I start: everything I write about here concerns the bio/medical sciences, for those are the ones I know. Other fields may work in different ways. YMMV.

First of all, I think it is worth noticing that this is an extremely hot topic, yet not really controversial among scientists. No matter whom you talk to, not only does everyone agree that the status quo is completely inadequate, but there also seems to be a consensus on what kind of things need to be done and how. In particular, everyone agrees that

  1. more funding is needed
  2. the current ways of distributing funding and measuring performance are less than optimal

When everybody agrees on what has to be done but things are not changing it means the problem is bigger than you’d think. In this post I will try to dig deeper into those two points, uncovering aspects which, in my opinion, are even more fundamental and controversial.

Do we really need more funding?

The short answer is yes but the long answer is no. Here is the paradox explained. Science has changed dramatically in the past 100, 50 (or even 10) years, mainly because it advances at a speedier pace than anything else in human history, and simply enough we were (and are) not ready for this. This is not entirely our fault since, by definition, every huge scientific breakthrough comes as an utter surprise and we cannot help but be unprepared for its consequences¹. We did adapt to some of the changes, but we did it badly, and for all too many aspects we did not adapt at all. In short, everyone is aware of the revolution science has undergone in the past decades, yet no one has ever heard of a similar revolution in the way science is done.

A clear example of something we didn’t change but should have is the fundamental structure of universities. In fact, that hasn’t changed much in the past 1000 years if you think about it. Universities still juggle teaching and research, and it is still mainly the same people who do both. This is a huge mistake. Everybody knows those things have nothing in common and there is no reason whatsoever for them to lie under the same roof. Most skilled researchers are awful teachers and vice versa, and we really have no reason to assume it should not be this way. A few institutions in the world concentrate fully on research or only on teaching, but this should not be the exception, it should be the rule. Separating teaching and research should be the first step to really being able to understand the problems and allocate resources.

Tenure must also be completely reconsidered. Tenure was initially introduced as a way to guarantee academic freedom of speech and action. It was an incentive for thoughtful people to take a position on controversial issues and raise new ones. It does not serve this role anymore: you will get sacked if you claim something too controversial (see Lawrence Summers’ case) and your lab will not receive money if you are working on something too exotic or heretical. Now, I am not saying this is a good or a bad thing. I am just observing that the original meaning of tenure is gone. Freedom of speech is something that should be guaranteed to everyone, not just academics, through constitutional law, and freedom of research is not guaranteed by tenure anyway, because you don’t get money to do research from your university, you just get your salary. It’s not 1600 anymore, folks.

Who is benefiting from tenure nowadays? Mainly people who have no other means of paying their own salary, that is, researchers who are inactive or poorly productive and feel no obligation to change because they will get their salary at the end of the month anyway. This is the majority of academics, not only in certain less developed countries – like Italy, sigh – but pretty much everywhere. Even in the US or UK or Germany many departments are filled with people who publish badly or scarcely. Maybe they were good once, or maybe it was just easier to get a job in their time. Who pays for their privilege? The younger generation, of course.

Postdoc numbers keep growing. Academic positions do not².

The number of people entering science grows every year², especially in the life sciences. The number of academic positions and the extent of funding are far from sufficient to cover current needs. In fact, only about 1-2 in 10 postdocs will manage to find a job as professor and, among those who do, the funding success rate is again 20-30% in a good year. In short, even if we were to increase the scientific budget by 5 times tomorrow morning, that would still not be enough. This means that even though it would sure be nice to have more money, it’s utopian to think that this will help. Indeed, we need to revolutionize everything, really. People who have tenure should not count on it anymore and they should be ready to leave their job to somebody else. There is no other way, sorry.

Do we really need better forms of scientific measurement?

No. We need completely new forms of scientific measurement. And we need to change the structure of the lab. Your average successful lab is composed of 10 to 30 members, most of them PhD students or postdocs. They are the ones who do the work, without a doubt. In many cases, they are the ones who do the entire work not only without their boss, but even despite the boss. This extreme eventuality is not the rule, of course, but the problem is: there is no way to tell the two cases apart! The principal investigator, as they say in the USA, or the group leader, as it is called with less hypocrisy in Europe, will spend all of their time writing grants to get funded, speaking at conferences about work they didn’t do, writing or merely signing papers. Of course leading a group takes some rare skills, but those are not the skills of a scientist; they are the skills of a manager. The system as it is does not reward good scientists, it rewards good managers. You can exploit the creativity of the people working for you and be successful enough to keep receiving money and be recognized as a leader, but you are feeding a rotten process. Labs keep growing in size because postdocs don’t have a chance to start their own lab and because their boss uses their work to keep getting the money the postdocs should be getting instead. This is an evil loop.

This is a problem that scientometrics cannot really solve, because it’s difficult enough to grasp the importance of a certain discovery, let alone the actual intellectual contribution behind it. It would help to rank laboratories not just by the number of good publications, but by the ratio between good papers and the number of lab members. If you have 1 good paper every second year and you work alone, you should be funded more than someone who has 4 high-profile publications every year but a group of 30 people.
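To put numbers on that comparison, here is a trivial back-of-the-envelope sketch (mine, not part of the original argument) of such a normalization, using the two hypothetical labs above:

    # Good papers per person per year for the two hypothetical labs above.
    def papers_per_person_year(good_papers, years, members):
        return good_papers / (years * members)

    solo_lab = papers_per_person_year(good_papers=1, years=2, members=1)   # 0.50
    big_lab = papers_per_person_year(good_papers=4, years=1, members=30)   # ~0.13
    print(solo_lab, big_lab)

By this crude metric the one-person lab is roughly four times more productive per head, which is the whole point of normalizing by lab size.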

Some funding agencies, like HHMI, MRC and recently the Wellcome Trust, decided to sidestep the scientometric problem and fund groups independently of their research interests: they say “if you prove to be exceptionally good, we give you loads of money and trust your judgement”. While this is a commendable approach, I would love to see how those labs would rank when you account for the number of people: a well-funded lab will attract the best students and postdocs, and good lab members make a lab well funded. There you go with the evil loop again.

In gg-land, the imaginary nation I am supreme emperor of, you can have a big lab but you must really prove you deserve it. Also, there are no postdocs as we know them. Labs have students who learn what it means to do science. After those 3-5 years either you are ready to take the leap and do your stuff by yourself or you’ll never be ready anyway. Don’t kid yourself. Creativity is not something you can gain with experience; if at all, it’s the other way around: the older you get, the less creative you’ll be.

Some good places either had a tradition (LMB, Bell Labs) or have the ambition (Janelia) of keeping groups small and doing science the way it should be done. Again, this should not be the exception. It should be the rule. I salute with extreme interest the proliferation of junior research fellowships, also known as independent postdoc positions. They are not just my model of how you do a postdoc. In fact they are my model of how you do science tout court. Another fun thing about doing science with fewer resources is that you really have to think more than twice about what you need and spend your money more wisely. Think of the difference between buying your own bike or building one from scratch. You may start pedalling sooner if you buy one, but only in the second case will you have a chance to build a bike that runs faster and better. In the long run, you may well win the race (of course you should never reinvent the wheel; it’s OK to buy those).

Of course, the big advantage of having many small labs over a few big ones is that you get to fund different approaches too. As our grandmothers used to say: it’s not good to keep all your eggs in one basket. As happens in evolution, you have to diversify in science too³.

What can we (scientists) do? The bad news is, I don’t think these are problems that can be solved by scientists. You cannot expect unproductive tenure holders to give up their jobs. You cannot expect a young group leader to say no to tenure, now that they are almost there. You cannot expect a big lab to agree to reduce the number of people. Sure, all of them complain that they spend their time writing grants and cannot do the thing they love the most – experiments! – anymore because they are too busy. But if you were to give them the chance to go back to the bench, they would prove as useless as an undergrad. They are not scientists anymore, they are managers. These are problems that only funding agencies can solve, pushed by those who have no other choice than to ask for a revolution, i.e. the younger generation.

Notes:

1. Surprise is, I believe, the major difference between science and technology. The man on the moon is technology and we didn’t get there by surprise. Penicillin is science and comes out of the blue, pretty much.

2. Figure is taken from Mervis, Science 2000. More recent data on the NSF website, here.

3. See Michael Nielsen’s post about this basic concept of life.

Update:

Both Massimo Sandal and Bjoern Brembs wrote posts in reply to this, raising some interesting points. My replies are in the comments on their blogs.

Lots of smoke, hardly any gun. Do climatologists falsify data?

One of climate change denialists’ favorite arguments concerns the fact that weather station temperature data cannot always be used raw. Sometimes they need to be adjusted. Adjustments are necessary in order to compensate for changes that happened over time either to the station itself or to the way data were collected: if the weather station gets a new shelter or gets relocated, for instance, we have to account for that and adjust the new values; if the time of day at which we read a certain temperature has changed from morning to afternoon, we have to adjust for that too. Adjustments and homogenizations are necessary in order to be able to compare or pool together data coming from different stations or different times.

Some denialists have problems understanding the very need for adjustments – and they seem rather scared by the word itself. Others, like Willis Eschenbach at Watts Up With That, fully understand the concept but still look at it as a somehow fishy procedure. The denialists’ bottom line is that adjustments do interfere with readings and, if they are biased in one direction, they may actually create a warming that doesn’t exist: either by accident or as a result of fraud.

To prove this argument they recurrently show this or that probe to have weird adjustment values, and if they find a warming adjustment they often conclude that the data are bad – and possibly the people too. Now, let’s forget for a moment that warming measurements go way beyond meteorological surface temperatures. Let’s forget satellite measurements, and let’s forget that data are collected by dozens of meteorological organizations and processed into several datasets. Let’s pretend, for the sake of argument, that scientists really are trying to “heat up” measurements in order to make the planet appear warmer than it really is.

How do you prove that? Not by looking at single probes, of course, but at the big picture, trying to figure out whether adjustments are used as a way to correct errors or whether they are actually a way to introduce a bias. In science, error is good, bias is bad. If we think that a bias is being introduced, we should expect the majority of probes to have a warming adjustment. If the error correction is genuine, on the other hand, you’d expect a roughly normal distribution.

So, let’s have a look. I took the GHCN dataset available here and compared all the adjusted data (v2.mean_adj) to their raw counterparts (v2.mean). The GHCN raw dataset consists of more than 13000 station records, but of these only about half (6737) pass the initial quality control and end up in the final (adjusted) dataset. I calculated the difference for each pair of raw vs adjusted data and quantified the adjustment as a warming or cooling trend in degC per decade. In this way I got a set of 6533 adjustments (that is, 97% of the total – a couple of hundred were lost along the way due to the quality of the readings). Did I find the smoking gun? Nope.
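For the curious, here is a minimal sketch of the kind of analysis described above (the actual quick-and-dirty scripts are linked in note 4 below; this one is only an illustration). The GHCN v2 fixed-width layout is assumed from memory – 12-character station id, 4-digit year, twelve monthly values in tenths of a degree, -9999 for missing – so check the dataset README before trusting the column offsets.

    import numpy as np

    def read_v2(path):
        """Parse a GHCN v2 file into {station_id: {year: [12 monthly degC values or None]}}."""
        data = {}
        with open(path) as fh:
            for line in fh:
                sid, year = line[:12], int(line[12:16])
                months = []
                for i in range(12):
                    value = int(line[16 + 5 * i : 21 + 5 * i])
                    months.append(None if value == -9999 else value / 10.0)
                data.setdefault(sid, {})[year] = months
        return data

    def adjustment_trend(raw, adj):
        """Linear trend (degC/decade) of the annual mean difference adjusted - raw."""
        years, diffs = [], []
        for year in sorted(set(raw) & set(adj)):
            monthly = [a - r for r, a in zip(raw[year], adj[year])
                       if r is not None and a is not None]
            if monthly:
                years.append(year)
                diffs.append(sum(monthly) / len(monthly))
        if len(years) < 2:
            return None
        return np.polyfit(years, diffs, 1)[0] * 10.0  # slope per year -> per decade

    raw_data = read_v2("v2.mean")
    adj_data = read_v2("v2.mean_adj")
    trends = []
    for sid, adj in adj_data.items():
        if sid in raw_data:
            t = adjustment_trend(raw_data[sid], adj)
            if t is not None:
                trends.append(t)
    print("stations compared: %d" % len(trends))
    print("median adjustment trend: %+.3f C/decade" % np.median(trends))
    print("mean adjustment trend:   %+.3f C/decade" % np.mean(trends))

A histogram of the resulting per-station trends is what is shown in the figure below.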

Distribution of adjustment bias in the GHCN/CRU dataset

Not surprisingly, the distribution of adjustment trends² is a quasi-normal³ distribution with its peak pretty much at 0 (0 is the median adjustment and 0.017 C/decade the average adjustment – the planet’s warming trend in the last century has been about 0.2 C/decade). In other words, most adjustments hardly modify the readings, and the warming and cooling adjustments end up compensating each other¹,⁵. I am sure this is no big surprise. The point of this analysis is not to check the good faith of the people handling the data: that is not under scrutiny (and not because I trust the scientists but because I trust the scientific method).
The point is actually to show the denialists that going probe after probe, cherry-picking those with a “weird” adjustment, is a waste of time. Please stop the nonsense.

Edit December 13.
Following the interesting input in the comments, I added a few notes to clarify what I did. I also feel I should explain better what we learn from all this, so I am adding a new paragraph here (in fact, it’s just a comment promoted to a paragraph).

How do you evaluate whether adjustments are a good thing?

To start, you have to think about why you want to adjust data in the first place. The goal of the adjustments is to modify your readings so that they can be easily compared (a) intra-probe and (b) inter-probe. In other words: you do it because you want to (a) be able to compare the measurements you take today with the ones you took 10 years ago at the same spot and (b) be able to compare the measurements you take with the ones your next-door neighbour is taking.

So, in short, you do want your adjustments to significantly modify your data – this is the whole point of it! Now, how do you make sure you do it properly? If I were in charge of the adjustments I would do two things. 1) Find another dataset – one that possibly doesn’t need adjustments at all – to compare my stuff with: it doesn’t have to cover the entire period, it just has to overlap enough to be used as a test for my system. The satellite measurements are good for this. If we see that our adjusted data agree well with the satellite measurements from 1980 to 2000, then we can be pretty confident that our way of adjusting data is going to be good also before 1980. There are limits, but it’s pretty damn good. Alternatively you can use a dataset from a completely different source. If the two datasets arise from different stations, go through different processing and yet yield the same results, you can go home happy.

Another way of doing it is to remember that a mathematical adjustment is just a trick to overcome a lack of information on our side. We can take a random sample of probes and do a statistical adjustment. Then go back and look at the history of the station. For instance: our statistical adjustment is telling us that a certain probe needs to be shifted by +1 in 1941, but of course it will not tell us why. So we go back to the metadata and we find that in 1941 there was a major change in the history of our weather station, for instance a war and the subsequent relocation of the probe. Bingo! It means our statistical tools were very good at reconstructing the actual events of history. Another strong argument that our adjustments are doing a good job.

Did we do any of those things here? Nope. Neither I, nor you, nor Willis Eschenbach, nor anyone else on this page actually tested whether the adjustments were good! Not even remotely so.
What did we do? We tried to answer a different question, that is: are these adjustments “suspicious”? Do we have enough information to think that scientists are cooking the data? How did we test that?

Willis picked a random probe and decided that the adjustments he saw were suspicious. End of story. If you think about it, his whole post is concentrated around figure 8, which is simply a plot of the difference between adjusted and raw data. There is no value whatsoever in doing that. I am sorry to be this blunt on Willis – but that is what he did and I cannot hide it. No information at all.

What did I do? I just took a step back and asked myself: is there actually a reason in the first place to think that scientists are cooking the data? I did what is called a unilaterally informative experiment. Experiments can be bilaterally informative, when you learn something no matter what the outcome of the experiment is (these are the best); unilaterally informative, when you learn something only if you get a specific outcome and otherwise cannot draw conclusions; or not informative at all.
My test was to look for a bias in the dataset. If I were to find that the adjustments introduce a strong bias, then I would know that maybe scientists were cooking the data. I could not be sure about it, though, because (remember!) the whole point of doing adjustments is to change the data in the first place! It is possible that most stations suffer from the same flaws and therefore need adjustments going in the same direction. That is why, if my experiment had led to a biased outcome, it would not have been informative.
On the other hand, I found instead that the adjustments hardly change the value of the readings at all, and that means I can be pretty positive that scientists are not cooking the data. This is why my experiment was unilaterally informative. I was lucky.

This is not a perfect experiment, though, because, as someone pointed out, there could be a caveat. One caveat is that in former times the distribution of probes was not as dense as it is today and, since global temperature is calculated by spatial averaging, you may over-represent warming or cooling adjustments in a few areas while still maintaining a pretty symmetrical distribution. So, to test this you would have to check the distribution not for the entire sample, as I did, but grid cell by grid cell. (I am not going to do this because I believe it is a waste of time, but if someone wants to, be my guest.)

Finding the right relationship between the experiment you are doing and the claim you make is crucial in science.

Notes.
1) Nick Stokes, in this comment, posts R code to do exactly the same thing, confirming the result.

2) What I consider here is the trend of the adjustment, not the average of the adjustment. Considering the average would be methodologically wrong. This graph and this graph both have an average adjustment of 0, yet the first one has trend 0 (and does not produce warming) while the second one has a trend of 0.4 C/decade and produces 0.4 C/decade of warming. If we were to consider the average we would erroneously place the latter graph in the wrong category.
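A toy illustration of the same point (a sketch of mine, assuming only numpy, not one of the scripts from note 4): two adjustment series that both average zero, only one of which introduces a warming trend.

    import numpy as np

    years = np.arange(1950, 2000)
    # Alternating +0.5/-0.5: average zero and essentially no trend.
    flat = np.tile([0.5, -0.5], len(years) // 2)
    # Linear ramp from -1 to +1: average zero but a clear warming trend.
    ramp = np.linspace(-1.0, 1.0, len(years))

    for name, adj in [("flat", flat), ("ramp", ramp)]:
        trend = np.polyfit(years, adj, 1)[0] * 10  # slope in degC per decade
        print("%s: mean=%+.2f  trend=%+.2f C/decade" % (name, adj.mean(), trend))

The first series would fall in the harmless central bin of the histogram above; the second would not, even though their averages are identical.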

3) Not mathematically normal as pointed out by dt in the comments – don’t do parametric statistics on it.

4) The python scripts used for the quick and dirty analysis can be downloaded as tar.gz here or zip here

5) RealClimate.org found something very similar, with a more elegant approach and on a different dataset. Again, their goal (like mine) is not to add pieces of scientific evidence to the discussion, because these tests are actually simple and nice but, let’s face it, quite trivial. The goal is really to show the blogosphere what kind of analysis should be done in order to properly address this kind of issue, if one really wants to.