I’ve been talking to @tassie_gal on Twitter about the relationship between confidence and doubt while doing a PhD. But my thoughts didn’t coalesce until last week, when a scientist told me a story about plants.

A friend of this scientist did his PhD on the use of hormones to promote plant growth. The first part of the study was to grow a series of plants using a method published in an earlier paper. The method was meant to yield 15 new plants per round, but the plant researcher only got 8. Perplexed, he threw out his materials and tried again, only to get the same result. This time he assumed the temperature was wrong, threw everything out and started over. Once more: the same disappointing result. For a year he fiddled around, trying to get the expected 15 plants, but he never did.

Later the plant researcher happened to go to a conference and met the person who did the original study. He took the opportunity to describe his failures in exhausting detail and ask for help. The person who did the original study merely blinked and said “oh yes, that experiment didn’t work particularly well”. As it turned out, the original researcher never did get 15 plants using that method.

In other words, the original researcher lied.

It’s tempting to view the PhD researcher in this story as a bit of an idiot for assuming he was doing something wrong, but this would overlook the fact that the written word can have immense persuasive power. And it’s not just what is written on a page which can lead us researchers astray – it’s the ideas which get stuck in our heads.

For example, it is not uncommon for PhD students to turn up to our statistical consulting service asking the mathematicians to ‘fix’ their results, when the results are, in fact, correct. You would expect the students to be relieved to find out they did their analysis right, but apparently many will still insist the numbers must be wrong because they didn’t ‘fit’ the hypothesis being tested.

We need preconceptions – let’s call them hunches – to get going in the first place, but problems can develop when we hold on to them too tightly. I was reminded of this recently, while working on my current research project about PhD students and progress reports.

Twice a year we ask our PhD students to fill in a progress report accounting for how they spent their time and what they will do next (you may have a similar system at your university). Administrators and supervisors complain that progress reporting is a meaningless ‘rubber stamp’ exercise which should be changed, or even abandoned, so we decided to study it and see what could be done.

Our focus groups confirmed that students felt the same way as the administrators and supervisors: the progress reporting procedure was largely meaningless. However, we were wrong to assume that students would therefore want to change the system. Many LIKED that it was a rubber stamp exercise. It seems the mere act of writing a plan can be psychologically reassuring, and the administrative meaninglessness of the reports meant that no one would attack the students when their plans didn’t translate to reality.

I puzzled over how to understand this until I realised that all our stakeholders were being pragmatic, but ‘pragmatic’ meant something different to the students than it did to the administrators and supervisors. In retrospect this explanation was blindingly obvious, but it took an embarrassingly long time to come to me because I was thinking with my hunch rather than looking at the data. In fact, I was thinking like an administrator, not a researcher (oh, the shame!). Once I had become aware of this tendency, the rest of the analysis came easily. I just assumed that my first thought would be wrong and looked for other explanations.

We don’t often think about how useful these kinds of errors can be, if they are taken seriously. In his new book ‘Where Good Ideas Come From’, Steven Johnson makes some interesting observations on the nature of error and creativity, citing research on how people free associate from trigger words. Presented with the word ‘green’, 40% of people will say ‘grass’; shown the colour blue, 80% will suggest another colour or say the word ‘sky’. Only a few people will volunteer words like ‘Ireland’, ‘leaves’ or ‘jeans’.

It would be easy to assume that the outliers are naturally more ‘creative’, but it seems that those of us who would choose ‘sky’ are not so pedestrian after all. In another experiment, people were exposed to the colour blue while sitting in front of a screen with a group of actors. The actors insisted that the colour was red, which made people doubt that their initial perception of blue was correct. When these people were later asked to free associate, they produced more ‘outlier’ responses. While Johnson points out that too much error can be fatal, a little ‘noise’ in the system can be good.

In other words, assuming you are wrong can make you more creative.

There’s certainly comfort in conforming to existing theories and ideas, rather than challenging them. It takes confidence to take ‘wrong’ results seriously because you have to examine your own biases. If our hapless plant researcher had had more confidence in his own ability, he wouldn’t have wasted a whole year. But I think his story shows us that confidence can sometimes be in short supply when you are doing your PhD.

Which leaves me with a final thought: does a lack of confidence stem, at least in part, from a fear of being examined? Perhaps in our heart of hearts we still view this last step of the PhD as a ‘test’ we have to pass, rather than a review process that ensures our work is the best it can be? I’m not sure. In lieu of an answer, I can only say: try to have confidence in your doubt, and doubt in your confidence!

 
