Lately I have been asked by various people to help them make better use of technology in their academic work – or at least write a post about it.

It seems I have developed a reputation as the ‘technology expert’ and, lately, the ‘social media expert’ in my workplace. This is not a new experience; throughout my working life I have been the ‘go to’ person when people have technology problems or want to try new ideas. I find this curious, because it is not how I think of myself. I will admit to being an ‘early adopter’ and being fascinated by the Shineys (new gadgets). I’m always looking for new ways to use computers in my work, but I have spent my life around true geeks. I have high standards as to what ‘geek’ really means, and it usually involves being able to program UNIX.

My dad studied to be a dye house chemist in the 1960s, but became a computer programmer in the late 70s. His first job involved looking after mainframes, so I spent my childhood making necklaces out of punched cards. But even though computer geekery is clearly in my blood, I have never become a proper computer nerd. Despite a brief addiction to Zork in the 80s, I never really got into computer games, nor did I display much aptitude for computer programming – much to my father’s disappointment (I did marry a computer programmer though, which cheered the old man up a bit).

So when I get asked to share the knowledge with my colleagues – which I am always happy to do – I try to explain that it is not software proficiency which makes me ‘geeky’, but the attitude to technology I developed in my childhood. I think this attitude is best summed up by a conversation I had with my father when he sat me down in front of our family’s first PC, sometime in the early 80s. As I recall, it went something like this:

Dad: “We’re going to learn ‘Basic’. Type: 10 print "hello world"”
Me: “What if I hurt it?”
Dad: “What do you mean?”
Me: “What if I kill its brain?”
Dad: “You can’t kill it. It’s not alive. It doesn’t have a brain.”
Me: “Then how does it, you know – do stuff – if it doesn’t have a brain?”
Dad: “The computer is stupid. It only does what you tell it to do. The only way you can hurt it is to type ‘format c:’.”
Dad: “Please don’t do that, by the way.”
Me: “OK. Can I play Zork now?”

In retrospect, this was possibly the most important conversation I have ever had with my father. He gave me the confidence to face new software without fear. I jump in and fiddle around, break some stuff and eventually work out how to use it – with the comforting knowledge that nothing I do (except typing ‘format c:’) will cause lasting damage. Technology isn’t scary, but it isn’t that special either; computers are stupid – humans are smart. When it comes to ‘working more effectively with technology’ I try to think of the problem before the tool – and be open to the idea that technology is not always the answer.

This approach works best if you take the time to really understand the nature of your problems. Let’s look at three problems you definitely have and how they might be solved with technology:

The Information Problem

Researchers have to collect information from various sources: books, journal articles, interviews, experiments, artefacts and so on. Storing this information and finding it again is obviously a problem, but there’s a larger problem lurking: how to make sense of the information. Specifically, how to make connections between pieces of information and your own thoughts in order to come up with original ideas.

I’ll admit to being old school and keeping a journal, but I rarely transcribe what I write there into the computer, or even look at it again, to be honest with you. I used to worry about this, but I’ve come to accept that the act of writing is important in helping me to remember and understand what I hear or read. The information I wrestle with most is in electronic form – there’s so damn much of it, and bookmarking is inadequate as a way of retrieving and using it.

One way I solve this problem is with Evernote, a free online database application. You can store random webpages, pdfs, images and notes, which appear as little thumbnails in the viewer; these can be arranged and viewed in different ways. Evernote is a ‘cloud app’, which means I can use it from any computer or my phone. That’s handy, but its key advantage is that you can ‘tag’ ideas with keywords. This means you can store multiple sorts of information in ways which are meaningful to you – and start to see the connections between them.

The Reading Problem

Researchers have to read. A lot. Again the problem is twofold: managing the sheer volume of references, and reading them efficiently.

Most researchers use bibliographic software to store references (if you don’t, you really should) and most universities support Endnote. People tell me Zotero is better and I see the appeal, but I like Mendeley; mostly because it works a bit like iTunes (I like being able to make ‘playlists’).

Reading efficiently is an art. I’ll admit to ‘surface reading’ most of it and ‘deep reading’ only what’s interesting, but it’s still an enormous task. If you take it seriously, reading inevitably leaks into every corner of your life. I read on public transport, in waiting rooms, in playgrounds, while cooking and even at parties (ok – boring ones). Printing your reading material out makes it portable, but then the article is ‘offline’ and the notes can get lost. This is where an e-reader and a tool like Instapaper or Calibre can be useful. Mr Thesis Whisperer recently bought me a Kindle (which I LOVE); both these tools will turn a webpage into a Kindle document, which I can then write notes on.

The Writing Problem

I’ve written previously about Scrivener, which I think addresses many problems of research writing better than MS Word. However, the other problem with writing is that it can be arduous – I have tendinitis from my PhD and a sore back, and I find it physically painful to write at times. On the advice of Paul Gruba and @sadistician I have started using the built-in speech recognition capabilities in Windows to talk my first draft straight into MS Word (thank you Microsoft – you still totally rock). I then transfer the text into Scrivener. This doesn’t solve the writing/pain problem entirely, but it’s a significant improvement.

So how about you? Do you use technology to solve these kinds of problems? Or do you have ‘analog’ techniques which are just as effective? I’d love to hear about them.

Related Posts

What Shiny should I buy?

5 phone apps for researchers

5 ways to avoid death by email
