We don’t only use language to get a message across – the way we speak also sends a message to the listener about how old we are, where we come from, which social class we belong to, whether we’re relaxed or not, and so on. People have been interested in the differences in the way we speak for millennia – just go back to the Tower of Babel for an example – but it’s only in the last 150 years that these differences have been systematically documented, in the field of dialectology. From the late 19th century onwards, fieldworkers would literally get on their bikes, armed with paper and pencil, and make their way to the farms around Leeds, Kent and Aberdeen to record the peculiar words and phrases used by speakers in these rural areas: quine for girl, mardy for grumpy, oop t’ poob for up to the pub.
Fast forward to the 1950s, and access to new technologies radically changed the way that speech data were documented: from pencil and paper to reel-to-reel tape, a transformation from written to audio records. Researchers could now capture not just individual words and phrases but hours of running speech in one sitting. The amount of data grew accordingly, as did the software for analysing these much larger corpora. This changed the face of linguistic research, making it possible to map patterns of language use across time and space.
And where are we dialectologists now, in the 21st century? In the midst of yet another revolution in the analysis of speech data, driven by digital technologies. We still go out into the field armed with recorders, but these are now state-of-the-art DAT machines which fit in your pocket yet provide studio-quality sound. And once these nuggets of speech gold get back to the lab, we no longer get out our pencils to transcribe what we hear; instead we turn to speech-to-text software to create easily searchable corpus data. Nor do we plough through vowel measurements by hand, painstakingly coding each sound we hear. Instead we run the data through ‘forced alignment’ software, which turns 200 hours of manual work into one hour of automated (and probably more accurate!) analysis.
Last week we reviewed what the two research assistants on a project I’m running had done in the last month. To my delight/dismay, I found that they had completed in 4 weeks what took me 4 years for my PhD. More accountable, more easily replicable data in one hundredth of the time… wow. I’m excited to see what digital technology brings next to linguistic research.
Text by Dr Jennifer Smith, Reader, English Language.