Blinded by methodology
This week, with great fanfare, The New Economics Foundation has released its latest ranking of ‘clone towns’ – all those town high streets that have the same old retail stores from the big chains. And the most clone-like town is… [drum roll]… Cambridge.
Cambridge?! Have they ever been there? I studied there for 3 years and have been back many times since, so I know it pretty well. The city centre is built around a thriving open-air market, from which twisting mediaeval streets meander off towards the venerable colleges and the river. Those winding streets hold a cornucopia of interesting shops, from tweedy gentlemen's outfitters to second-hand camera stores. It couldn't be further from the average bland high street. Given the historic nature of the centre, what big chains there are have been rammed into a couple of uninspiring shopping arcades and a couple of wider streets.
And herein lies the problem, it seems. NEF were only considering high streets, so in Cambridge's case they appear to have chosen what looks like the average high street and disregarded the rest – i.e. the diverse majority. Their methodology, applied to that fraction of the city centre, concluded that it lacked diversity – quelle surprise. I asked them why their report didn't even mention the market and they replied that they weren't considering markets. Which is a bit like saying a deaf person doesn't have a disability because you're not considering hearing in your analysis.
Why am I ranting about this? Because again and again I see people with blind faith in their methodology to the point of stupidity. Years ago I was invited to take part in a stakeholder consultation on which of 12 industrial sectors the Government should prioritise for reducing hazardous waste. About 40 of us assembled in London, diligently went through the scoring process and came up with a ranking from 1 to 12. At the end they showed us the ranking their internal assessment had given, which was almost identical but with No. 1 and No. 12 transposed. They aggregated the two sets of scores and ended up with a final ranking identical to their initial results. Their No. 1 had not dropped a single place despite us ranking it 12th. "It looks as if you agree with us!" the organiser declared smugly. There was an uncomfortable pause, which I broke. I started off politely, but as they waffled on, I declared that the day had been a waste of time as our opinions could never change the result. They refused to accept this, because they had developed a methodology and they were sticking to it.
In green business, the biggest risk is Life Cycle Assessment (LCA). We are constantly bombarded with studies that "prove" or "disprove" X, Y or Z, and if you watch long enough you'll see plenty of proving and disproving of the same thing. My own experience of LCA, from my MPhil on the subject, is that it is highly dependent on the 'system boundary' you draw around the subject of your analysis and the other assumptions you make. I developed an LCA model for very large products and tested it on a ship. The model did its own sensitivity analysis, and most of the input variables with the greatest effect on the results turned out to be highly uncertain assumptions I had made – for example the life span of the ship (which depends on scrap metal values as much as anything else) or the transatlantic journey I had assumed for it (which depends on world trade patterns). My model couldn't handle these factors, so my methodology (and its limitations) was driving the results, not real-world data.
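To see how a couple of uncertain assumptions can dominate an LCA result, here is a minimal one-at-a-time sensitivity sketch in the spirit of the ship example. All the figures and the simple emissions model are illustrative placeholders I've invented, not the model or data from my MPhil work:

```python
# One-at-a-time sensitivity sketch for a toy ship LCA.
# Every number below is an illustrative placeholder, not real data.

def lifecycle_co2_per_tonne_km(build_co2_t, annual_voyages, voyage_km,
                               cargo_t, lifespan_years, co2_per_km_t):
    """Total lifecycle CO2 (build + operation) divided by total freight work."""
    operating_co2 = annual_voyages * voyage_km * co2_per_km_t * lifespan_years
    freight_work = annual_voyages * voyage_km * cargo_t * lifespan_years  # tonne-km
    return (build_co2_t + operating_co2) / freight_work

# Hypothetical baseline assumptions for a large cargo ship.
baseline = dict(build_co2_t=50_000, annual_voyages=20, voyage_km=6_000,
                cargo_t=40_000, lifespan_years=25, co2_per_km_t=0.3)

base = lifecycle_co2_per_tonne_km(**baseline)

# Perturb each uncertain assumption by -50% / +50% and record the swing.
for name in ("lifespan_years", "voyage_km"):
    lo = lifecycle_co2_per_tonne_km(**{**baseline, name: baseline[name] * 0.5})
    hi = lifecycle_co2_per_tonne_km(**{**baseline, name: baseline[name] * 1.5})
    print(f"{name}: result swings {100 * (lo - base) / base:+.1f}% "
          f"to {100 * (hi - base) / base:+.1f}%")
```

Even in this toy version, varying the assumed lifespan or route length changes the result noticeably, because the construction emissions are amortised over more or less freight work – which is exactly the problem: the swing comes from guesses, not measured data.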
So basically this is a plea not to be blinded by methodology. The results of any analysis should always be put through a common-sense filter – and if they don't feel right, check the methodology, paying particular regard to system boundaries. And if, at the end of the day, your methodology can't handle the real world, then don't pretend it can.
And, finally, I don’t care how much data is collected, aggregated and analysed, Cambridge is not a clone town. Go and see for yourself.