Data analysis


As regular readers will be aware, I have a bee in my bonnet about the need for someone to start building civic spaces online – spaces which are designed to support civic and political discourse rather than to sell us stuff. However, it's all very well having the idea – you then have to figure out how to build it.

This post provides an overview of the social media audit – a piece of research that is carried out before you set up a civic space in order to gain an objective view of who you should be including in the conversation. I use 'we' a lot in this post: though I had the bright idea of doing the audit and put some structure in place, it's my team at Public-i who have done most of the detailed development of the process. We'll be blogging more about this over at the Public-i blog, but here is the first draft of the overview that will end up in the thesis.

In essence what we are trying to do is to find the conversations which are already taking place in the local online space.  More importantly we are trying to find the active individuals in order to create a network response to civic interactions – civic spaces are going to be defined by the networks that share them as much as by the content.

A bit of background

When I started looking at this I thought about the idea in terms of government building these spaces. I was influenced by Stephen Coleman's thinking around 'A Civic Commons in Cyberspace' and also by Castells' work showing the insidious power of media conglomerates and the negative impact this has on the objectivity of the fourth estate (Castells, "Communication Power"). This, combined with the fact that I have been immersed in working with Local Government for almost the last 10 years, led me to design the "Virtual Town Hall" pilot, which you can read about here. The name really gives it away – I was imagining a civic space built by government – echoing real-world civic architecture – and then used by the public.

I persisted with this idea for a while and blamed our slow implementation of the technology for the fact that the pilot sites were not taking flight. There is no doubt that we were being slow with the technology, but I now believe the reasons for the pilot sites not getting off the ground were more complex than that, and that there were a number of issues with the way I had originally designed the Virtual Town Hall solution. The main one was that the original project design didn't have the right role for the community. We envisaged using unmoderated community content and then using community moderators or champions to widen involvement, but this was really a compromise en route to what has become the inclusion of the affordance of co-production in the final pilot sites. We have to accept that we can only work effectively with the public online if we don't try to control the conversation; the community moderators were in some ways an attempt to manage risk from the point of view of the Council, without truly considering the community's wishes.

However, once it became clear that these spaces, even if facilitated by government, needed to be equally owned by all stakeholders, another issue arose: who do we include in the conversation? The community that you contact to create the civic space is going to be integral to how it behaves, and even though we would expect participation to shift throughout the life of a civic space, that initial group is significant both in terms of how likely you are to get an independent conversation started and in terms of the tone set for the space from the outset.

It turns out that picking this group was causing project paralysis – no-one could get started until they knew who to include in the process. I'm going to do some follow-up interviews on this point, but I think the issue here was a mix of risk and representativeness. The first was a concern about making the 'wrong' choice because we weren't aware of the full picture. The second is more complex, but I think it highlights the real democratic tension here: the people who are active online are not representative of the general population, and this is both a good and a bad thing. Good in that they are more likely to be civic and active offline as well (OxIS, Coleman), but bad in that they are not, well… representative. The solution here is fiendishly simple and fiendishly difficult – involve the elected representatives – but that's for another post.

Social Media Audits – a solution to the problem

The starting point is the fact that a civic space can't be initiated until you have some idea of who might be participating. The social media audit is a response to this problem – it's a systematic piece of research that provides a representative snapshot of the local informal civic conversation, so that you can make an informed decision about who to include in the initial iteration of the civic space. Not only that – practically speaking – it gives you the list of people to contact, the conversational lures they are interested in and a view of the interactions which are already going on.

We wanted to create an objective view of what was happening so that we had a starting point for engagement with the local civic content creators. We can't expect to find everything – and the content will change from week to week – but we were looking for a way to provide a starting point that would then be built on, rather than freezing the results in time. It's important that the output of the audit also provides the means to extend and continue the search, so that the civic space is created in a state of always being open to new voices.

Objectivity is a difficult thing to achieve, as ultimately this process comes down to making value judgements about which sites should be included in the civic space. What we have therefore done is to create as robust and repeatable a process as possible around the creation of the data set, and then been as transparent as possible about how we qualify that data set down to something which is manageable for analysis and then for engagement with individuals.

This has deliberately been designed in this way, rather than as a more quantitative piece of analysis around, for example, the number of sites located in a specific area, because we are trying to uncover individuals with specific intents rather than just the content they are creating online – we are trying to connect to people as well as places.

What are we looking for?

The audit is designed to find not only an overview of the informal civic participation in the area but specifically to focus in on the significant content creators who will be the most vocal contributors to the civic space. The choice of the word ‘significant’ is deliberate here – we’re not trying to judge influence – just activity.

Significance is a fairly subjective term, so we try to define it with the site hosts to make sure we have a clear idea of what we are looking for. Once a site has been found via the relevant search terms, then broadly we are after:

  • Persistence – we are looking for sites and individuals that are active over a reasonable period of time – or are linked to a specific campaign – not two-post blogs that have been set up with the best intentions
  • Audience – we can’t easily judge audience but we are looking for indications that the creator is aware of an audience and wants to interact with it
  • Constructive – we are looking for voices that want to improve their community not just complain

This last one is the most difficult – judging intent from content is extremely tenuous. Another way of looking at this is to say that we are looking for content and creators who would satisfy a simple code-of-conduct test for any community website. Codes of conduct exist to ensure that interactions are respectful and do not violate some basic principles. The point of this filter is to rule out, from the start, the people who would fail that test. I see this as largely a pragmatic decision – no council is going to put together a civic space which includes inappropriate content from the start – but it's one that needs to be kept under review to make sure that the space remains inclusive and open.

It's also worth noting that we usually issue a health warning with respect to language – online language can be robust, but this content still needs to be included. The issue of language is a cultural one: you need to understand that the social web can use a different tone to the one the more formal world is used to.
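
To make the qualification filter concrete, here is a minimal sketch of how the three significance criteria might be encoded – the field names and thresholds are my own illustrative assumptions, not fixed parts of the audit:

```python
from dataclasses import dataclass

@dataclass
class Site:
    url: str
    months_active: int        # persistence: how long the site has been posting
    post_count: int
    engages_audience: bool    # audience: does the creator interact with readers?
    constructive: bool        # a manual judgement made during qualification

def is_significant(site: Site, min_months: int = 3, min_posts: int = 5) -> bool:
    """Rough significance test: persistent, audience-aware and constructive."""
    persistent = site.months_active >= min_months and site.post_count >= min_posts
    return persistent and site.engages_audience and site.constructive
```

On this reading, a two-post blog set up with the best intentions fails the persistence test however constructive it is, while a long-running constructive site that ignores its readers fails on audience.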

What’s the process?

To state the obvious – the internet is huge – and if you try to do this on a rolling basis you just keep searching forever. Instead we create a snapshot which we know will not include all the content, but which will be representative of the local civic space to an acceptable degree. Here is how we go about creating that snapshot:

  • Define a matrix of search terms: the point about language above is relevant from the outset of the audit. The process starts with a definition of search terms based on place and topic. We are trying to identify the language that local residents are using to talk about where they live and about current affairs. We are seeking the stories that are currently active, as these are the ones which illuminate activity.
  • Create a data set: we then use a combination of advanced Google searching and link analysis to create an initial data set. This can be done largely automatically; the data set then gets deduplicated and cleaned up. This step may create a data set of over 1,000 sites.
  • Qualify the data set: once we have the data set narrowed down to around 300–400 sites, there is a manual qualification task – the really time-consuming bit – where we check each site against the significance criteria and also categorise it by place, topic, type of site and a few other metrics. We also highlight interesting examples – and the downright odd stuff that you find online.
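
The deduplication step can be sketched roughly like this – a hypothetical normalisation that collapses trivial URL variants (www prefixes, trailing slashes, scheme differences) before keeping the first occurrence of each site:

```python
from urllib.parse import urlparse

def normalise(url: str) -> str:
    """Reduce a URL to a lower-case host + path so trivial variants collapse."""
    parsed = urlparse(url if "//" in url else "//" + url)
    host = (parsed.netloc or "").lower().removeprefix("www.")
    return host + parsed.path.rstrip("/")

def dedupe(urls):
    """Keep the first occurrence of each normalised URL."""
    seen, out = set(), []
    for url in urls:
        key = normalise(url)
        if key and key not in seen:
            seen.add(key)
            out.append(url)
    return out
```

In practice the cleanup is messier than this (subdomains, redirects, the same author on several platforms), but the principle is the same.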

At this point we would hope to have a well-qualified data set of around 200 sites that gives us a good overview of the local informal civic activity. We do not know if these numbers are going to provide a useful benchmark – we've run the process a number of times now and they seem consistent, but we expect them to keep increasing. However, at the moment, we believe that 200 sites for a county or urban area is a reasonable benchmark to work against.

And the analysis

This is my favourite bit…

Once we have a coded-up spreadsheet we can do some straightforward statistical analysis and look at the spread of sites and content creators in terms of location and topic. We can see what proportion of activity is on Facebook, for example (yes – we even search there), examine interactions on local media sites and see if there are pockets of activity around a specific place. We then use this to identify clusters of sites for a short case-study analysis – which is really focused on looking at what is causing the cluster and how it might be used to introduce the group into the civic space.
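
The statistical side really is straightforward counting; as an illustration (the coding fields here are invented for the example), the spread of sites across any coded field is just a proportion count:

```python
from collections import Counter

def spread(rows, field):
    """Proportion of audited sites taking each value of a coding field."""
    counts = Counter(row[field] for row in rows)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

# Toy coded data set: one dict per qualified site.
audit = [
    {"place": "town centre", "type": "facebook"},
    {"place": "town centre", "type": "blog"},
    {"place": "suburb", "type": "blog"},
    {"place": "town centre", "type": "local media"},
]
```

Calling `spread(audit, "type")` on this toy data would show half the activity on blogs and a quarter on Facebook; the same call with `"place"` surfaces the geographic pockets of activity.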

The other piece of analysis is to use Twitter as the starting point of a social network analysis of the space. This is really just a starting point, and can be considered a snowball approach to an open network (Wasserman) rather than a real piece of SNA, but what it does show is the potential reach of the civic creators. For my own research purposes I then ask the civic creators we have found to complete a more thorough social network analysis questionnaire, which looks more deeply at not just their online but also their offline networks.
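
The snowball approach can be sketched as a breadth-first expansion out from the seed creators – here over a hypothetical follower graph held as a plain dictionary, since the point is the sampling logic rather than any particular Twitter API:

```python
from collections import deque

def snowball(graph, seeds, max_depth=2):
    """Breadth-first snowball sample: start from the civic creators we found
    and pull in whoever they are connected to, up to max_depth hops out."""
    seen = set(seeds)
    queue = deque((seed, 0) for seed in seeds)
    while queue:
        node, depth = queue.popleft()
        if depth == max_depth:
            continue  # don't expand beyond the sampling boundary
        for neighbour in graph.get(node, ()):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, depth + 1))
    return seen
```

The size of the returned set, relative to the seed list, is a rough measure of the "potential reach" mentioned above.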

What don’t we do?

In developing the audit process we considered using semantic analysis tools, but in the end concluded that they didn't offer the sophistication of search combinations that we were after and, more importantly, that they are designed to find content rather than individuals.

I think we could probably use more of the mainstream analysis tools, but to date we have not found anything that delivers what we are after. We'll keep researching this, and I will post findings when I have time.

It may be possible to achieve the same result through word of mouth rather than this fairly labour-intensive research – i.e. by asking community participants to self-report activity. My concern with this approach is that many of the sites we find do not really describe themselves as civic – they are just people doing something they think is interesting, and they don't feel the need to define it.

And the impact?

It's too early to say what the overall impact will be on the civic space, but we have definitely succeeded in overcoming the project paralysis issue, and we have also been able to shape appropriate approaches and messages to involve these content creators in the initial proposition of the shared civic space. I wouldn't want to try to instigate a site without doing this kind of research: we have not yet failed to turn up content and individuals that the host was not aware of before.

Even without knowing what impact it will have on the civic space, it's clearly a really effective way of getting a feel for the local activity in order to shape any kind of intervention online.

It's also an excellent way to deal with the people who are still saying that they don't need to engage online – this is as robust a process as we can make it, and it is carried out using search terms that the host defines: excellent and relevant local facts to put in front of anyone who thinks digital engagement is still optional at this point.

We are going to continue to work on the process, and on automating it where possible. We are also trying to build in the idea of 'discovery', where we start to set the civic space in listening mode in order to uncover new civic voices, but it's still early days – I'll keep you posted.

As ever comments on this are very welcome.


Right – I may have gone a little mad today, but I have assembled a LOT of facts into a new page (it was too big even for me to post as a blog entry). So I present, with fear and trepidation, Facts Glorious Facts for you to enjoy… I am not yet happy that this has any kind of narrative, and if there are any statisticians out there who would enjoy correcting me then please feel free! I will be updating the page as I find new stuff (@DemSoc has been tantalising me with the possibility of a political party / Tolkien society comparison, which is even better than the fact that the Swedish Pirate Party now has the largest grassroots membership of any political party in Sweden)

One other disclaimer – there are a lot of graphs as images in here. I know this is not ideal, but I don't have time to turn them into real data right now… Otherwise I hope you find this helpful… always good to have your facts straight

More work on the theoretical framework which will be dull for anyone who is not interested in trying to come up with a way of analysing behaviours with the hope of influencing them – this is very much an action research post for my purposes.

I had a lovely Easter (and I actually mean it) reading various articles on social capital, as well as Robert Putnam's excellent book of articles "Democracies in Flux". I am trying to define my framework for analysis, as I am about to start data collection from the Virtual Town Hall pilot sites and I want to be sure that the questionnaire is going to give me the right data (rather than loads of interesting data I can't then use). In doing this I am looking to reconcile the following issues:

  • How do I define the different contexts that people operate in? With social media it is possible to have an informal chat interlinked with a formal debate – how do we create measurement tools that accommodate this?
  • How do I separate tools from behaviours (it's not about what you use – it's how you use it)?
  • The ladder of participation model (and indeed most frameworks) seems to me to inherently value contributions at the top more than those lower down – I want to appreciate the lurkers and the listeners, who may just vote, but do so in an informed way.
  • How do I measure it all so that I can then see if I have had an effect?

The categorisation of informal social / informal civic / formal consultation / formal democratic works well for me. It illustrates a growth in commitment towards democratic engagement, set against a context of growing distrust in politics, without devaluing the participation that happens before that point. Here is a slightly expanded description of the different categories:

  • Informal Social: Interactions with your friends and family
  • Informal Civic: Interactions with a community about issues which concern the local civic space, or a wider single issue, in some way
  • Formal consultation or civic: I am questioning whether I really mean formal consultation – I am actually trying to define formal civil society, where interactions happen within some kind of formal context in which they can be taken into account by decision makers. Formal consultation is one of these contexts, but others might be housing associations, PCT boards, justices of the peace and other formal but not necessarily representative roles.
  • Formal democratic: Defined by the involvement of the representative – and flowing from any decision that needs to be made by them.

This categorisation can be supported by the social capital literature, which describes the difference between informal and formal social capital (this is referenced in the Putnam book as well as other articles I have been reading this weekend). However, social capital is a measurement or outcome – and it, like this categorisation, does not actually provide me with the framework I need to look at whether the social web provides us with the opportunity to create online civic spaces which connect these interactions, so that the measurable levels of interaction at the less formal end of this analysis have a positive effect on the volume of interactions at the more formal end.

However, if I can describe interactions / actions which are typical of each of these categories, I will be able to see if the space we have created has had an effect on the volume (assuming a baseline and re-sample during the project period). The problem then is that a specific action cannot necessarily be considered to fall into just one of these categories, as most actions (for example, commenting on a blog) can happen within different contexts.
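
One way to handle this when coding the data is to map each action to a *set* of possible categories rather than a single label – the assignments below are purely illustrative:

```python
# Hypothetical coding table: one action can sit in several contexts at once.
ACTION_CONTEXTS = {
    "comment on a blog": {"informal social", "informal civic", "formal consultation"},
    "sign a petition": {"informal civic", "formal consultation"},
    "vote": {"formal democratic"},
}

def possible_contexts(action):
    """Return every category an action could plausibly fall into."""
    return ACTION_CONTEXTS.get(action, set())
```

Which category actually applies to a given instance then becomes a judgement about its context, not a property of the action itself.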

Here is my recent long list, which I have now organised against the Forrester Groundswell framework, as well as adding in some more proactive actions (thanks to Phil Green for the suggestion):

Creators
  • Formal: Start a petition
  • Informal: Instigate / run a campaign; social reporting (blogging / tweeting re: local issues); managing a hyperlocal website; organise a community meeting

Conversationalists
  • Formal: Interact with a member
  • Informal: Share something from the Virtual Town Hall with someone else; tweet VTH topics

Critics
  • Formal: Rate a comment on a discussion board (within VTH); rate a comment on a blog (within VTH); comment on the discussion board (within VTH); rate a webcast (or a meeting); comment on a blog (within VTH); comment on a webcast
  • Informal: Comment on a blog (outside VTH); comment on the discussion board (outside VTH); rate a comment on a discussion board (outside VTH); rate a comment on a blog (outside VTH); rate a YouTube clip; comment on a YouTube clip

Collectors
  • Formal: Save something to your user profile; sign up for alerts
  • Informal: Subscribe to an RSS feed etc. from a social reporter

Joiners
  • Formal: Sign up to attend an event; sign a petition; create a user profile
  • Informal: Join a discussion forum (outside VTH)

Spectators
  • Formal: Watch a webcast event; attend a formal meeting
  • Informal: (none listed)

Inactives
  • Not voting… or anything else

I have not split informal / formal down into my narrower categories – mainly because it will not fit on the page for now! Interestingly, I am not sure where to place 'stand for election' in this list – it's probably most naturally something for Creators, but could also be considered Conversationalist. Perhaps the point here is that this is something beyond usual social web behaviour, as it is considerably more structured / formal than this framework is supposed to analyse – more thinking to be done here.

This is a useful exercise in that it starts to give us some kind of way of judging democratic behaviours against the social web norms described by Forrester. I now need to research the Forrester model a bit more and consider it against something like the OFCOM equivalent, as well as looking for more academic models (which I haven't found as yet). I include the OFCOM overview below:

OFCOM Social networking profiles
The qualitative research suggests five distinct groups of people who use social networking sites:

  • Alpha Socialisers – mostly male, under 25s, who use sites in intense short bursts to flirt, meet new people and be entertained.
  • Attention Seekers – mostly female, who crave attention and comments from others, often by posting photos and customising their profiles.
  • Followers – males and females of all ages who join sites to keep up with what their peers are doing.
  • Faithfuls – older males and females generally aged over 20, who typically use social networking sites to rekindle old friendships, often from school or university.
  • Functionals – mostly older males who tend to be single-minded in using sites for a particular purpose.

So – where does this get me? What I have done is to take my democratic categorisation and apply it to a social web typology, rather than attempting to fit social web behaviour onto a participation framework – which is what I was doing with the ladder of participation. Mmmmm… this feels like a stronger direction to me, as the main thrust of what I am looking at is whether it is possible to use the proven participation in social media at a social level in order to increase levels of measurable democratic engagement. To do that, perhaps the most important thing is to work out how to measure informal civic as opposed to informal social interactions. To do this we need to look at:

  • Geography – civic engagement requires a democratic unit – which is described by location
  • Intent – participants need to be trying to have an effect or influence on their community – or rather they need to be constructive in their comments and observations
  • Accountability – one of the key elements of civic rather than social participation online must be the management of identity and the fact that you need to be traceable as a citizen in order to have influence on the democratic unit (lots more to say on this at some point)
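
Put as a crude test (and it is crude – each of these is really a judgement call rather than a boolean), an interaction counts as informal civic rather than informal social only when all three conditions hold:

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    in_democratic_unit: bool    # geography: tied to an identifiable location
    constructive_intent: bool   # intent: trying to influence the community
    traceable_identity: bool    # accountability: identifiable as a citizen

def is_informal_civic(interaction: Interaction) -> bool:
    """Deliberately strict reading: all three criteria must hold."""
    return (interaction.in_democratic_unit
            and interaction.constructive_intent
            and interaction.traceable_identity)
```

An anonymous rant about a named place fails on both intent and accountability; a constructive, named comment with no location fails on geography.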

So the table above will help in describing the behaviours – which is useful in itself – but does not actually get at the heart of the difference between the categories, which is what I am after in order to judge whether the creation of a civic webspace makes informal civic behaviour more or less likely to turn into formal behaviours.

I think what this implies is that the baseline questionnaire is even more important than before – but what I need to do is expand the section on intent and ask questions about people's likelihood of moving on to participate formally. When I do the next round of data collection, I will be able to see if this intent has increased in the group which went on to participate within the new webspace – and if we can see increases in the measurable behaviours described above. As the baseline questionnaire will only be administered to people who are already participating in an informal civic way, this should be a good indicator. It does mean, though, that I will need to document the social web audit which we conduct in order to form these civic webspaces in the first place, as this looks at the conditions I describe above – which is no bad thing apart from the 'more work' element of it.

So – conclusions:

  • I am going to stick with the categories as described above, but then use a social media typology rather than a participation typology to describe behaviours within them
  • I am going to focus some reading on these models to finally decide which one to use
  • I'm going to review the baseline questionnaire to reflect this
  • I am going to use the initial social web audit to look at the conditions needed for each of the categories

Now – if only I was doing this full time then it would be no problem at all…..

I was in Ireland this week helping to run a Citizenscape workshop in Donegal (one of the pilot sites). We are looking to involve Youth Councillors and other young people as community moderators. (Just as an aside – I really struggle with what to call these folks as a group – "young people" makes me feel like we are talking about them as an alien race, but what else do you use? For now I will call them the folks at Donegal and you'll have to remember that they are all under 25!) It was a really enjoyable session and I am looking forward to working with this lot as they campaign around getting government buildings to use sustainable energy and getting more cycle lanes in Donegal.

Anyway – this post is really an action research note on the workshop, to help improve the format for next time and to highlight points for future research – so brace yourselves, it's long.

The aim of the workshop was threefold:

  • Identify a topic that they wanted to work with

  • Make sure they were all comfortable using all the technology involved – including filming short pieces to camera

  • Get to a common agreement around how the site would be moderated and agree some immediate actions to get things moving

Overall the workshop was run very loosely, as it's difficult to know in advance where the participants would like to focus. Next time I do this I will try to spend more time on the actual topic – we got rather carried away with the technology stuff, which was fine with this group as they were interested, but I will try to bring the balance back to the content. I think it would also be good to have a standard campaign template that people could start to complete in the workshop as a takeaway.

The first section of the day, however, was a discussion of the current web tools that the team already use – so that we could then relate them to a Citizenscape context. We organised the data into these categories:

  • 1-to-1 tools, where you know the person (or people) you are communicating with well. These tools include SMS / MMS / email / Skype.

  • 1-to-many tools, where you are in a shared space with people you may not know in person. These broke down further into two groups: social and themed.

  • The internet out in the wild, with no real social aspect. General sites and services included Google, music download services and Yahoo.

I am doing a more detailed analysis of this, as the categorisation fits in with the wider theoretical framework I am using for evaluation. However, there are a few particular themes I wanted to pull out of this session, which I will pick up on in my focus groups for the project evaluation:

  • Privacy / safety – the group were reassuringly aware of online safety and were careful about what details they revealed online.

  • Identity – they were also sophisticated about the need to have different personae online and were comfortable with the idea that you might have a specific persona for a specific purpose.

  • Space and place – in discussing the way in which we were categorising the sites/services they had all listed, there was clear agreement about the different social spheres that these worked for. There was a sense of appropriate spaces for different activities, and when we started talking about campaigning we were able to talk about how we can use these different social spheres to contact different people.

  • This is stating the obvious perhaps but there was a huge difference in the level of online skills when compared to an older group. I would like to explore this more and look at doing more mixed age groups.

  • Also stating the obvious: there was a big skills gap between the participants and the youth workers, which will need to be addressed in future iterations. Happily, the officers for this group were also really enthusiastic and used the event as a real chance to learn – but this could be a barrier with other sites and needs to be looked at.

  • There were no gamers in this group – but they said this was not typical, and we should keep an eye on this with other groups.

The other thing to note is that two of the main propositions behind Citizenscape – that you can use the social web to find people who are interested in stuff, and that you need a specific place to talk about 'civic' issues – both stood up to scrutiny here, which is reassuring.

If anyone is interested I can share the workshop plan etc.

PS If anyone from the workshop is reading this, I am very conscious that I have not met the interesting-blog criteria of having photos, and I know this is too long – I promise I will try harder next time, but you know how I like to talk!!

However tempting it is just to dive in and set projects up on the social web, you need to consider stopping to think about how you will measure and evaluate success. It's a big part of using these new tools to have a positive impact rather than just creating empty buzz.

I have been doing a few things this week that all tie together to make me think about evaluation. I'm right in the middle of writing my research proposal, and so am having to focus on how to evaluate the impact of the CitizenScape approach in an academically rigorous way. I also took part in an MJ round table event talking about the way in which the social web is being used by Local Authorities, and finally I was helping with the judging of the LGComms reputation awards. All of these things highlight the importance of figuring out how to measure the impact and effectiveness of using web 2.0 sites and technologies, and the need to bring some discipline to the process. In many ways this is a reflection of the fact that these technologies and sites are entering the mainstream – after all, if the Prime Minister can make a t*t of himself on YouTube then the possibilities for Councils are endless!!!!

What makes a good evaluation?

This is probably stating the obvious, but the key to good evaluation is knowing what you want to achieve in the first place. I think that experimentation is a perfectly good reason in its own right to try something. It's obvious that Local Authorities need to get involved in the online world, that the social web phenomenon is now too big to ignore, and I have a huge amount of respect for the Councils who are making forays into this world. However, without systematic evaluation of the impacts of these trials we are just dabbling and not really learning. It's the difference between skimming the headlines and sitting down to read a book on current affairs – you may be able to give the sound bites but you won't have any particular depth of knowledge. Clearly I am a bit biased here, as I take the importance of evaluation so seriously that I am doing the PhD – but still, evaluation matters.

What can you evaluate?

So – how can we evaluate social web projects? Many people seem to be looking at the traditional web metrics of counting things: numbers of people joining a Facebook group, number of followers on Twitter, number of views or comments on YouTube. This is one approach, but if you go back to the question of what you are trying to achieve, then the only question you can really answer with basic metrics is "did more people see my content?" – it's an advertising-eyeballs evaluation. For many marketing campaigns this might be enough, but if what you are really trying to do is reach 'hard to reach' groups or encourage some kind of participation, then you are missing both demographic and impact-assessment data. The absence of traceable / checkable demographic data is probably the biggest frustration here, and one of the main reasons why I think it will remain impossible to carry out deliberative debate on these sites – or at least deliberative debate which can then be counted strongly as part of the decision-making process. It's also one of the reasons I think the Virtual Town Hall approach is a better bet. The issue of impacts is also an interesting one. You can probably judge whether or not the numbers of people – the metrics – have affected the decision, but how can you measure whether you have affected the people? If you are trying to increase democratic participation then you probably need to know whether your interventions have made them more or less motivated to participate in the future.

Finding richer data – not just a head count

Richer data of course means more work. You probably need to do a survey and hound people to answer it, and you should also run some actual focus groups (yes, face-to-face evaluation of an online project – oh the irony!). My basic plan is to gain an initial baseline of participation, both in democracy and online generally, from as large a group as I can, and then re-sample this group at the end of the project (and again in the middle if the elapsed time is more than a few months). I will use this survey as a recruitment tool to find out who is willing to be interviewed or to join a focus group. Simply put, this approach breaks down like this:

  • Web metrics will show you how many actions have been carried out

  • Surveys will show you who has done this, and some basic motivations for their actions

  • Interviews will allow you to get a sense of changes in attitudes

Hopefully this balances the need not to overburden the team with work against the need to actually find out more about the people and their reasons for being involved.  I am currently working on a baseline questionnaire and hope to have it out in the world fairly soon.

Analysis: Find a framework and stick to it

So, now we have a lovely lot of data: what are we going to do with it? The chances are you will not be thinking of one large pilot so much as a series of smaller projects. In which case a standard evaluation framework (and consistency across your survey questions) will help make the data collected across pilots comparable, and also allow you to draw some conclusions about whether you are having an effect on your population. In my research I intend to translate the ladder of engagement idea into something which relates more closely to formal democracy, and then to define online activities which have equivalence (where appropriate) with offline democratic actions. The underlying idea is one of progression: you plot where people are in terms of democratic engagement at the start of the project and then see whether or not they have moved over the course of your activities. Because you are gathering qualitative data as well as the easier quantitative stuff, you can find out more about people's motivations and their attitudes to the process.
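To make the progression idea concrete, here is a minimal sketch of how movement up the ladder could be scored between the baseline and follow-up surveys. The stage names, numeric values and respondent IDs are all placeholder assumptions of mine, not a settled coding scheme:

```python
# Hypothetical coding of a ladder of engagement onto a numeric scale.
# Stage names and values are illustrative placeholders, not a validated instrument.
LADDER = {"none": 0, "informed": 1, "commented": 2, "petitioned": 3, "organised": 4}

def progression(baseline, follow_up):
    """Movement up (+) or down (-) the ladder for each respondent present in both waves."""
    return {rid: LADDER[follow_up[rid]] - LADDER[baseline[rid]]
            for rid in baseline if rid in follow_up}

# Toy survey waves keyed by respondent ID.
baseline  = {"r1": "none", "r2": "commented", "r3": "informed"}
follow_up = {"r1": "informed", "r2": "commented", "r3": "petitioned"}

moves = progression(baseline, follow_up)
moved_up = sum(1 for change in moves.values() if change > 0)
```

The qualitative interviews then give you the "why" behind each respondent's movement (or lack of it), which the scores alone cannot.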

There are also all kinds of interesting social network analysis tools you can use to look at measuring social capital, but these are probably a bit too much for everyday use.
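For anyone curious what those tools actually measure, here is a toy sketch of two of the simplest network measures (density and degree) computed by hand; the participant names and ties are invented for illustration:

```python
# Toy interaction network: a tie means two participants replied to each other.
edges = {("ann", "bob"), ("bob", "cat"), ("ann", "cat"), ("cat", "dan")}
nodes = {person for tie in edges for person in tie}

# Density: the share of possible ties that actually exist, a very rough
# proxy for how connected (and how rich in social capital) the group is.
n = len(nodes)
density = len(edges) / (n * (n - 1) / 2)

# Degree: how many ties each participant holds; high-degree people are
# candidates for the 'active individuals' worth recruiting to a civic space.
degree = {person: sum(person in tie for tie in edges) for person in nodes}
most_connected = max(degree, key=degree.get)
```

Dedicated packages compute far more sophisticated measures than this, but even these two can flag who is holding a local conversation together.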

Good value for money?

Just one final thought – though we would all like to do these projects for the love of democracy and the common good the reality is that at some point we will be asked about value for money. This is a huge post in its own right but the basics are:

  • For communications projects: Equivalent ad spend figures can be a useful starting point

  • For community engagement projects: comparisons of the cost of recruiting people to a process, or the cost-effectiveness of running better-attended meetings with online support

  • For democracy engagement projects: democracy costs! But you can make some comparisons between online and offline methods. If you look at the 'cost of democracy' formula (yes – councils do have one) then online methods compare well to offline ones

Where you can make comparisons with offline methods, online always looks more cost-effective. The issue, of course, is that no-one wants to stop doing offline engagement – and nor should they. The trick then is to ensure that your pilots are not only creating online effects but also enhancing the existing offline process, for instance by reducing the cost of recruiting a citizens' panel or by ensuring that more people attend a public meeting.
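As an illustration of the kind of comparison I mean, here is a trivial cost-per-participant calculation; the figures are invented for the example and are not real council costs:

```python
# Illustrative figures only: not real council costs.
def cost_per_participant(total_cost, participants):
    """Crude value-for-money measure: spend divided by people reached."""
    return total_cost / participants

offline = cost_per_participant(5000, 40)   # e.g. a sparsely attended public meeting
online = cost_per_participant(1200, 150)   # e.g. an online consultation

# On raw reach the online method wins easily, but the real aim is for
# the online work to boost the offline process rather than replace it.
```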

Any use?

Well – this has been helpful for me as I will now try and write something very similar but far more detailed for my research proposal!

I am taking a break from the statistical wrestling necessitated by my exam next week to note down my thoughts on my questionnaire, as it's been preying on my mind and is a lot more interesting than the stats (or, to be more accurate, more interesting than SPSS – the stats are fine!).

The questionnaire seems to have taken on a life of its own. Its initial purpose was to provide a benchmark for my research so that I could measure whether the CitizenScape pilots had succeeded in making people more likely to do something democratic.  However, I now want to combine this with additional exploration of the similarities between the behaviours required for informal participation online and those required for formal democratic participation.  This is still a work in progress, but the main sections of the questionnaire will need to be:

  • Demographics: I want to find out a little about the respondents, and at the moment plan to look at age, gender, educational background and main occupation.  We should also find out where (if anywhere) they use the internet.  This section should support the link between age and online participation.  There will also be a question to define their role in the process, for example citizen / officer / member (though I want to make it possible for people to choose more than one role)
  • Current online activity: This section will look at what respondents already do online.  I want to cover transactions, social interactions and user-generated content.  This needs to be properly categorised, but I am aiming at a distinction between doing the weekly grocery shop, posting to a friend's Facebook wall, gaming and blogging.  We will also ask people to list their three most frequently visited sites
  • Current democratic activity: This section will look at how involved the respondents are in their local community, in communities of interest, or in democratic debate more generally.  Within this section we need to look at formal and informal activity, including democratic transactions such as petitions.
  • Where should this happen? This is the section I am struggling with most, as I want to ask a set of questions which explore where people want to talk about 'important' things, where they think they ought to be listened to, and whether they see the connection I am making between informal participation and formal decision making.  I want to ask questions which explore the idea of a virtual civic space and respondents' feelings towards it (including opinions about moderation and co-creation), but I am going to need to think more about how to pose these theoretical attitudinal questions, and also read more about questionnaire design to support this

The questionnaire will be administered online, on paper through the pilot sites (if they are prepared to do the admin for this), and also through some interviews, as I want to use it as the basis for a more substantial discussion with a few stakeholders (I might do this to pilot the questionnaire as well).  We will then run follow-up interviews as well.

At the end of this process I am hoping we will have a random sample of data from a number of UK sites as well as from the other pilot areas for both CitizenScape and EuroPetition.  The first three sections will serve as a benchmark for both these projects as well as being a standalone piece of research on the current informal/formal behaviours online.  The last section is much more focused on my research question.