Tina Phillips, Cornell Lab of Ornithology, Cornell University
The Cornell Lab of Ornithology is probably best known for creating and implementing the traditional form of citizen science where people go and collect lots of data out in the field and share it with scientists who are interested in answering some questions. I have been there for about 16 years and in that time I have been a practitioner running projects, I have been an evaluator, and I have been a researcher. I am going to give you a best practice from each of those different hats that I wear.
The first is from my researcher hat. A best practice would be to really know and understand your audience. As I said, we have done mostly the traditional form of citizen science, but in our one real effort to do crowdsourcing in the online digital mode we assumed we knew something about our audience. We said, we are going to put all of these images of birds on the website and we are going to get gamers, because gamers love to do this kind of stuff, right? Well, we were really wrong. We didn't get any gamers; we got people who already knew about the Lab, and what we thought we knew about them did not fit well with the design of the project.
We should have really taken the time to understand their needs, their wants, and their expectations for the project. That project, which was called CamClickr, was in some ways a failure right from the get-go because we just didn't understand the audience. Understanding your audience will obviously help you retain them over the long term, and we weren't able to sustain participation after the initial spike that Chris talked about. In contrast, eBird is a project where the Lab has been successful, and that is because we really understand our audience. We have project leaders who have face-to-face contact with many of the people who make up that community. So understanding the audience alone can make a vast difference in how successful a project is.
For my practitioner hat I am going to use the CamClickr example again. We over-designed that project. We had a wonderful designer who was very good at what she did, but we added all sorts of bells and whistles that actually made navigating the project very clunky. We built it in Java rather than on a web-based platform, so it was not simple; it was over-designed. That is a detriment.
There has to be a point at which practitioners say to designers, "Okay, we've done enough here. We don't need to add any more bells and whistles." Because I think the end result of all those additions is to actually diminish the science. On the one hand we were trying to make a very scientific project, but on the other hand we were trying to make it a game. We had leaderboards, which people did not like at all, and sound and video effects in the middle of playing the game. It was just overly designed. So I would encourage people: simple is best.
For the last point I will wear my evaluator hat. Even if you don't have the funds to do an evaluation, I think it's a really important process to imagine or pretend that you are going to evaluate your projects. What that does is make you really think about the plan. It makes you immerse yourself in thinking about your intended outcomes for the program and for the participants. You can use simple tools like logic models or theory of change to create a graphical representation of what you're providing the audience, the activities they're doing, the things you'll get out of it, and how you think all of those align with your outcomes.
We often develop projects with preconceived notions of what they need to be. Until you actually map a project out, share it with other stakeholders and the people who are involved, and recognize how many assumptions you are making about how it will achieve those outcomes, it is really easy to keep developing something without real alignment between your goals and what participants are actually doing.
So even if you don't have the money to do an evaluation, just pretend. Pretend that you actually are going to evaluate it so that you start using these planning tools early in your process. And there are lots and lots of resources out there to help people think about how to build their project with that evaluative framework in mind.
This presentation was a part of the workshop Engaging the Public: Best Practices for Crowdsourcing Across the Disciplines. See the full report here.