[Wiki Loves Monuments] 'evaluation'

Jaime Anstee janstee at wikimedia.org
Mon May 4 02:44:19 UTC 2015


Greetings Wiki Loves Monuments list members,


We really appreciate all your interest in this report. We regularly seek
program leader input on interpreting the data and on developing next steps
for learning from case examples. This applies to volunteer program leaders
and grantees alike, and we encourage continued participation in this shared
learning effort. All our channels are open, so you can choose how to reach
us. The goal of our team and our reports is to serve movement partners, so
we want to make sure we’re hearing and responding to your main concerns
about this year’s iteration of the Evaluation Reports.


That said, a lot of this thread seemed to be based on misunderstandings. We
wanted to clear some of them up, particularly around the report’s
background and inputs:


(1) Data collection efforts

Several people in this thread have asked how we gathered the data used in
these reports. The data came from the voluntary data collection survey [1],
from grant reports and their linked event pages, blogs, or supplemental
reporting, and from online tools that are also available to the
community [2]. This year’s project was first announced in September with a
clear outline of the metrics sought. Data collection and input of metrics
was initially open from September through December and was then extended
through February. The extension gave grantees, whose program data were
first mined from their grant reports, two additional months to connect us
with specific program leaders to fill in the gaps [4]. It also served as a
last call to the community, alongside the list of identified programs we
published in January 2015.


(2) Data limitations

Some people in the thread have been concerned about the limitations of the
data. We agree that we should be transparent about this, so each report has
a dedicated page that reviews the limitations of the data captured.
Importantly, the Wiki Loves Monuments evaluation report is part of an
expanded portfolio of the beta reports [3] modeled and discussed last year.
As a set of reports, we present the overall limitations of the reporting
and the issues with data access across each program [5] in the reporting
overview [6]. In those sections, we explicitly present the response rates
of program leaders who reported directly; for Wiki Loves Monuments, that
portion is 39%.


(3) Diversity of goals

The issue of programs’ diverse goals is also included among the overall
limitations and highlighted on the Wiki Loves Monuments limitations
page [7], where we point out that, yes, eight different goals were each
selected by at least 50% of those reporting directly. These reports are
part of a discovery process through which we have engaged in ongoing
dialogue about the challenges of metrics for quality, tool accessibility,
tracking and privacy issues, issues with valuation across different
socio-economic contexts, varied interests and foci, and other complexities
of measuring impact across the movement. We will continue those
conversations as we look to improve measurement strategies for
understanding movement-wide efforts and impact.


(4) Over-simplification

Some of you were also concerned about over-simplification: that nuance is
lost in simple summary statements such as “The average Wiki Loves Monuments
contest …”, statements that some of you said it “...hurt ... to see.” We
wrote these TL;DRs explicitly in response to feedback on last year’s beta
version of these reports. When we proposed the summaries then, we were told
they would be appreciated. Truthfully, these can be really painful
statements to have to write, because we know they are, by definition,
over-simplifications. However, we made that compromise in order to make the
information accessible to many different audiences of readers.


Importantly, rolling up metrics across several different points of program
implementation is a difficult task. By definition it sacrifices complexity,
as does producing the easy-to-digest snippets of information requested by
so many readers who are inundated by information in their inboxes. So, yes,
if you want the details, please skip the summaries and read the more
detailed narrative instead, or use them to guide your interest toward where
you wish to read more deeply. There is a lot of data to wade through, and
we have worked to make it as accessible as possible. We have tried to
format the reports in a linguistically and visually consistent fashion so
that these different reading routes are available, but differentiated, for
different reader preferences. Please give feedback on how this is working
and continue to share potential solutions, as we are always open to
improvements.


This email does not answer every question raised on the thread. Since we
expect some of these questions to come up again in the future, we have
outlined and answered the most important ones on the report talk page [8].
Please let us know if we have overlooked any, and join us there so that we
can continue the discussion and keep the information documented in a
central location for use in future strategy.


On behalf of the Learning and Evaluation team, thank you for your time and
participation in learning together.


Sincerely,


Jaime Anstee


Links:


[1] Data Collection Announcement and Blog Announcement

https://meta.wikimedia.org/wiki/Grants:Evaluation/News/Round_II_Announcement

http://blog.wikimedia.org/2014/09/24/quantitative-versus-qualitative-more-friends-than-enemies/

[2] Tools (Wikimetrics, GLAMorous, CatScan, Quarry)

http://tools.wmflabs.org/

[3] Evaluation Reports (beta)

https://meta.wikimedia.org/wiki/Grants:Evaluation/Evaluation_reports/2013

[4] Blog on “Filling in the Gaps”

http://blog.wikimedia.org/2015/01/29/fill-in-the-gaps/

[5] Overall reporting limitations and data access

https://meta.wikimedia.org/wiki/Grants:Evaluation/Evaluation_reports/2015/Limitations


https://meta.wikimedia.org/wiki/Grants:Evaluation/Evaluation_reports/2015/Data_access

[6] Reporting Overview (if you are new to the reports and evaluation
initiative, we suggest starting at the Important Definitions page and
working your way through the other tabs to answer your questions):

https://meta.wikimedia.org/wiki/Grants:Evaluation/Evaluation_reports/2015/Important_definitions

[7] Wiki Loves Monuments report limitations

https://meta.wikimedia.org/wiki/Grants:Evaluation/Evaluation_reports/2015/Wiki_Loves_Monuments/Limitations

[8] Wiki Loves Monuments evaluation report talk page

https://meta.wikimedia.org/wiki/Grants_talk:Evaluation/Evaluation_reports/2015/Wiki_Loves_Monuments


________________________________________


Jaime Anstee, Ph.D.

Program Evaluation Specialist

Wikimedia Foundation


Imagine a world in which every single human being can freely share in the
sum of all knowledge. Help us make it a reality!

https://donate.wikimedia.org

