Canadian Supreme Court rules against Google in favor of worldwide court orders



Artwork by Sandro Botticelli, public domain.

Last month the Supreme Court of Canada issued its ruling in the Google v. Equustek case, holding that Google must remove search results worldwide for URLs leading to web pages selling goods that violate Equustek’s trade secrets. We intervened in the case on behalf of Google, and we respectfully disagree with the court’s decision. When national courts impose worldwide judgments, they risk infringing on the free expression rights of people around the world, both to publish and to access information online. This impact could be felt across the globe, particularly by sites such as the Wikimedia projects, which host content that some countries claim should not be freely available.

As noted in our previous blog post on this lawsuit, the case concerned the sale of products which appear to have been based on trade secrets owned by Equustek and taken unlawfully by a competitor. The Canadian court never held a full trial, but rather ruled in the context of an interlocutory (i.e., temporary) injunction. This type of injunction is supposed to last a short time, minimizing harm while a trial is ongoing and offering a party a temporary legal remedy for the matter they are seeking to resolve in court. Unfortunately, under the Canadian court system, Equustek is not required to set a date to complete the trial. There is a good chance that Equustek will rely on the temporary order without ever moving forward with the trial, effectively making it final.

Equustek filed a court application against Google, a third party to the underlying case, because it could not reach the actual infringer, which continued to sell infringing goods but had moved its business outside Canada. Because Google is a large company with many users, Equustek thought it could solve its problem by going after Google instead, even though Google was not part of the original lawsuit and, as Equustek admitted, was not breaking any laws itself. The Canadian courts found that because Google was enabling others to find the infringing products, Equustek could obtain a Canadian court order requiring Google to delist certain URLs from search results, and that this delisting obligation extended to every domain Google owns, no matter where the user viewing the search results was located.

Google appealed this case on jurisdiction and free expression grounds. In October 2016, the Wikimedia Foundation filed an intervention, similar to the amicus briefs we often file in American courts. Many media, free expression, and digital rights groups did the same. In our brief, we urged the court to consider the free expression concerns that arise from worldwide orders, the negative impact and dangerous precedent this could set for the ability to find and access information online, and to exercise general respect for the differing laws of other nations.

Ultimately, the Canadian Supreme Court disagreed with our position. The court focused on Equustek’s trade secret claim, which it observed would be a legal wrong in most jurisdictions, and held that the order requiring Google to delist all sites involved in the sale of the goods in question did not implicate freedom of expression.

The court’s opinion focuses particularly on Google as a major internet company with substantial resources. The opinion notes that if Google believes there is a conflict between the laws of Canada and another country, it can return to the Canadian courts and ask for the order to be modified. The same would apply if Google could point to an effect on freedom of expression in this case. This may be feasible for Google, but the court failed to acknowledge the difficulty that similar orders could present for smaller, less well-resourced organizations such as the Wikimedia Foundation. Seeking modifications of overly broad injunctions could force members of an organization to travel to a foreign court several times over, posing a significant financial obstacle, especially for newer or smaller websites. A wave of such orders could stifle a website before it ever has the chance to get off the ground.

Though the case did not reach the outcome we had hoped to see, limiting factors may prevent the court’s ruling from being read too broadly. First, the case does not apply to everyone on the internet, as the power exercised by the Canadian courts here relied in part on the fact that Google (the U.S. company) was actually selling ads to Canadians. A website that was not targeting citizens of a particular country for commercial sales may not have been subject to an order like this one. The decision also does not address cases that do not involve trade secrets or the unlawful sale of a product. For example, it does not address the sorts of free expression issues that the Wikimedia projects often face, such as a disputed copyright in a remix.

However, a dangerously expansive reading of this case could be used to seek global orders that place limits on free expression and broad access to knowledge. In our view, the court did not adequately take into account the potential of misuse of its decision, despite the large number of intervenors explaining the harmful implications to which a broad reading of the case may lead. Worryingly, large entertainment industry associations—who were also represented at the Supreme Court—have already pointed to this decision as precedent for allowing them to obtain global orders outside of the trade secret context.

With similar demands for global delisting coming from countries in Europe, this case may encourage courts around the world, including those in countries with weaker free expression protections, to attempt similar rulings in order to block access to information worldwide. Ultimately, such actions harm the ability of the Wikimedia movement to create and share knowledge freely.

Jacob Rogers, Legal Counsel
Wikimedia Foundation

We would like to extend our sincerest gratitude to McInnes Cooper, and in particular to David Fraser for their excellent representation in this matter. We would also like to extend special thanks to legal fellow Leighanna Mixter for her assistance in preparing this blog post.

from Wikimedia Blog

Investing in our shared future, supported by AI: Announcing the Scoring Platform team



Illustration by Mun May Tee-Galloway, CC BY-SA 4.0.

On January 12, 2015, an editor by the name of Blank123456789 noted that “LLAMAS GROW ON TREES” in the article about Dog intelligence. Within a second, the edit was flagged by an algorithm as potentially problematic.

Another Wikipedia editor named IronGargoyle saw this flagged edit in an advanced curation tool called Huggle. With a glance, he was able to identify the edit as problematic and revert it. This whole interaction took a matter of seconds. A vandal vandalizes, and a patroller supported by advanced vandalism-detection artificial intelligences (AIs) sees the problem and corrects it.

Out of the 160,000 edits that the English Wikipedia receives every day, about 4,000 (2.5%) are vandalism, which has a very specific meaning on Wikipedia: editing (or other behavior) deliberately intended to obstruct or defeat the project’s purpose. Reviewing the hundreds of thousands of edits that take place every day would be a monumental task, but the volunteer community of Wikipedia editors has managed quite well, today largely thanks to AIs developed to support them.

AIs ease the work of maintaining massive encyclopedias, dictionaries, databases, and more by making large-scale tasks (like counter-vandalism and article quality assessment) much quicker to spot and handle. Historically, the AIs that have helped Wikipedians were built and maintained by volunteers. While these systems filled a critical infrastructural role, they were generally only available for the English Wikipedia and did not scale well.

Over the past few years, I have been working alongside a large group of volunteers on a core technology that makes basic AI support for wiki work much more accessible to developers who are not AI specialists. Named “ORES,” it is an artificial intelligence service that makes predictions about which edits are vandalism, which new page creations are problematic, and which articles are ready to be nominated for Featured status. (See our past posts about how it works and about measuring content gaps with ORES.)
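To give a flavor of how tools talk to ORES: the service is queried over HTTP and returns prediction scores as JSON. Below is a minimal Python sketch of building such a request URL, assuming the v3 scores endpoint layout (`/v3/scores/{wiki}/` with `models` and `revids` parameters); check the ORES documentation for the authoritative interface.

```python
from urllib.parse import urlencode

ORES_HOST = "https://ores.wikimedia.org"

def build_scores_url(context, rev_ids, models=("damaging",)):
    """Build an ORES v3 scores request URL.

    context  -- wiki database name, e.g. "enwiki"
    rev_ids  -- revision IDs to score
    models   -- prediction models to query, e.g. "damaging", "goodfaith"
    """
    query = urlencode({
        "models": "|".join(models),
        "revids": "|".join(str(r) for r in rev_ids),
    })
    return f"{ORES_HOST}/v3/scores/{context}/?{query}"

# A GET request to this URL returns JSON mapping each revision ID to the
# model's probability that the edit is damaging.
url = build_scores_url("enwiki", [642215410])
print(url)
```

Tools like Huggle consume scores of this shape to decide which edits to surface to patrollers first.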

Without a doubt, the project has been a breakaway success. The beta feature has 26,000 active users and over 20 third-party tools, is actively running in production, and has received positive write-ups in Wired, MIT Technology Review, and the BBC. As a result, we’ve become a leader in conversations around detecting and mitigating biases, and have built collaborations with researchers at UC Berkeley, UMN, CMU, Télécom Bretagne, and Northwestern.

Developing and maintaining ORES requires a lot of consistent effort and vision, and we recently requested resources from the Wikimedia Foundation to formally support the project. With a budget and broader mandate in place, we can now focus on bringing new ORES models to production, improving performance, and extending accountability.

Meet the Scoring Platform Team

Photo by Myleen Hollero/Wikimedia Foundation, CC BY-SA 3.0.

The new Scoring Platform team is led by Aaron Halfaker, a principal research scientist who authored a series of studies into Wikipedia’s newcomer decline and designed Snuggle, a newcomer socialization support tool.  ORES is the next item on Dr. Halfaker’s research agenda.  He hypothesizes that by enabling a broader set of people to build powerful, AI-driven wiki tools, some of Wikipedia’s fundamental socio-technical problems may become much easier to solve.

Photo by Mardetanha, CC BY-SA 4.0.

Amir Sarabadani will be continuing his work as a quasi-volunteer and contractor for our peer organization, Wikimedia Germany.  Amir has developed several bots and bot-building utilities that are used to maintain content in Wikipedia and Wikidata.  Amir has been a core contributor since the early days of the volunteer-driven “Revision Scoring as a Service” project, and is the primary author of our insanely popular Beta feature—the ORES Review Tool.

Photo by Adam Wight, CC BY-SA 3.0.

As of this month, the team is welcoming their first full-time, budgeted engineer, Adam Wight. He has worked with the Wikimedia Foundation’s fundraising team since 2012 and has volunteered for ORES and the Education Program. Outside of computers, he’s done a few eclectic things like helping to start “The Local” food co-op and People’s University, an open-air school on subjects ranging from philosophy to the history of adventure playgrounds and practical blacksmithing. Adam is currently working out the details of an auditing system that will allow humans to more effectively critique ORES’ predictions.

Where we plan to go next

In the next year, the Scoring Platform team will work in three new directions:

  • Democratizing access to AI. We’ll increase the availability of advanced AIs to more wiki communities.  Small but growing communities need AI support the most, so we’ll be targeting these emerging communities to make sure they are well supported.
  • Developing new types of AI predictions.  The team is currently experimenting with new types of AIs for supporting different types of Wikipedians’ work.  We’re collaborating with external researchers to develop prediction models.
  • Pushing the state of the art with regards to ethical practice of AI development.  AIs can be scary in all sorts of ways.  They can perpetuate biases in hidden ways, silence the voices of those who don’t conform, and simply operate at speeds and scales far exceeding mere humans.  We’re building a human-driven auditing system for ORES’ predictions so that human contributors will have a new and powerful way to keep ORES in check.

Until now, ORES has been primarily a volunteer-driven project.  With minimal financial support, a ragtag team was able to build a production-level service that supports 29 languages and 35 different Wikimedia project wikis.  The ORES Review Tool (a simple tool for helping with counter-vandalism work) has been a breakaway success, with over 26,000 editors installing the beta feature before it was enabled by default.

How to learn more and get involved

The Scoring Platform team welcomes collaboration and volunteers to get involved with the project.  See the team’s page and our technical blog for more information about how to get involved.  See ORES’ documentation for more information about using the service or getting support for your wiki.  Or join the larger community of people interested in applying AI to make wikis work better via our mailing list and IRC channel (#wikimedia-ai on freenode).

Aaron Halfaker, Principal Research Scientist, Scoring Platform team
Wikimedia Foundation

from Wikimedia Blog

Join Wikimedia volunteers and free knowledge leaders for Wikimania 2017 in Montréal



Photo by Alain Carpentier, edited by Sting, CC BY-SA 3.0.

Registration is open for Wikimania 2017, the annual conference celebrating Wikipedia and its sister free knowledge projects. The event will take place in Montréal, Canada from August 9–13, the first time ever in Canada. Free knowledge leaders and hundreds of volunteer editors will come together at the Centre Sheraton Montréal for three days of talks, discussions, meetups, training, and workshops.

At this event, the biggest wiki gathering of the year, attendees will discuss the advancement of free knowledge, the role of academia, cultural institutions, and technology in free knowledge, privacy and digital rights, and the future of the Wikimedia movement. The conference’s programming spans a diverse range of topics, from modeling and ingesting performing arts-related data into Wikidata to evaluating the impact of journals on Wikipedia (and dozens more).

This year’s conference—which is co-organized by the local independent affiliate organization, Wikimedia Canada and the Wikimedia Foundation— will also present a unique opportunity for participants to engage in Wikimedia 2030: a global consultation to define Wikimedia’s future role in the world. Over the past six months, hundreds of volunteer editors, Wikimedia affiliates, experts, researchers, donors, readers, and partners have joined the conversation to consider what the world will look like in 2030, and what we want to achieve as a free knowledge movement. At Wikimania 2017, participants will be able to join in-person sessions and conversations to discuss what we’ve learned so far and where we want to go in the future.

What you’ll find at this year’s Wikimania

Andrew Lih (User:Fuzheado), who has attended every Wikimania since 2005, says that he’s learned something new every time he’s attended Wikimania. “No matter the logistical complications, there has never been a bad Wikimania because the people in the community are so incredibly interesting and productive,” he says. “Just getting them together in one room always yields great collaborations.”

This year, Andrew says he’s particularly looking forward to the first time French has been a significant part of any Wikimania conference. “We’re collaborating with the Bibliothèque Nationale du Québec (BAnQ) during the event, as they have expressed interest in projects beyond just Wikipedia—in Wikisource and Wikidata,” he says. A keynote speech will feature Frédéric Giuliano, the Archivist-Coordinator at BAnQ, in conversation with Hélène Laverdure, Curator and Director General of the National Archives at BAnQ.

Other keynote speakers include:

  • Susan N. Herman, the President of the American Civil Liberties Union.
  • Jimmy Wales in conversation with Gabriella Coleman, the Wolfe Chair in Scientific and Technological Literacy at McGill University, facilitated by Evan Prodromou, an Internet entrepreneur and wiki and software developer based in Montréal.
  • Esra’a Al Shafei, a Bahraini human rights activist, outspoken defender of free speech, and founder of a network of online platforms that amplify under-reported and marginalized voices.
  • Katherine Maher, the Executive Director of the Wikimedia Foundation, the nonprofit organization that supports Wikipedia and its sister projects, in conversation with Christophe “schiste” Henner, Chair of the Board of Trustees of the Wikimedia Foundation.

In addition, there are more than 100 community-submitted talks and more than a dozen workshops on topics including making access affordable, collaboration under censorship, and legal threats to free knowledge. Wikimedia Canada will also feature programming related to Canada’s cultural heritage and free knowledge communities.

“Wikimedia Canada members will present inspiring projects at Wikimania 2017, the result of successful collaborations with several Canadian public and private institutions,” explains Wikimedia Canada president Benoit Rochon. “The archives of BAnQ (Bibliothèque et archives nationales du Québec) on Wikimedia Commons, which have been viewed more than 30 million times; the first Aboriginal encyclopedia in Canada on Wikipedia, in Atikamekw, one of the largest and still-active First Nations languages; and the WikiMed conference are all unique accomplishments we will proudly share with the international Wikimania participants.”

Wikimania registration is now available through the Wikimania registration page.

Remote Participation

If you cannot attend Wikimania in person, there are still opportunities to follow along. Select sessions will be livestreamed throughout the conference on YouTube and Facebook Live on the Wikipedia Facebook page; more information will be available in early August. You can also follow @Wikimania and the hashtag #wikimania on Twitter. We will round up presentations and highlights for both Meta and our blog after the conference ends.

Melody Kramer, Senior Audience Development Manager, Communications
Wikimedia Foundation

from Wikimedia Blog

Introducing training modules: Multilingual resources for combating harassment



Photo by Rey Ramon/US Air Force, public domain.

When incidents of online harassment or abusive behavior arise across our movement, community leaders and functionaries often need to make difficult judgment calls.

To help community leaders arbitrate and resolve these incidents, the Wikimedia Foundation Support and Safety team is launching six online training modules for functionaries and community governance groups to use as a resource.

These multilingual resources, designed in conjunction with community leaders from across the movement, are intended to help community leaders respond consistently to harassment or abuse both online and off, and cover topics related to keeping events safe and managing online harassment. In this post, we’ll detail how the modules were designed, what they contain, and how to learn more.

Designing the modules

The six training modules were purposefully designed using a community-centered approach: Communities have the best insight into their own projects, speak the language of their projects, and are on the front lines when dealing with incidents of online and offline harassment.

Responses to the 2015 harassment survey suggested that improving Wikimedia’s governance and creating standardized information about harassment and abuse were of great interest to the community.

Work on the training modules began by launching a series of multilingual surveys. We asked key community groups about the challenges they have faced with harassment, how their current workflows and tools help or hinder them in dealing with abuse, and, if warranted, what they considered effective methods of training.

Several dozen members of various Wikimedia communities responded, and these responses indicated the challenges currently facing stewards, administrators, Arbitration Committee members, and others responsible for mediating and resolving issues with user interactions.

We also reached out to outside academics and industry professionals who research or work in the fields of online collaboration and community health. They provided us with their thoughts on what content to include, as well as the best methods for delivering this type of training.

In addition to these direct surveys and interviews, we invited opinion on what form these training modules should take, where they should be hosted, and how they should be presented. Should they be long-form, detailed modules, or presented in shorter, more easily digestible chunks?

Responses indicated a preference for these training modules to be presented as “slides”: short-to-medium-length interactive sections on specific topics that could be individually linked to and shared.

We assessed Meta-Wiki and the “Training” setup there, but determined that neither was effective for presenting this type of information. The Program and Events Dashboard, however, checked all of the necessary boxes: training can be interactive, accessible, and, crucially, available in multiple languages.

With that decision made, it was time to really dive into the content.

Creating the content

Starting in October 2016, the Support and Safety team spent three months working on content, focusing on two distinct topics: “Keeping Events Safe” and “Dealing With Online Harassment”.

The first drafts of both modules covered the vast intricacies of harassment on the Wikimedia projects. Once they were complete, we asked for feedback about the drafts from the Wikimedia community. The feedback helped us narrow our focus and really think about what content needed to be included.

As a result, we split the original training module for “Dealing with Online Harassment” into five separate subsections:

  1. Harassment fundamentals
  2. Other forms of harassment
  3. Communication best practices
  4. Handling reports
  5. Closing cases

The content was then marked up for translation on Meta-Wiki. The Dashboard is set up so that translated text on Meta-Wiki can be imported into it, which allows the usual translation workflow to apply here as well: translators never have to leave Meta-Wiki. This also allowed the Dashboard to use the Translate extension’s features, such as aggregate groups, which let multiple related pages be translated at the same time.

Some translation has been completed by staff or contractors as time permits, while other translation has been provided by the incredible volunteer translator community.

We’d like to thank the many volunteers who have worked on this effort. Hebrew is now fully translated (by User:Lionster), while Satdeep Gill translated the vast majority of the content into Hindi. Polish is three-quarters done, and Vietnamese is more than halfway to completion. Other languages, such as Japanese, Arabic, Greek, Dutch, and Bangla, are quite far along the road to complete translations.

We on the Support and Safety team would like to use this announcement to give our heartfelt thanks to everyone involved with the translations. These modules contain a lot of text, and we understand that much of it is very wiki-focused and detailed. This sort of content is by no means easy to translate, so we fully appreciate all of those who made the effort to bring these modules to their home communities.

If your language isn’t covered, you can help! Links to translate each module are on Meta-Wiki.

How to learn more

The Support and Safety team will have representatives in attendance at Wikimania in Montréal, where we’ll be talking about the training modules and the process behind them. We’ll also be introducing documentation on how to create modules from scratch, as well as advising contributors on what the modules might be best used for. We hope you’ll join us there. We can also be reached with questions or comments and will do our best to assist.

Joe Sutherland, Community Advocate, Community Engagement
Wikimedia Foundation

from Wikimedia Blog

Combating misinformation, fake news, and censorship



Photo by Jamain, CC BY-SA 3.0.

Few people have faced the dangerous consequences of unresolved conflict as personally as Ingrid Betancourt did in 2002, when the then-presidential candidate was kidnapped by the Revolutionary Armed Forces of Colombia (FARC) for six years. She now encourages others to protect free knowledge and credible sources as an advocate for peace.

Betancourt recently told the Wikimedia Foundation that she believes that “values are important in the spreading of free knowledge. … Fake news is dangerous. Spinning the news is very dangerous. You can distort information to obtain a result.” One of the biggest threats to trustworthy information starts with how people evaluate sources (read how Wikipedians do it). Some may focus on content created with a profit motive, others point to government-controlled propaganda, while others say the problem starts with fake news (the subject of a recent discussion at Yale University attended by members of the Wikimedia legal team).

With so many different aspects to focus on (or neglect), misinformation threatens to delay all kinds of efforts to strengthen trustworthy knowledge across political and social divides.

Thinking about misinformation in two ways: content and access

As part of the Wikimedia 2030 strategy process, researchers at Lutman and Associates and Dot Connector Studio assessed over one hundred reports, articles, and studies to review how misinformation threatens the future of free knowledge. The assessments include dozens of powerful examples of how misinformation can have far-reaching and devastating consequences.

The researchers had a big task set out for them, so they divided and conquered the broad scope of misinformation trends by splitting the matter into two categories: first, a “content” category, which focuses on trends affecting the sources used by Wikimedians to develop reliable information; and second, an “access” category, which refers to “how and whether Wikipedia users are able to use the platform.” Their framework allows for the comparison of different sources of disruption (technology, government/politics, and commerce) that undermine both the use of verifiable sources and access to trustworthy information.

Consider how the sources of reliable information (in this framework, the “content” category) are influenced by misinformation created or shared by governments and political groups.

This spread of misinformation online is occurring despite recent growth in the number of organizations dedicated to fact-checking: worldwide, at least 114 “dedicated fact-checking teams” are working in 47 countries.

Looking into the future, what’s safe to expect? First, global freedom of expression will wax and wane depending on national and international political developments. Less clear is whether global trends toward autocracy will continue, or whether free societies will have a resurgence, grappling successfully with pressures on the press and the academy and with the politicization of facts into merely biased individual perspectives.

Second, we can expect that politically motivated disinformation and misinformation campaigns will always be with us. Indeed, the phenomenon of “fake news,” misinformation, or “alternative facts” can be traced to some of the earliest recorded history, with examples dating back to ancient times.

The Wikimedia movement will need to remain nimble, and its editors will need to become well-versed in the always-morphing means by which information can be misleading or falsified. It will be helpful to keep abreast of the techniques journalists and researchers use when verifying information, such as those described in the Verification Handbook, available in several languages.

While the researchers were tasked with informing the Wikimedia community about the prospects for trustworthy knowledge in the future, their insights may also inform discussions among other professionals confronting misinformation and falsified materials, including academics, journalists, policy-makers, and thinkers.

For Betancourt, ensuring “equality among human beings” requires us to talk to people we disagree with. While it may appear to be a counter-intuitive method, finding common ground is an essential part of establishing the guidelines of trustworthy knowledge-sharing in politically toxic environments. In her experience as a real-life hostage, “not talking or refusing to communicate… was a worse attitude than communicating.”

How can the Wikipedia community combat misinformation and censorship in the decades to come? The researchers offered their own suggestions, and we invite you to join us to discuss the challenges posed by this and other research.

Margarita Noriega, Strategy Consultant, Communications
Wikimedia Foundation

Chart by Blanca Flores/Wikimedia Foundation, CC BY-SA 4.0.

from Wikimedia Blog

WikiWomenCamp kicks off to bring more inclusivity to Wikipedia



Photo by Carolina De Luna/Wikimedia Mexico, CC BY-SA 4.0.

Wikimedia Mexico invited women from all over the world to Mexico City with a question and a challenge: What does a truly inclusive Wikimedia movement look like, and how do we attain it?

The camp dived straight into collaboration, endeavoring to topple one monumental barrier for thousands of editors: Wikipedia’s ever-pervasive gender gap.

Five years in the making, the meetup brought together an all-star team of contributors from every corner of the movement.

The importance of women’s participation in editing Wikipedia was already common knowledge among the over 50 WikiWomen in attendance. After all, according to WikiProject Women in Red, less than 20% of Wikipedia’s contributions are from women, and LGBTQ members of the community report feeling excluded—or worse, endangered—for daring to contribute.

The community is working together to combat this trend: volunteer-led initiatives like Editatona and Women in Red continue to move the needle every day, and Foundation employees have been intensifying their efforts to combat online harassment and increase diversity on Wikipedia, especially during Women’s History Month.

That’s why a large portion of the event was reserved for building better support systems, online and off. Whose voices are we missing from the movement, for example? Who are we inadvertently silencing? How can we make editing safer for trans Wikipedians, who often edit anonymously for fear of being harassed or doxxed online? Wikimedia México even hired translators with extensive backgrounds in supporting women’s rights in digital spaces, like Erika from Dominemos las Tecnología.

Photo by Blossom Ozurumba, CC BY-SA 4.0.

The meetup challenged everyone to offer more than just support: more often than not, women were excited to band together in small sessions, learning as much as they could from (and about) each other before the inevitable end of the weekend.

An introductory course on Wikidata, breakout sessions on incorporating editing skills into education, and over a dozen lightning talks inspired attendees to start online groups and communities that will hopefully thrive long after WikiWomenCamp has come to an end.

On top of learning new skills and building a stronger offline community, WikiWomenCamp was a fantastic opportunity to talk strategy. Adele Vrana, the Wikimedia Foundation’s director of strategic partnerships, invited the community to tell the Foundation in no uncertain terms what was working and what wasn’t. The responses championed the need for greater connectedness to the movement and to each other.

Aubrie Johnson, Social Media Associate
Wikimedia Foundation

from Wikimedia Blog

Bringing the magic of classical music to Ukrainian speakers



Mozart graffiti, 2013. Photo by Vitold Muratov, CC BY-SA 3.0.

Collaboration has reached a new level in Kiev, Ukraine, where professional musicians are bringing the magic of Mozart, Chopin, and other classical composers to Ukrainian speakers and releasing their work under free licenses.

The latest installment in the long-running World Classics in Ukraine (WCU) project came last month. On June 18, Ukrainian musicians presented parts of The Marriage of Figaro, Don Giovanni, and The Magic Flute, all Mozart operas that premiered in the 1780s–90s. The following day, another set of musicians performed Chopin’s Polish songs, a set of 19 texts that Chopin set to music at various points in the 1800s.

You can hear, watch, read, sing, and download the music for yourself. Audio and video from the performances, along with Ukrainian scores and lyrics, are available on Wikimedia Commons. Their free licenses mean that anyone, anywhere can use the works with a minimum of stipulations, such as attributing the creators and releasing any remixes under a similar copyright license.

Wrapping up one of the concerts. Photo by Василь Шевченко, CC BY-SA 4.0.

Organizer and Ukrainian Wikipedian Andriy Bondarenko says that everyone should have these sorts of works in their native language so that the music can make deep connections with the individual. “World Classics in Ukraine’s goal is to give the public access to translations of all the masterpieces of vocal music,” he says.

Obtaining the necessary ingredients for these performances has occasionally proved to be a challenge. The translations for the three Mozart operas performed last month, for example, came from historical translations completed by two people, both now deceased. WCU project supporters had to work with their heirs to get permission to use the translations, proofread them for accuracy (as some information had been lost over the years), and upload them under free licenses. In another example, WCU was able to collaborate with Olena O’Lear, a well-known Ukrainian translator, to obtain a complete version of Dido and Aeneas.

This hasn’t always been successful. Bondarenko says that the team “managed to find translations of two songs performed by Borys Ten, but because Ten left no heirs, they are both orphan works.”

Still, they soldier on. With the latest recordings, WCU has obtained over 200 minutes of classical music. Bondarenko hopes to grow this into dozens or even hundreds of hours over the coming years. “The bulk of the great classics have yet to be translated,” he says. “All we need are the translators.”

Ed Erhart, Editorial Associate, Communications
Wikimedia Foundation

Interested in starting a similar project in your own community? Read more about a similar but unrelated initiative in Spain to recruit musicians, record concerts, and release the results under free licenses.

from Wikimedia Blog

Community digest: Wikipedia for Peace, editing to celebrate diversity at WorldPride Madrid; news in brief



Photo by Malopez 21, CC BY-SA 4.0.

WorldPride is an international event that aims to promote LGBT issues by holding parades, festivals, and cultural activities during celebrations of the anniversary of the Stonewall riots. The event has previously been held in Rome, Jerusalem, London, and Toronto, and this year in Madrid, Spain.

We decided to join WorldPride with a Wikipedia event, so that we could help highlight LGBT issues by adding content about them to Wikimedia projects. Fifteen participants from Russia, Kyrgyzstan, Ukraine, Poland, the UK, Germany, and Spain attended the event, held from 23 to 27 June 2017. The group created 49 new articles in 10 different languages, took and uploaded more than 100 photos to Wikimedia Commons, and more.

The editing workshop took place at Medialab-Prado, a cultural space in the city center of Madrid. During the event, Wikipedian DaddyCell advised the group on possible topics to write about and helped the new editors learn basic editing skills.

The event was organized by Wikipedia for Peace, a community project to improve Wikipedia’s content on social movements, justice, and peace. So far, two writing camps have been held for the project in Austria, in 2015 and 2016, organized by Wikimedia Austria and Service Civil International Austria. In 2017, the project expanded to Germany and Spain, this year’s host for the WorldPride event.

Contributing to Wikimedia projects wasn’t our only activity at WorldPride. Our free-time activities included attending Mayte Martin’s concert, a benefit concert at Matadero for LGBT people with functional diversity, and city tours of Madrid. We also took the opportunity to attend the human rights summit in Madrid that began on 26 June.

The organizers provided free tickets for our participants and two press passes to take photos during the talks. Some memorable moments included meeting Frank Van Dalen, the vice-president of InterPride, and hearing the talk by Jóhanna Sigurðardóttir, the former prime minister of Iceland and the first openly gay head of government. The next day, we were invited to attend the round table discussion in the main auditorium, where Florence Claes of Wikimedia Spain led a vital discussion on “the role of the internet and social networks in making minorities visible.”

The next WorldPride will be held in New York in 2019, and we are considering plans for a similar event there.

Saskia Ehlers, Wikimedia Germany
Rubén Ojeda, Wikimedia Spain

In brief

Wikimedia affiliates update: The Wikimedia Affiliations Committee (AffCom) has recognized four new user groups. The Wikipedia Library user group will aim to combine and multiply collaboration with libraries and librarians, bringing edit-a-thons hosted at libraries, the Wiki Loves Libraries outreach campaign, and the broader institutional and publisher outreach of the Wikipedia Library into a single forum open to all Wikimedia community members and any librarians interested in working with Wikipedia. Odia Wikimedians aims to bring together contributors to Odia-language Wikimedia projects, as well as individuals who contribute to other Wikimedia projects on topics related to the Odia language, the Odia people, and the Indian state of Odisha. Wikimedians of Cameroon aims to support Wikimedia projects in Cameroon and the Cameroonian Wikimedians who contribute to them. Hindi Wikimedians will work to support Wikimedia projects and their contributors in the Hindi language. Congratulations to the new affiliates.

Belgian Wikipedians celebrate freedom of panorama with a photography contest: This month, the Wikimedia community in Belgium is holding the Wiki Loves Public Spaces photography contest, marking one year since freedom of panorama came into force in Belgium. The change allows photographers to take and freely share photos of buildings and works of art in public spaces.

MedinaPedia: Participants in the MedinaPedia project in Tunisia have begun installing QR codes on monuments in the Medina of Tunis. The QR codes will let visitors get information from Wikipedia about each monument in multiple languages.

Swedish court rules against freedom of panorama: The Swedish Patent and Market Court ruled against Wikimedia Sweden (Sverige) in a lawsuit filed by Visual Copyright Society in Sweden (see previous blog coverage). Wikimedia Sweden, an independent chapter in the country, had created a database of Swedish public art with photos from Wikimedia Commons.

Outreach activities in Nigeria: Last week, the Wikimedia user group Nigeria held a basic Wikipedia training workshop at the Nigerian Institute of Journalism in Lagos. The user group also participated in Open Data Day 2017, where Wikipedian Sam Oyeyele gave an introductory workshop on Wikidata.

Cycle three of the movement strategy discussion has started: In the first two cycles of the 2017 Wikimedia movement strategy, the community expressed its opinions on what the movement should achieve and what challenges and opportunities it faces. This cycle is dedicated to considering the challenges identified by the research and exploring how we may want to evolve or respond to changes in the world around us. Each week in July, a new challenge and related insights will be posted so that you can share how it connects to or changes your perspective on our future direction. Learn more and join the discussion on Meta.

Compiled and edited by Samir Elsharbaty, Digital Content Intern
Wikimedia Foundation


from Wikimedia Blog

Yale Law School and the Wikimedia Foundation create new research initiative to help preserve and protect the free exchange of information online



Photo by Daniel Schwen, CC BY-SA 3.0.

The Wikimedia Foundation and the Information Society Project (ISP) at Yale Law School recently expanded their longstanding collaboration to focus on raising awareness and conducting research related to threats against the intermediary liability protections that enable online platforms to act as neutral third parties in hosting user-generated content. Through the Wikimedia/Yale Law School Initiative on Intermediaries and Information (WIII), the Wikimedia Foundation will support a Research Fellow based at Yale Law School. The initiative will support research on policies, legislation, and threats related to intermediary liability and hyperlinking.

Intermediary liability protections are a critical component of the open internet, supporting the free exchange of information online. Threats to hyperlinking and efforts to hold intermediaries liable for user content have become areas of increasing concern to the Wikimedia movement. Over the past several years, international policies and litigation have threatened to undermine the ability of users and platforms to freely link across the web. The need to monitor and raise awareness around these threats, and to promote ways that help safeguard the freedoms currently in place, has become more vital than ever.

You may be asking yourself: What is intermediary liability?

For more details on intermediary liability, see our policy website.

Every day thousands of people contribute text and images to the Wikimedia projects, develop and support self-governing editorial policies, and work collaboratively to evaluate and resolve conflicting views about  facts, relevance, or the copyright status of a work. The Wikimedia Foundation acts as an intermediary, or a neutral third party, by hosting and supporting Wikimedia projects without controlling what people write and contribute to the sites. As a consequence, the Wikimedia projects are neutral, open platforms where people are free to share knowledge and learn.

Intermediary liability protections shelter intermediaries—such as internet service providers, search engines, social media platforms, and the collaborative projects hosted by the Wikimedia Foundation—from liability for the content they host. In our case, these laws and regulations allow the Wikimedia projects to host users’ contributions from around the world without being held legally responsible for the expression of those users. These protections undergird the fundamental attributes of an open internet—for example, the ability to link to websites throughout the world and contribute new content via online platforms. Intermediary liability protections relieve the Foundation of what would otherwise be the near-impossible obligation to make constant editorial determinations about the tens of thousands of edits made to Wikimedia sites each hour.

It is essential to retain these protections for intermediaries and the user-generated content that they host in the face of recent threats that call into question the ability to freely hyperlink to other websites. National governments, through legislation or in some cases naked exercises of authoritarian power, are increasingly demanding that intermediaries block, delist, or remove online content that they deem undesirable or unlawful. Such content includes political criticism and dissent, hate speech, defamation, and content that may violate the privacy or copyright protections of a given country (but not others). In this way, governments and other third parties are increasingly trying to make intermediaries legally responsible for their users’ speech and activities. This is effectively a form of censorship, treating the intermediary as a proxy for the speaker, and imposing huge burdens and restrictions on the free and open exchange of information online.

WIII will aim to generate broader awareness and research around this subject — in part, through the introduction of a dedicated research fellowship position at Yale Law School. Initially, the Research Fellow will focus on advocating the “right to link” and understanding link censorship laws and litigation. The Research Fellow will also conduct broader research related to intermediary liability, organize academic events, foster collaboration and cross-pollination of ideas to protect intermediary liability, support the development of creative legal and policy solutions to the issue, and lead other activities to advance the core goals of the initiative. Applications for the Research Fellow position are being considered on a rolling basis; application requirements are available on the ISP website.

WIII grew out of an ongoing academic affiliation and collaboration between Yale Law School and the Wikimedia Foundation and is made possible, in part, by funding from the Wikimedia Foundation. Given Wikimedia’s mission to build a world in which everyone can freely share in knowledge, one of the Foundation’s fundamental activities is to directly contribute to and participate in the research and educational mission of Yale Law School and other institutions of higher education that support free and open internet principles and free access to knowledge. Yale Law School students and faculty in particular, along with members of the Wikimedia Foundation, have participated in symposia, presentations, and conferences hosted by either the ISP or the Foundation. Yale Law School students and recent graduates have held internships and fellowships at the Wikimedia Foundation, and Yale Law School researchers have engaged in research with the assistance of Foundation staff.

Eileen Hershenov, General Counsel
Zhou Zhou, Legal Counsel
Wikimedia Foundation

For more information, see Yale Law School’s press release.

from Wikimedia Blog

The metamorphosis of Wiki Loves Butterfly



Photo by Sayan Sanyal, CC BY-SA 4.0.

“Butterflies have always enthralled human eyes, thanks to their exquisite and diverse texture and coloration, beauty, seemingly amazing metamorphosis, and carefree flight,” says Ananya Mondal, who goes by Atudu on Wikimedia projects.

Mondal, who by profession is a clinical nutritionist, is certainly not immune to this phenomenon. The large diversity and variety of butterfly species in her home region of West Bengal, India, has fueled such an interest in them that she is now known as the “Butterfly Wikimedian.” Still, her hunger for more knowledge was not met by Wikipedia at the time. She had several questions—like just how many species of butterflies are there in West Bengal, and if there were species yet to be uncovered—that she wanted answers to.

She decided to take matters into her own hands by adding one more question: could there be better documentation of these creatures on Wikimedia sites, particularly on her native Bengali Wikipedia?

Photo by Sayan Sanyal, CC BY-SA 4.0.

As it turned out, Mondal was able to answer that for herself. She created Wiki Loves Butterfly, a two-year effort to improve Wikimedia’s coverage of butterflies in West Bengal. Along with her co-leader Sandip Das, Mondal drew up copious documentation to support the project before it launched in March 2016, including lists of subject bibliographies, lepidopterists, and prime butterfly-spotting locations in the region, in addition to a list of amateur and professional photographers active in the subject area.

“Of note,” Mondal says, “I included these people, with widely differing fields and areas of expertise, as edit-a-thon and wiki contributors. I then used that to initiate an extensive Wikimedia outreach program.”

Photo by Tamaghna Sengupta, CC BY-SA 3.0.

With these individuals, along with several Wikimedians and students, Mondal went out into the field. “I followed the activities of people involved to get suggestions for best practices,” she says, and accomplished several other tasks:

  • Traveled and documented those butterfly hotspots
  • Collaborated with subject matter experts to identify pictured species
  • Solicited help from local guides and photographers
  • Created Wikipedia articles
  • Studied butterfly morphology, habitats, behavioral aspects, host plants, life cycle, and more

The first part of Wiki Loves Butterfly ran from March 2016 to June of this year and saw ten new users, over a hundred new articles, and 650 images, nearly half of which are used on Wikimedia projects and nearly fifty of which have been noted by the community for their quality. In all, 163 individual species have been photographed, 30 of which previously had no photograph on Wikimedia Commons and 13 of which had no article on the English Wikipedia.

The second part of the project is running from now until March 2018, and you can join.

Ed Erhart, Editorial Associate, Communications
Wikimedia Foundation

from Wikimedia Blog