Introduction

As of today, June 5th, 2023, a large number of moderators, curators, contributors, and users from around Stack Overflow and the Stack Exchange network are initiating a general moderation strike. This strike is in protest of recent and upcoming changes to policy and the platform being made by Stack Exchange, Inc.1
We have posted an open letter addressed to Stack Exchange, Inc. The letter details which actions striking members are withholding, the main concerns of the signatories, and the concrete actions that Stack Exchange, Inc. needs to take to begin resolving the situation. Striking community members will refrain from moderating and curating content, including casting flags, and critical community-driven anti-spam and quality control infrastructure will be shut down.

However, the letter itself cannot contain all of our concerns, and in the interest of brevity, some background and details were left out of it. We felt it was important to share them here. We also want to touch on several related points concerning Stack Exchange, Inc.’s recent behavior.

Background

A history of the Artificial Intelligence policy

On December 5th, 2022, Stack Overflow moderators instituted a “temporary policy” banning the use of ChatGPT in particular on the site. This was instituted due to the general inaccuracy of ChatGPT’s answers, as well as the fact that such posts violate Stack Overflow’s referencing requirements. The moderator team monitored community feedback to guide the policy, which received overwhelming support. Similar policies were enacted across the network.

Within the next several days, thousands of posts were removed and hundreds of users were suspended for violating this policy.

Over the next few months, Stack Exchange, Inc. staff assisted in the enforcement of this policy. This included adding a site banner announcing the ban on these posts as well as editing and adding Help Center articles to mention this policy. Moderators were also explicitly given permission to suspend for 30 days directly in such cases, skipping the escalation process that is generally encouraged.

On May 29th, 2023 (a major public holiday in the US, Canada, the UK, and possibly other locations), a Community Manager (CM) made a post on the private Stack Moderators Team2. This post, with a title mentioning “GPT detectors”, focused on the high rate of inaccuracy of automated detectors aiming to identify AI-generated, and specifically GPT-generated, content – something that moderators were already well aware of and taking into account.

This post then went on to require an immediate cessation of issuing suspensions for AI-generated content and to stop moderating AI-generated content on that basis alone, affording only one exceptionally rare case in which it was permissible to delete or suspend for AI content. It was received extremely poorly by the moderators, with many concerns being raised about the harm it would do.

On May 30th, 2023, a version of this policy was posted to Meta Stack Exchange and tagged such that it became a binding moderator policy according to the Moderator Agreement. The policy on Meta Stack Exchange differs substantially from the version issued in private to the moderators. In particular, the public version of the policy conspicuously excludes the “requirements”, made in private, to immediately cease practically all moderation of AI-generated content.

The problem with the new policy on AI-generated content

The new policy, establishing that AI-generated content is de facto allowed on the network, is harmful in both what it allows on the platform and in how it was implemented.

The new policy overrode established community consensus and previous CM support, was not discussed with any community members, was presented misleadingly to moderators and then even more misleadingly in public, and is based on unsubstantiated claims derived from unreviewed and unreviewable data analysis. Moderators are expected to enforce the policy as it is written in private, while simultaneously being unable to share the specifics of this policy as it differs from the public version.

In addition to these issues in how Stack Exchange, Inc. went about implementing this policy, this change has direct, harmful ramifications for the platform, with many people firmly believing that allowing such AI-generated content masquerading as user-generated content will, over time, drive the value of the sites to zero.

A serious failure to communicate

Throughout the process of creating, announcing, and implementing this new policy, there has been a consistent failure to communicate on the part of Stack Exchange, Inc. There has been a lack of communication with moderators and a lack of communication with the community. When communication happened, it was one-sided, with Stack Exchange, Inc. being unwilling to receive critical feedback.

An offer by Philippe, the Vice President of Community, to hold a discussion in the Teachers’ Lounge moderator-only chatroom took days to materialize. During that conversation, certain concerns were addressed3, but the difficult questions remained unanswered, particularly those about the lack of advance communication.

The problem with AI-generated content

This issue has been talked about endlessly, both all around the Stack Exchange network and around the world, but we feel it’s important to highlight a few reasons why several communities, not just Stack Overflow, decided to ban AI-generated content. These reasons serve as the backbone not only of our moderation stance against AI-generated content, but also of our confusion and sense of betrayal at Stack Exchange, Inc.’s sudden decision to halt our efforts to enforce our community-supported ban.

To reference Stack Overflow moderator Machavity, AI chatbots are like parrots. ChatGPT, for example, doesn’t understand the responses it gives you; it simply associates a given prompt with information it has access to and regurgitates plausible-sounding sentences. It has no way to verify that the responses it’s providing you with are accurate. ChatGPT is not a writer, a programmer, a scientist, a physicist, or any other kind of expert our network of sites is dependent upon for high-value content. When prompted, it’s just stringing together words based on the information it was trained with. It does not understand what it’s saying. That lack of understanding yields unverified information presented in a way that sounds smart or citations that may not support the claims, if the citations aren’t wholly fictitious. Furthermore, the ease with which a user can simply copy and paste an AI-generated response simply moves the metaphorical “parrot” from the chatbot to the user. They don’t really understand what they’ve just copied and presented as an answer to a question.

Content posted without innate domain understanding, but written in a “smart”-sounding way, is dangerous to the integrity of the Stack Exchange network’s goal: to be a repository of high-quality question-and-answer content.

AI-generated responses also represent a serious honesty issue. Submitting AI-generated content without attribution to the source of the content, as is common in such a scenario, is plagiarism. This makes AI-generated content eligible for deletion per the Stack Exchange Code of Conduct and rules on referencing. However, in order for moderators to act upon that, they must identify it as AI-generated content, which the private AI-generated content policy limits to extremely narrow circumstances that apply to only a very small percentage of the AI-generated content posted to the sites.

This isn’t just about the new AI policy

While a primary focus of the strike is the potential for the total loss of usefulness of the Stack Exchange platform caused by allowing AI-generated content to be posted by users, the strike is also in large part about a pattern of behavior recently exhibited by Stack Exchange, Inc.

The company has once again ignored the needs and established consensus of its community, instead focusing on business pivots at the expense of its own Community Managers, with many community requests for improved tooling and a better user experience left on the back burner. As an example, chat, one of the most essential tools for moderators and curators, is desperately out of date, with quick, high-impact changes being ignored for years.

Furthermore, the company has repeatedly announced changes that moderators believe would cause direct harm to the goals of the platform, this policy on AI-generated content among them. The community, including moderators and the general contributor base, was neither consulted nor asked for input at any point before these changes were announced, and the announcements were phrased in a manner that indicated there was no possibility of retraction or even a trial period.

Some of these planned changes have been temporarily put on hold due to controversy, with this strike influencing those decisions, but that does not change the recent tendency of Stack Exchange, Inc. to make decisions affecting the core purpose of the sites without consulting those most affected.

The events of the last few weeks feel like history repeating itself: Stack Exchange, Inc. ventures into a new pursuit (this time, generative AI) against the community’s interests, makes a decision at odds with all the feedback available to it, ceases communication with us, and we go on strike. This closely mirrors what happened the last time community moderators prepared to go on strike.

How we resolve this

Even if the strike ends, many community members are not comfortable returning to the pre-AI-policy status quo if nothing else changes. The strike’s focus on the AI policy does not downplay the significance of Stack Exchange, Inc.’s other actions; we deserve much more than a mere retraction of the AI policy. Stack Exchange already made promises after the 2019 debacle that it has since failed to keep, and we worry that it will continue down the same path once the situation calms down.

While the recent actions by Stack Exchange, Inc. are in conflict with the community and represent a significant step backward in the relationship between the company and the community, we do not think that relationship is beyond repair. We do, however, worry that we are nearing the point at which it can no longer be repaired.

While it may well be true that the company wants to meet our needs and care for us, the reality is that this is not happening. It is time for the company to wake up and realize what must be done: Stack Exchange, Inc. is not acting in our interest, and it must start doing so.

What the striking users want

For the strike to end, the following conditions must be met:

  • Retraction of the AI policy change, and its subsequent revision to a degree that addresses the expressed concerns and empowers moderators to enforce the established policy of forbidding generated content on the platform.
  • Revelation to the community of the internal AI policy given directly to moderators. Issuing one policy in private and a significantly different one in public has put the moderators in an impossible situation and made them targets for accusations of being unreasonable and of exaggerating the effect of the new policy. Stack Exchange, Inc. has harmed the moderators by the way this was handled; the company needs to admit its mistake and be open about it.
  • Clear and open communication from Stack Exchange, Inc. regarding establishing and changing policies or major components of the platform with extensive and meaningful public discussion beforehand.
  • Honest and clear communication from Stack Exchange, Inc. about the way forward.
  • Collaboration with the community, instead of fighting it.
  • An end to dishonesty about the company’s relationship with the community.

A change in leadership philosophy toward the community

We need business leadership to genuinely engage with community members and Community Managers, because currently, leadership appears to ignore them.

Immediate financial concerns appear to drive feature development. The community also has feature development wants and needs, but those needs are given no substantial consideration, let alone resource allocation. The little weight leadership gives to the community and CMs even leads to reckless and harmful business decisions, like the AI policy.

Leadership needs a change in philosophy: one that treats the community as more than a product and values its needs and expertise. Such a philosophy is evidently missing at present, and leadership takes the expertise behind its own product for granted. Leadership needs to embody this philosophy by actually allocating resources based on community needs as well as its own, and by informing feature development with the community’s expertise. Development can be guided by both business and community needs!

In conclusion

The sites on the Stack Exchange network are kept running smoothly by countless hours of unpaid volunteer work, and, in some cases, projects paid for out of pocket by community members. Stack Exchange, Inc. needs to remember that neglecting and mistreating these volunteers can only lead to a decrease in the goodwill and motivation of those contributing to the platform.

A general moderation strike is being held until the concerns laid out in the open letter and this post are addressed. Moderators, curators, contributors, and users, you are welcome to join in by signing your name in the strike letter.

If you would like to sign the open strike letter, but do not have a Stack Overflow account, please reach out to @mousetail (start a new room).


1While we’re aware that the legal name of the company is “Stack Exchange, Inc.”, the name “Stack Overflow” is more recognizable, and thus used in the open letter. The “Inc.” serves to demonstrate that our concerns lie with the corporate entity, and not the site itself, its moderators, or individual employees.

2Stack Exchange, Inc. provides a free Stack Overflow for Teams instance for Stack Exchange moderators, allowing moderators to store and share private information, bug reports, documentation, and communication with SO staff.

3This includes another planned change to the foundational systems of the platform that has the potential to facilitate unprecedented levels of abuse. (This was referred to as “the second shoe” during the planning stages of the letter and the strike, as in “waiting for the other shoe to drop”.) This has been delayed indefinitely while parts of the plan are reconsidered.
