
User:Jiayingchi.lucie


Wikipedia Advising Report


Dear colleagues at the Wikimedia Foundation,

I am honored to have the opportunity to help the Wikipedia community become a more convenient, practical, and welcoming community for its users. Wikipedia is a widely used encyclopedia that covers information across subjects and relies heavily on volunteer human contributors, so the decision to incorporate Generative AI into Wikipedia should be made carefully. In my view, I support using Generative AI, but I would also like to add moderation and regulatory policies for its use. In the following paragraphs, I provide two detailed and actionable suggestions and one set of concerns regarding Generative AI incorporation, and I hope this advice offers my colleagues a new perspective on what we should consider doing and changing.

First of all, introducing Generative AI to Wikipedia is good news for users and can benefit them in multiple ways. To improve the user experience, one suggestion is to have the AI serve as a tutorial and educational assistant. Acting as a teaching assistant, the AI can help newcomers become familiar with Wikipedia pages and toolbars in a short time. It can guide newcomers in navigating the site, for example showing how to edit an article and how to use the talk page to communicate with other users. As we learned in class discussions, a major challenge for newcomers is the nervousness of using an unfamiliar website and struggling to figure out how it works; once they become familiar with the online community, start to use it, and gradually develop bonds with the community, they are more likely to stay. Integrating an AI guide would effectively make that first step easier: newcomers could quickly join the community without having to struggle.

Moreover, having the AI act as a regulatory service (for educational purposes) that holds all of the rules and regulations can give users a chatbot to ask questions of, or a site manager that can easily detect issues and advise users on rules such as copyright and proper citation. Wikipedians can more easily take on the challenging discipline of researching articles, finding references, and making connections, all with the help of AI. And whenever a user makes a correction or contributes to an article, the generative AI can post a pop-up compliment or a thumbs-up emoji on their user page. These emotional and visual stimuli can increase engagement for both newcomers and longtime Wikipedians by making participants' experience more positive and more joyful!

Secondly, incorporating AI into Wikipedia can improve the daily user experience and help Wikipedia moderators, creating a more streamlined moderation system and a safer communication environment. Multi-layered filtering systems, such as Disqus software and other AI filters, with occasional human oversight can save time and make moderation more efficient. In fact, using AI in these matters is not only efficient but also more thorough than having everything monitored by hardworking Wikipedians. Most edits are minor additions to existing pages and can be tedious to look through. Adding a bot to go through each edit can help prevent oversights and free up more time for moderators to actually correct the things that need to be changed. It can also help flag suspicious editing and offensive comments (on the talk page). In addition, AI need not be a simple program that can be sidestepped; it is a learning system that can improve over time, becoming increasingly thorough and accurate. There will always be a human aspect to Wikipedia, but how we contribute can be adjusted and improved. By adding AI into the mix, we can create a safer, more focused learning platform for all users.

Thirdly, here are some concerns regarding AI that should be addressed. First, there is the pressing concern about privacy and security online. Incorporating AI into Wikipedia could essentially give it access to users' login and security information. This can be mitigated by limiting AI incorporation to only some areas of Wikipedia, i.e., excluding the private data of the community. Second, the accuracy of AI-generated responses and text can be unpredictable. It is common for AI to produce long-winded sentences, incorrect information, or incorrect citations. We can address this by keeping human moderators involved and by constantly seeking to improve the prompts we use to ask the AI questions. For example, we could have an AI page configured with specificity and accuracy in mind, or offer preset prompts that can be selected to limit error.

Last but not least, I would like to explain why the two suggestions in the first paragraphs should be considered. The suggestion to incorporate a supportive AI to help moderate the Wikipedia community comes from the success of the "NoSleep" community. The "NoSleep" community set a good example of retaining users and maintaining high-quality contributions by having active moderators and a technological system that mitigated norm violations. A generative AI system is not only for creating content but also for assisting our Wikipedia community in moderating itself. Moreover, having easy access to answers (an AI helper) is helpful for newcomers, and a welcoming atmosphere can be effective for all users' retention. Drawing on the NPR system and adapting it slightly to our Wikipedia case, I believe letting AI take on some tedious jobs, such as going through minor edits, can help prevent oversights and create more time for human moderators and editors to do more meaningful tasks. Having learned from these examples of successful online communities, I hope we can think about the above advice and consider making some changes.

Thank you for reading my advising report,

Jiaying Chi (Lucie)