
Outreachy report: November 2025

Summary

Program activities

This December 2025 cohort will be very different: it will be the first without our traditional intern chats or bi-weekly assignments. Instead, we will email all interns requesting at least three reports: (1) one about their community bonding activities, perceptions, and expectations for their internships; (2) one about their midpoint progress; and (3) one about their overall progress during the internship, and the results and artifacts they’ll leave for the community.

The matter of communities withdrawing from the contribution period was the one that concerned me the most. Their complaints weren’t new: they couldn’t find skilled applicants, and the applicants who did engage with their communities were relying heavily on LLMs to make low-quality contributions. The communities weren’t ready to deal with such an influx, and they needed time to think about a policy for LLM usage in their repositories and in their participation in programs such as Outreachy.

This issue made me sign up for AI fluency classes and learn about Dakan and Feller’s AI Fluency Framework. I became more familiar with the models our applicants are most likely to use, and with the services offering them (Perplexity, Copilot). I worked through exercises to understand the structure of the outputs most commonly submitted during our contribution period. I also reflected on what makes LLM usage so problematic in mentor-mentee interactions (all moral, environmental, and ethical considerations aside).

I realized that a lot of people see LLMs as a tool that will guide them to a shortcut. A shortcut that will allow them to spend minutes, not hours, on the problems they have to solve. Something that will radically reduce the time they need to spend thinking. And that’s the core of the problem with LLM usage in our contribution period: applicants aren’t engaging with the problem. Applicants aren’t learning about the problem. They may see the contribution period as a monotonous task to be completed, not as an essential step to learn and grow as a professional. They completely miss a learning opportunity.

One interesting thing about Dakan and Feller’s framework is its emphasis on how much LLMs rely on your expertise. They say we have to develop four main competencies to become fluent with AI systems: delegation (understanding what you can delegate to an AI system), description (understanding how to describe the problem and the expected outcome), discernment (evaluating the output for correctness and accuracy), and diligence (being transparent about LLM usage, taking responsibility for and disclosing LLM outputs, and understanding LLM systems, their limitations, and their data access policies). I would say applicants have issues with delegation, discernment, and diligence; description may be well supported by the documentation provided by mentors, coordinators, and other community volunteers. And the less experienced applicants are in the subject matter, the more they will rely on the decisions and assumptions made by LLMs, and the less they will understand about the outputs they’re submitting. One saying has been repeated over and over in conversations about this phenomenon: “A fool with a tool is still a fool.”

We’re reaching out to participating communities to understand the impact of LLM usage during this contribution period. We’re looking forward to hearing from communities with an established AI policy; they seemed to have very few issues compared to the communities that didn’t have one ready. We hope both to assist communities in developing an AI policy aligned with their strategies and goals and to improve our messaging around LLMs for the next cohort. We’re also considering including those discussions in the Open Mentorship Handbook we’re creating.