Copilot & Co Tutorial

What About AI Coding Assistants Based on Generative AI?

Disclaimer: This tutorial introduces a few use cases of AI coding assistants powered by generative AI. It also highlights their benefits, such as boosting productivity and improving development practices, while pointing out their limitations.

GitHub Copilot, available to students through the GitHub Education offer, is used here as an example. However, you can follow this tutorial with other AI coding assistants such as Codeium, Cursor, Supermaven, or Tabnine, which offer similar features. We'll use the term AI coding assistant throughout the tutorial.

This tutorial was written in the fall of 2024 to give second-year Computer Science students in the BUT program an opportunity to experiment with AI coding assistants, at a time when these generative AI tools were rapidly emerging. Since then, these tools have evolved quickly and now produce fewer hallucinations than they did at the time of writing.

In this tutorial, you will learn how to:

You will also find some Useful Resources that were consulted while writing this tutorial.

This tutorial comes with a reflective questionnaire.
Feel free to download and fill out this form as you go through the tutorial.

To get started, go to the Installation section if you need to set up GitHub Copilot in your favorite IDE. If it is already installed, you can begin directly with Getting Started with the assistant, using your preferred AI coding assistant.

About Pedagogical Strategies

This tutorial is designed to help students develop both their technical skills and critical thinking when working with an AI coding assistant. It offers an opportunity to explore, reflect, and grow through the use of AI.

To develop students' critical thinking, this pedagogical approach relies on a three-step, iterative and incremental learning process:

  1. Experiment
    Explore and test the assistant by following the guided tutorial. Observe its behavior, discover its capabilities… and recognize its limitations.
    Learners interact with the tool by observing its behavior and discovering its strengths and weaknesses through hands-on practice.
    Several scenarios were specifically designed to highlight its limitations, particularly by attempting to trigger hallucinations (incorrect or misleading responses generated by the language models at the core of generative AI). Due to the tool’s probabilistic nature, such hallucinations may not appear consistently, which is why they are explicitly explained in the tutorial.

  2. Analyze the AI-generated output
    Take a step back using a reflective questionnaire. Think about what worked, what didn't, and why. Use your own expertise to interpret the assistant's suggestions.
    This is a time for guided reflection, supported by a questionnaire, explanations provided in the tutorial, and the activation of one's own expertise.

  3. Adjust the AI's suggestions
    Refine your prompts, correct errors, and make informed decisions.
    Finally, students are invited to revise, adapt, and refine the assistant's suggestions by exercising their own judgment. This is where autonomy comes into play, as learners take an active role in their own learning process.

This approach is guided by three key principles: progressive integration of the tool, critical validation of its outputs, and fostered autonomy, developed by adjusting suggestions, reflecting on decisions, and learning to use AI tools thoughtfully and effectively.

Research Publication

This educational experience led to a research paper presented at ITiCSE 2025:
Developing Critical Thinking with AI Coding Assistants: An Educational Experience Focusing on Testing and Legacy Code.

Want to Discuss?

For questions or feedback, join the conversation on the GitHub Issues page.

To suggest changes or contribute, feel free to open a pull request.

License

This document is licensed under CC BY-NC-SA: Creative Commons Attribution – NonCommercial – ShareAlike.

Learn more about Creative Commons licenses