Let us begin by noting that "since a 'universal' evaluation tool does not exist, every trainer must develop their own evaluation tool. Nevertheless, there is no need to reinvent the wheel" (e-MEL 2017, p. 3). Below we offer some examples you can draw on to create your own tools.
As you know, in a rubric you can adopt two methods, qualitative and quantitative assessment. The first example is a rubric for a video game design activity (Table 3), which assesses students on:
– their knowledge and understanding of diversity issues in video games as well as video game genres (qualitative assessment)
– their inquiry-and-analysis skills with regard to two specific aspects: a) the ways in which diversity is represented; b) how commercial pressures and medium/genre characteristics influence meaning production (qualitative and quantitative assessment)
– their application of skills and knowledge with regard to the use of genre elements and how these are used to communicate students’ understanding and analysis (qualitative and quantitative assessment).
Table 3 – Rubric example no. 1 (video game design)
The full lesson plan can be downloaded here: https://mediasmarts.ca/sites/default/files/pdfs/lesson-plan/Lesson_First_Person.pdf
2. The second example is a rubric section that can be used to assess learning in a video production activity (Table 4).
Like rubrics, pre-tests and post-tests are traditional measures of learning. The aim of a pre-test is to assess students’ prior knowledge of media literacy issues before instruction and to provide a baseline for better planning the teaching activities. Prior knowledge can also be assessed in a less structured way, for example through a brainstorming session. The post-test determines whether students have improved their understanding and application of the core concepts of media literacy through the analytical and creative activities planned in the lesson.
The set of questions could cover the following points:
1. Students’ knowledge of the media and of how they construct reality. For example: Do advertisements always show people and things as they are in real life?
2. Students’ ability to apply media literacy concepts to different types of content. For example: Who is the target audience of this cartoon?
3. Students’ understanding of the systems through which media messages are constructed and distributed. For example: Television programs seem free, but who pays for them in the end? How do commercial broadcasters make a profit? How does this differ from public service broadcasting?
4. Students’ understanding of their own relationship with the media and of how the media influence their lives. For example: How important is it to me to wear what my favorite social media influencer suggests?
This combination of questions measures both content knowledge and process-related skills. You can keep the results of the pre-tests (or of the brainstorming discussions) and compare them with the post-tests in order to assess students’ progress.
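As a minimal sketch of how such a pre/post comparison might be tallied, the few lines below compute per-student and class-average gains; the student names and scores are invented for illustration:

```python
# Hypothetical pre- and post-test scores out of 10; names and values are invented.
pre = {"Anna": 4, "Ben": 6, "Chloe": 5}
post = {"Anna": 7, "Ben": 8, "Chloe": 6}

# Per-student gain and the class-average gain.
gains = {name: post[name] - pre[name] for name in pre}
avg_gain = sum(gains.values()) / len(gains)

print(gains)     # → {'Anna': 3, 'Ben': 2, 'Chloe': 1}
print(avg_gain)  # → 2.0
```

Even this simple per-student view shows more than a single class average: it flags which students made little progress and may need extra support.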
Students’ self-assessment can be a powerful method not only to evaluate learning but to actually improve it. That is the case, for example, in Jason Deehan’s experience (2016). Interested in shifting the focus from results to process (that is, how something was learned rather than what was learned), he decided to create a rubric for students to use as self-assessment, hoping this would improve their learning. Usually, he would ask students to write papers and then grade them, recording on a Google Sheet all the extra comments and concerns he had about each student’s writing. Going over these records, he noticed that certain students, while getting good grades, kept making the same kinds of errors in every writing assignment. “Clearly they were not growing as writers”, he comments. That is when self-assessment emerged as a possible way out of this impasse and a means to build more reflection and metacognition competences into his students’ writing process.
Here is how he describes the activity:
“We just finished watching (a heavily censored version of) the film 12 Years a Slave and afterwards wrote a POV letter from one of the characters to another. (Point-of-view, or POV, letters are a kind of assignment where students are asked to write a text, such as a letter, an essay, or a public announcement, taking someone’s point of view and arguing for it.) I had the students pull up their letters on their Google Drive.
On the screen, I projected the rubric I created for this particular assignment.
I showed the rubric to the students and we discussed the project and what my expectations were. Then, I showed the students a breakdown of the rubric – a simplified listing of my expectations. These were broken down into 3 main criteria: Organization, Conventions and Ideas.
Then, we went through each part of these criteria while students followed along in their letters.
We started with the most basic of the criteria (Organization): in the rubric I asked for a three-paragraph letter. So, I had students count their paragraphs. Did they have three paragraphs? If so, they gave themselves a score out of 10.
Next, my rubric asked for each paragraph to perform a particular function. For instance:
Paragraph 1 was a summary of the life of one of the film characters (the hero of 12 Years a Slave, Solomon Northup).
Paragraph 2 was an overview of the emotions (both positive and negative) that Solomon felt about his first owner, Mr Ford.
Paragraph 3 was a persuasive paragraph challenging Mr Ford’s beliefs about slavery and asking him to give up slavery and join the abolitionist movement.
I asked the students to review their paragraphs and determine if each of their 3 paragraphs focused on the above three objectives. If so, they gave themselves a score out of 10 for each paragraph.
This was the end of criterion 1, so students calculated the average and came up with a total out of 10.
We did the same for Conventions (looking at spelling, punctuation, capitalization, etc. – scored out of 5) and Ideas (focusing on the evidence students provided to support their ideas – scored out of 10).
At the end, the students had a total score out of 25 for their letter.
Now, while the students were assessing their own work, they were also correcting and changing it.
And, some of you might feel like this is cheating. After all, we were supposed to be grading a final version of a project.
That’s not how I see it. I did not tell students what changes to make in their letter. I simply showed them what the expectations were. They reviewed their work and they determined whether or not they met the expectations. And, if they did not, they made the necessary changes.
This self-assessment process was about improving writing. It was about learning. And the students certainly learned and improved their work.
Self-assessment is a powerful tool that triggers some deep thinking. Labeled Evaluation, the ability to think critically about your own work sits at the top of Bloom’s Taxonomy.
Self-assessment helped me learn as well. I learned that my rubrics, although seemingly clear, probably need to be simplified and then clearly explained before we begin a project. I also need to build a middle step into the process – a step like the one above – where students can review and edit their work.
This experience reminded me of a quote about the writing craft: “Nothing good is ever written. Everything great is rewritten.”
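The arithmetic of the rubric described above (Organization out of 10, Conventions out of 5, Ideas out of 10, for a total out of 25) can be sketched in a few lines; the function names and sample scores are illustrative, not part of Deehan’s actual tool:

```python
# Sketch of the rubric tally described above; sample scores are invented.

def organization_score(paragraph_scores, count_score):
    """Average the paragraph-count check and the three per-paragraph scores (each out of 10)."""
    scores = [count_score] + list(paragraph_scores)
    return sum(scores) / len(scores)

def total_score(organization, conventions, ideas):
    """Organization (/10) + Conventions (/5) + Ideas (/10) = total out of 25."""
    return organization + conventions + ideas

org = organization_score([9, 8, 10], count_score=10)  # three paragraphs present
print(total_score(org, conventions=4, ideas=8))       # → 21.25
```

The point of making the tally this explicit is the same as Deehan’s: students can see exactly which criterion costs them points, and revise before the total is final.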
************
Another example of self-assessment tools is the set of grids developed by Maria Ranieri in the toolkit Digital and Media Literacy Education (2013). (The toolkit is available in Italian, English, German and Romanian: http://virtualstages.eu/it/download/)
One of the units in the toolkit deals with privacy issues. In one activity, students have to create a glossary of terms and then self-assess their learning using a grid (Table 5). In another activity, they have to self-assess the creation of an online privacy guide for users (Table 6). Moreover, since all the activities include group work, these grids can be adapted for peer assessment, as we suggest in the next paragraph.
Peer assessment can be particularly useful in the case of group work. As you have probably experienced, in this kind of work there is sometimes a tendency to assess the entire group as a whole, perhaps too enthusiastically. Yet it is important to find a balance between individual and group assessment, as the latter can obviously encourage freeloading. Of course, all students, in principle, contribute to the success of a project in many diverse ways; but we also know that, in practice, if less motivated students know their grade will be the same as that of the highly motivated members of their group, they might simply decide to work less. As Gibbs (2010) acutely points out, ‘Allocating a single group mark to all members of a group rarely leads to appropriate student learning behaviour, frequently leads to freeloading, and so the potential learning benefits of group work are likely to be lost, and in addition students may, quite reasonably, perceive their marks as unfair.’ Peer assessment can help tackle these problems: ultimately, when considering an individual’s contribution to a group task, the only people who know what the respective contributions have been are the members of the group themselves. In a way, group work ‘naturally’ lends itself to peer assessment. You could, for example, require students to keep a collaborative project logbook, a blog, or some form of portfolio that allows each student to demonstrate his or her individual performance within the group.
It is also worth thinking about how you collect peer assessments. You can do this anonymously (which reduces students’ anxiety about evaluating one another) or as part of an open discussion (which gives students the opportunity to defend themselves). The approach you decide to adopt will invariably depend on your knowledge of the students, the size of the cohort, and the students’ previous experience with group work and peer assessment.
Table 7 gives an example of a worksheet you can hand out to your students to help them assess their peers’ group work. Once they have returned their sheets, you can integrate the results of the peer assessment with your own.
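One possible way to integrate the two sets of marks (not prescribed by the worksheet itself) is a weighted average of the teacher’s mark and the peers’ average; the weights and marks below are arbitrary, chosen only to illustrate the idea:

```python
# Illustrative only: blend a teacher mark with peer marks using an arbitrary weight.

def blended_mark(teacher_mark, peer_marks, peer_weight=0.3):
    """Weighted average of the teacher's mark and the mean of the peer marks."""
    peer_avg = sum(peer_marks) / len(peer_marks)
    return (1 - peer_weight) * teacher_mark + peer_weight * peer_avg

print(blended_mark(16, [14, 18, 15]))  # out of 20; roughly 15.9
```

Keeping the peer weight modest preserves the teacher’s judgement as the main component while still letting the group’s insider knowledge of individual contributions affect the final mark.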
Deehan, J. (2016). Self-Assessment: A Powerful Tool to Improve Student Learning and Understanding. https://www.edutopia.org/discussion/self-assessment-powerful-tool-improve-student-learning-and-understanding
e-MEL (e-Media Education Lab) (2017). Evaluation Toolkit. https://e-mediaeducationlab.eu/en/evaluation-toolkit/
Gibbs, G. (2010). The Assessment of Group Work: Lessons from the Literature. https://www.plymouth.ac.uk/uploads/production/document/path/2/2425/Assessing_Group_Work_Gibbs.pdf
Ranieri, M. (2013). A Toolkit for Digital & Media Literacy Education. http://virtualstages.eu/it/download/