Atta Gebril Breaks Down Common Myths About Second Language Assessment
For many students, standardized assessments are the most dreaded exams because of their high-stakes nature and the months of preparation they demand. An exam score can have a lifelong impact on a student’s future, shaping decisions that range from passing a course and qualifying for a degree program to landing a job or even acquiring citizenship.
Given the power tests hold over one’s future and the emotions and trepidation stirred up by them, students often find themselves creating theories on how to get a high score, according to Atta Gebril, associate professor in the Department of Applied Linguistics.
In his recent book, Assessment Myths: Applying Second Language Research to Classroom Teaching, co-authored with Lia Plakans from the University of Iowa, Gebril debunks eight myths that reflect the most common misconceptions about language testing.
Myth 1: Assessments are all about writing tests and using statistics.
Writing tests and analyzing their results are part of a teacher’s role, but many other practices are equally important in understanding language assessment, noted Gebril. “Assessment can take a less formal shape, such as checking in with students and altering lessons based on their progress,” he said. “Other ways are using small group discussions about a course reading to evaluate the students’ development of oral skills; answering scaffolding questions as students complete a task to evaluate how they learn with some mediated support; and implementing short quick writes to inform teachers what a student has learned from a lesson.”
Implementing some of these assessment tactics requires understanding that evaluating a student is more than just administering tests and quizzes. “A wide range of class activities serve the purpose of evaluating students’ understanding, and we need to carefully and systematically consider how we use them to make decisions,” he said.
Myth 2: A comprehensive final exam is the best way to evaluate students.
For Gebril, a comprehensive final exam, which is the traditional way of evaluating students in some countries like Egypt, does not show the full picture. “A final exam could be helpful in terms of deciding who should pass a course, obtain a degree or get admitted to a program,” he noted. “It doesn’t help in terms of learning opportunities. That is why we need to inject formative assessment activities throughout the semester, where diagnostic information is collected and used by instructors to inform their own teaching practices and improve learning activities.”
With formative assessment practices, teachers have the flexibility and option to go beyond the formal test structure and provide different types of assessment tools. “By all means, a comprehensive final exam limits the scope and content teachers can cover and the types of tasks they can use,” he said.
Myth 3: Scores on performance assessments are preferable because of their accuracy and authenticity.
While a performance assessment is a valuable approach to test language ability, strategic planning is needed to carefully assess performance. Scoring rubrics or scales, for example, are useful tools that can help teachers accurately capture language ability.
What’s useful about integrating creative approaches in exam scoring is that it creates positive “washback,” or impact, on classroom teaching and practice, noted Gebril. “Students become more engaged and aware of the standards for language proficiency, and it encourages them to work hard to reach a certain level.”
Myth 4: Multiple choice exams are inaccurate measures of language, but are easy to write.
Multiple choice exams are good in terms of covering different areas from a textbook, Gebril pointed out. However, in order to tap into higher-order thinking skills, multiple choice questions (MCQs) need to be carefully written and supplemented with other testing techniques. “MCQs tap into language knowledge rather than language performance, so teachers should give writing and oral tasks where students can show their performance.”
A key obstacle to including writing and oral exercises is the lack of resources in some schools. “If teachers have a large number of test-takers, they’ll need time to allow students to practice their writing and raters to evaluate their performance,” Gebril explained. “One of the advantages of MCQs is that they’re objective items, since we usually have one correct answer. However, writing and oral activities require more than one rater, given their subjective nature.”
Myth 5: We should only test one skill at a time.
In academic course work, students are often required to use more than one skill to complete an assignment like conducting research, finding references and synthesizing information, explained Gebril. In most cases, students draw on different sources, especially with writing and speaking. “Testing one skill at a time does not reflect the reality of how students learn,” Gebril said. “That is why it is not an effective assessment strategy.”
The TOEFL exam is an example of how different skills are tested by measuring the ability of test-takers to synthesize information from multiple sources. According to Gebril, the purpose of these tasks is to simulate practices in academic contexts. “The good thing about integrated assessment tasks is that they bring positive washback in language classes,” he said. “Instead of teachers focusing on independent writing, students can focus on how to synthesize information from different sources.”
Myth 6: A test’s validity can be determined by looking at it.
Validity is an essential quality of any test. A test’s validity gives teachers, administrators and other users confidence when they rely on its results for particular purposes, noted Gebril. “A test’s validity is helpful because we gain trusted evidence about the student’s language ability, which provides a more accurate profile,” he said.
However, tests have their own constraints; therefore, collecting evidence about a student’s performance on different subjects is always valuable. “Tests are often constrained by time, taken in one sitting on one day, and thus cannot provide evidence for a learner’s ability to improve over time,” explained Gebril. “A portfolio of work from a full semester or term could fill this gap.”
Myth 7: Issues of fairness are not a concern with standardized testing.
A key issue commonly raised about standardized tests is that they’re written in a one-size-fits-all format, noted Gebril. “In some cases, you hear rumors about exams not being fair, which holds some truth, but it’s not entirely true,” he said. “Testing agencies consider fairness when developing the content of the exam and also have review committees to check any content or language-related biases.”
However, equitable access to test preparation lies beyond testing agencies’ scope. For instance, prep courses such as Kaplan and Princeton Review are not affordable for everyone, and students from low-income families are unlikely to have access to these resources. “Issues such as these should be addressed within a social justice framework, not only by testing agencies,” Gebril affirmed.
Myth 8: Teachers should not prepare students for tests.
For Gebril, devoting large amounts of class time to test prep activities doesn’t necessarily lead to meaningful test scores. “Most teachers are pulled in the direction of teaching for the test, while neglecting other learning objectives and activities,” he said. “A more effective strategy is to have activities tapping into learning objectives and improving learners’ abilities, rather than only improving students’ test-taking skills.”
Test preparation is often perceived negatively because it comes at the expense of real learning in class. Certain variables, such as fairness, access and validity, should also be taken into account. “There are alternatives that both teachers and students have,” Gebril said. “Testing agencies provide manuals and test samples for test-takers. In addition, there are countless online resources that are easily accessible and specifically target standardized test preparation.”