Sat, 27 Feb 1999

In search of a better English teaching system

By Setiono

JAKARTA (JP): The shift of emphasis in English-language teaching in the country from the structurally oriented approach to the communicative approach has unfortunately not had significant implications for communicative language testing.

The central concern up to now has been the improvement of communicative language teaching. Attempts have been made to gear teaching objectives toward the achievement of communicative competence. The curriculum has been designed in such a way that it places a high priority on language functions in the form of dialog (see Basic Course Outline 1994) as one of the basic premises of communicative language teaching. Furthermore, the sequencing of teaching materials is determined by considerations of content and meaning to maintain and encourage student interest.

Critical suggestions for better English-language teaching have also been put forward. Alwasilah, for example, in his two-part article English-language teaching must be reformed in The Jakarta Post of Dec. 8 and Dec. 9, proposed that the teaching of English as a foreign language (TEFL) be put in the framework of national language planning, and that it involve a political and cultural context. Moreover, in my article Flawed system in language teaching in the Dec. 28 Post, I also suggested that the current 1994 curriculum be revised by taking into account pragmatic aspects.

However, issues related to language testing and the assessment of student language performance in the communicative paradigm have been treated with almost complete indifference. This is due to insufficient knowledge of theoretical testing formulations, weak conceptualization of existing testing theory and the inherent intricacies of communicative testing, which demand arduous efforts to overcome.

This is understandable, as many bureaucrats assigned to the field of teaching, and probably testing, English have no professional background in these fields (see also Alwasilah's article in the Jan. 8 Post).

Testing is an indispensable academic activity that cannot be separated from teaching and learning. A test is a beneficial instrument in academic life because of its potential to reveal so many facets of the learning process.

Tests themselves and their results are of considerable significance not only for teachers and learners, but also for curriculum designers, material developers and researchers. For teachers, a test can provide significant feedback that enables them to construct better tests. A test can also help ascertain which parts of a program the class finds burdensome, and whether the Specific Instructional Objectives (SIO) planned by the teacher have been accomplished. In addition to evaluating student performance, a test can also be used to evaluate and then enhance teachers' instructional effectiveness.

To learners, a test serves as an aid to reinforce and motivate self-understanding and to evaluate their learning strategies. Furthermore, data obtained from a test may also provide information for curriculum designers and material developers, assisting them in revising and redesigning a suitable and comprehensive syllabus, as well as in developing teaching materials from various perspectives.

Finally, a test may offer significant evidence for language researchers in finding out how language is learned or acquired, what strategies or procedures the learner is employing in the discovery of the language, and whether and to what extent a certain teaching approach can be applied effectively and successfully.

Given the mutual interconnectedness of teaching and testing, the crucial question that ought to be addressed in this latest English-language teaching orthodoxy is the extent to which the existing language tests actually measure students' communicative language performance.

The currently employed English-language test suffers from several deficiencies.

The existing test used at the elementary and high school levels, known as the EHB -- basically designed around multiple-choice questions -- seems to have been haphazardly constructed and ignores the concept of validity, specifically face validity. This type of test validity can potentially affect students' test performance.

The problem stated in the stem (the initial part of each multiple-choice item) is not presented clearly and does not convey enough information to indicate the basis on which the correct option should be selected.

Furthermore, the options are sometimes ambiguously written and might permit those taking the test to opt for more than one correct answer. Spelling, grammar and punctuation in both the stem and options are sometimes inaccurate.

Finally, multiple-choice questions exclude the importance of context. Decontextualized test design, particularly when the purpose is to assess students' communicative language ability, might convey the impression that language is being measured artificially and could ultimately provoke ambiguity and confusion.

A poorly constructed test may not be accepted and used by the candidates who have to take it; even if it is used, the candidates' reaction to it may mean their performance is not indicative of their true ability.

The tests are still aimed at measuring students' mastery of language usage (competence) rather than language use (performance). Instead of assessing students' productive skills, the existing tests are constructed as multiple-choice questions, fill-in-the-blank questions and matching questions. Of these three types, it is multiple-choice questions that are most widely used, despite increasing criticism against them.

The legitimacy of this test type in measuring students' language performance is in fact being questioned and even challenged. It is argued that although the test type can claim an extremely high standard of reliability and concurrent validity, its claim to other types of validity remains suspect.

Moreover, multiple-choice questions never test the ability to communicate in the target language, nor do they evaluate actual performance, so there is no guarantee that the score reflects students' genuine language ability.

A further criticism of the use of the multiple-choice format is voiced by Weir (1990), who argues it presents choices that otherwise might not have been thought of; if a divergent view of the world is taken, it might be argued there is sometimes more than one right answer to some questions, particularly at the inferential level. Thus, what the test constructor has inferred as the correct answer might not be what other readers infer.

Even the Test of English as a Foreign Language (TOEFL) -- which is used as an instrument to determine whether someone's English ability is adequate for university study in the U.S. -- cannot be claimed to measure one's language performance. Bachman (1986), for example, investigated the lack of context associated with many of the TOEFL items and concluded that the majority of the tasks measure only grammatical competence, with only a handful tapping illocutionary competence.

If we wish to provide learners of English with the ability to use the language in an appropriate and purposeful context, and to perform a set of tasks in a foreign language, then tests ought to be designed to reveal not only the learners' knowledge of the form of the language -- the competence mentioned earlier -- but also the extent to which learners are actually able to demonstrate this knowledge in a meaningful communicative situation, or performance.

Knowledge of language rules, or language usage, in fact counts for nothing unless the user is able to combine those rules into connected discourse in new and appropriate ways to meet the linguistic demands of the situation in which he wishes to use the language.

Given the above limitations, there is an urgent need for Indonesian language-testing developers to construct communicative language tests that truly reflect the present communicative paradigm. The following aspects must be given serious attention in any attempts to design communicative tests.

Communicative language tests must be student-centered. Since students typically have considerable experience of their own language learning, it seems only reasonable to take account of their language experiences and their views of what they do and do not know of the language. Future innovation in testing should perhaps pay more attention to the students' own informed views on assessment and on the methods that will enable them to perform to the best of their ability.

Student-centeredness also means recognizing the need to take account of the students' self-evaluation.

Communicative language tests should be as direct as possible; that is, when we wish to assess a particular skill, we should assess it in terms of what the student can actually do with that skill. For instance, if we wished to know a student's skill in writing a composition, we would assign such real-life tasks as writing a report, a letter or a memo.

Communicative language tests must take into consideration nonlinguistic elements, such as the learners' cultural background and prior knowledge, because these can potentially affect the students' test performance.

Communicative language tests should attempt to reflect the real-life situation; the tasks candidates have to perform should involve realistic linguistic, situational, cultural and discourse aspects.

Finally, any construction of communicative language tests should take into account the variety of illocutionary acts involved in the tasks.

The writer is a member of the teaching staff in the English department of the School of Education at Atma Jaya University, Jakarta.