In the interview, participants were asked eleven standard questions. The first three compensated for the questionnaire's limitation mentioned earlier by using probing questions to have teachers explain their choices in more detail. Another question examined the types of supplementary materials teachers used, which partly reflected the subjective factor of personality, namely the preference for particular materials. This question drew partly on Lee and Bathmaker (2007) with respect to "Self-developed Materials," "Past Examination Papers," and "examination papers from other schools," all of which were classified as a priori codes for later analysis.
The next five questions aimed to collect data on the sources of teachers' beliefs. They drew on the theoretical framework of factors affecting teachers' beliefs: experiences, contexts, knowledge, and personalities. For instance, questions four and five elicited how experiences shaped the respondents' beliefs about supplementary materials. Having identified teachers' inability to articulate their beliefs as a confounding factor, the researcher added prompting questions that helped participants recall and connect to their own realities. Finally, questions nine to eleven were intended to elicit how teachers select, design, and adapt supplementary materials in practice, again adhering to the framework identified earlier. To mitigate the risk that teachers report "what should be done rather than what is actually done in class" (Fang, 1996), teachers were asked to provide examples supporting what they had articulated.
The interview questions were adapted from those of Chappell, Bodis, and Jackson (2015), which centered on teachers' cognition and classroom practice in IELTS test preparation courses. In the version used for this research, the question list was shortened and adjusted to match the framework pinpointed earlier. The interview data helped to address research questions two and three, concerning the sources of beliefs and the extent to which teachers' beliefs are congruent with their practice. For data analysis, a priori themes were identified as experience, personality, knowledge, and context (in relation to teachers' beliefs), and selection, adaptation, and design (in relation to supplementary material development). Beyond these given themes, the researcher also looked for any other themes emerging during data collection and analysis.
As "belief" is a multifaceted concept, it requires more than a single measurement method, no matter how optimized or advanced that method is. By combining two different data-collection methods, measurement artifacts can be avoided: the more methods employed, the more profoundly and thoroughly the characteristics of a concept are tapped into (Johnson & Christensen, 2014, p. 293). This idea closely resembles the three purposes Hammersley (2008) has identified: triangulation as validity checking (i.e., drawing data from a different source to reduce the probability of false conclusions), indefinite triangulation (i.e., focusing on divergence in informants' accounts and thus setting validity aside), and triangulation as seeking complementary information (i.e., the conventional idea of combining different methods to exploit their strengths and minimize their weaknesses).
3.6. Pilot Study
3.6.1. The questionnaire
To ensure the validity of the questionnaire, it was first reviewed by two teachers holding master's degrees in the field. Based on these experts' feedback, the questionnaire was revised in terms of the wording of questions and the number of items. Specifically, there were 66 items in total, with the last part of the questionnaire addressing the sources of beliefs. This part was a modified version of a section in Nguyen's (2012) and Cheng's (2018) surveys of teachers' beliefs about teaching CLT in classrooms, and its items covered the four identified factors influencing teachers' beliefs. However, as mentioned previously, this section was omitted because teachers may not be able to evaluate their sources of belief and would thus provide unreliable data.
Although checked by the experts and approved by the supervisor, the questionnaire was also piloted with a limited number of teachers (n = 17). After completing the questionnaire, the respondents were asked to share their opinions about the items in case any language was misleading. This version of the questionnaire contained 40 items measuring four variables: "the sources of supplementary materials," "the concept of supplementary materials," "the reasons to develop supplementary materials," and "the criteria to develop supplementary materials." These variables were coded as "Sources," "Concept," "Reasons," and "Criteria" for the sake of computer input. The results of the pilot were entered into SPSS 26. Although the Cronbach's alpha for the whole questionnaire was high (.885), the alphas of the individual variables varied greatly. "Sources" and "Reasons" obtained only .637 and .662 respectively, which is rather low, as the coefficient alpha should be "greater or equal to .7 for research purposes" (Johnson & Christensen, 2014, p. 246). On the other hand, "Concept" and "Criteria" achieved high coefficient alphas of .749 and .843, respectively. In the second revision of the questionnaire, SPSS indicated that all of the items in "Sources" had low coefficients, while leaving one item out of "Reasons" raised its alpha to .725. Consequently, the official questionnaire contains 31 items.
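The reliability computation described above can be reproduced outside SPSS. The sketch below, in Python with hypothetical pilot data (the actual responses are not public), computes Cronbach's alpha for one subscale together with the "alpha if item deleted" values used to decide which items to drop:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical 4-point Likert responses from a 17-teacher pilot:
# a shared "attitude" component plus item-level noise.
rng = np.random.default_rng(0)
base = rng.integers(1, 5, size=(17, 1))
scores = np.clip(base + rng.integers(-1, 2, size=(17, 6)), 1, 4).astype(float)

alpha = cronbach_alpha(scores)
print(f"alpha for the subscale: {alpha:.3f}")

# "Alpha if item deleted": recompute alpha with each item left out in turn.
for j in range(scores.shape[1]):
    reduced = np.delete(scores, j, axis=1)
    print(f"alpha without item {j + 1}: {cronbach_alpha(reduced):.3f}")
```

Items whose removal pushes the subscale alpha above the .7 threshold, as happened with one item in "Reasons" (.725), are candidates for deletion.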
3.6.2. The interview
Similar to the questionnaire, although the interview questions had been reviewed by the supervisor and updated, they were also given to another teacher holding a master's degree in language teaching to evaluate the language and terminology. After that, a sample interview was conducted with an IELTS teacher. Hardly any issues occurred, and the estimated time for the interview was 20 minutes. The information provided by the teacher was sufficient to analyze in accordance with the a priori themes and codes.
3.7. Data analysis methods
The questionnaires were sent to 146 teachers; however, for the reasons stated previously, the final sample was 131. The responses were imported into a file, and SPSS version 26 was then used to produce descriptive statistics (frequencies for demographic data) and the mean scores of the items in the subscales (concepts, reasons, and criteria regarding developing supplementary materials). Furthermore, the statistical procedure of analysis of variance (ANOVA) was used to inspect differences among the means of teacher groups. Since data in SPSS must be in numeric form, every variable value was assigned a corresponding number; for gender, for instance, males were coded as 1 and females as 2.
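The same descriptive-and-ANOVA workflow can be sketched in Python. The variable names and values below are hypothetical stand-ins for the coded SPSS file, not the study's data:

```python
import pandas as pd
from scipy import stats

# Hypothetical coded questionnaire data: gender coded 1 = male, 2 = female,
# plus a subscale mean score per respondent, mirroring the SPSS setup.
df = pd.DataFrame({
    "gender": [1, 2, 2, 1, 2, 1, 2, 2, 1, 2],
    "experience_group": ["0-2", "3-5", "6+", "0-2", "3-5",
                         "6+", "0-2", "3-5", "6+", "3-5"],
    "criteria_mean": [3.1, 3.4, 2.9, 3.0, 3.6, 2.8, 3.2, 3.5, 2.7, 3.3],
})

# Descriptive statistics: frequencies for demographics, means for subscales.
print(df["gender"].value_counts())
print(df.groupby("experience_group")["criteria_mean"].mean())

# One-way ANOVA: do subscale means differ across experience groups?
groups = [g["criteria_mean"].values for _, g in df.groupby("experience_group")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```

A p-value below .05 would indicate that at least one group mean differs significantly, after which post-hoc comparisons would identify which groups.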
Following the quantitative analysis, data collected from the interviews were analyzed following the process suggested by Creswell and Creswell (2018). First, the raw recordings were transcribed and stored in text files. Each interviewee was given a pseudonym, and a separate list matched the interviewees' real names to their pseudonyms. The transcribed data were then read in full so that the researcher could gain an overall view of what the interviewees had said. The next step was coding all of the data; at this point, a content analysis approach was adopted, with both pre-defined codes and codes emerging from the data (Dawson, 2009). A code is simply defined as "a name" or "a label" given to a piece of text (Cohen et al., 2018). In the present study, coding refers to assigning labels to teachers' responses. Some of the codes guiding the process corresponded with the theoretical framework: Personal experiences as a learner (experience), Knowledge from teacher training courses (knowledge), Other teachers' practices (context), Relevance to the IELTS examination (selection), and Adding more materials (adaptation). The aim of coding is to reduce the data so that findings can be interpreted from the perspective of the research questions. Therefore, teachers' responses were condensed during the coding process. Table 3.2 depicts an example of data condensation.
Table 3.2
Examples of data condensation
Original response | Condensed remark | Code
"If the supplementary materials are immediately relevant to the topics that I teach in classes, they will be a way of consolidating knowledge and offering extra practice and resources." (Teacher P7) | Consolidating knowledge and offering extra practice and resources | Providing extra exercises
"It is important that students can read and find the materials eye-catching. As a result, they can internalize knowledge better." (Teacher P5) | Students can read, eye-catching | Materials' layout
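The relationship between the pre-defined codes and their parent themes can be illustrated with a simple lookup structure. This is an explanatory sketch only; the labels come from the framework above, while the function itself is hypothetical and not part of the actual analysis:

```python
# A priori codes from the theoretical framework, grouped by the
# factor (theme) each one belongs to.
a_priori_codes = {
    "experience": ["Personal experiences as a learner"],
    "knowledge": ["Knowledge from teacher training courses"],
    "context": ["Other teachers' practices"],
    "selection": ["Relevance to the IELTS examination"],
    "adaptation": ["Adding more materials"],
}

def theme_for(code):
    """Return the theme a code belongs to, if it is an a priori code."""
    for theme, codes in a_priori_codes.items():
        if code in codes:
            return theme
    return None  # an emergent code, to be grouped during analysis

print(theme_for("Adding more materials"))  # prints: adaptation
```

Emergent codes, which return no match here, were grouped into themes only after the data had been read in full.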
During the analysis process, NVivo 12, a software program, was used to assist the coding procedure. The program is capable of managing data, managing ideas, visualizing data, and reporting from the data, which is useful for determining the boundaries or characteristics of an investigated issue (Bazeley & Jackson, 2013). Specifically, NVivo 12 helped to categorize data by creating "nodes" that provide storage areas for references to the coded text. These basic nodes were treated as codes, and 34 codes were generated and presented with examples in Appendix E. At higher, more aggregate levels, nodes representing a tendency in respondents' answers were considered themes or subthemes (Woolf & Silver, 2018). Figure 3.4 shows an example of a coding tree representing the codes, subthemes, and themes that emerged in the data analysis.
Figure 3.4
Coding tree representing the codes, subthemes, and themes
The coding tree comprises four themes with their associated codes:
- The notion of supplementary materials in IELTS classrooms
- Purposes of supplementary materials for IELTS classrooms: Providing extra exercises; Compensating for the weaknesses in main coursebooks; Filling the gaps between coursebooks and learners' expectations or the examination; Replacing the inappropriate content in coursebooks
- Benefits of supplementary materials for IELTS classrooms: Raising interests among learners; Improving lexical and grammatical capacity; Bearing a resemblance to the real test; Improving teachers' own language knowledge
- Sources of supplementary materials for IELTS classrooms: IELTS commercial textbooks; Authentic materials; IELTS websites and blogs; Main coursebooks' add-ons; Materials from sources other than IELTS; Materials written by test takers achieving a high score
Then, themes and descriptions emerged as major findings under the research questions. These themes and descriptions were presented in the form of a detailed discussion with subthemes, perspectives from interviewees, and their quotations. Table
3.3 below shows the categorization of themes in relation to the research questions.
Next, data from both phases were correlated and cross-classified to examine their relationship. Finally, the data were compared and merged into a comprehensive whole to determine the main factors affecting teachers' beliefs and the relationships between beliefs and practice.
Table 3.3
The categorization of themes in relation to the research questions
Research question | Theme |
1. What are teachers’ beliefs about developing supplementary materials for IELTS courses? | a. The notion of supplementary materials in IELTS classrooms |
2. What are the factors affecting teachers’ beliefs about developing supplementary materials for IELTS courses? | b. Internal and external sources |
3. How do teachers’ actual classroom practices align with their beliefs about developing supplementary materials for IELTS courses? | c. Teachers’ practice of developing supplementary materials d. The convergences and divergences between teachers’ beliefs and practice |
3.8. Reliability and validity
Reliability and validity are considered the two most vital attributes that researchers should take into account when using a measuring instrument. While reliability refers to the consistency of results obtained from an instrument on different occasions, the validity of an instrument is attained when the instrument measures what it is supposed to measure (Ary et al., 2010, p. 224; Johnson & Christensen, 2014, p. 239; Kornuta & Germaine, 2019, pp. 51-52). According to Cohen et al. (2018), it is impossible to eliminate all threats to these two properties entirely; rather, strategies should be implemented to alleviate them (p. 245). Depending on the types of instruments used in research, different strategic approaches are adopted.
As mentioned in the earlier section, the principal instruments for the current study included a questionnaire and an interview. Although the question items in these tools were self-generated, they were designed and triangulated based on previous studies in the field. The instruments were then reviewed by three experts and piloted with teachers sharing the same characteristics as the investigated population. These are the strategies proposed by Kornuta and Germaine (2019) for self-developed research instruments (p. 60). Beyond these steps, the issues of reliability and validity were given careful consideration for both the questionnaire and the interview.
As for the questionnaire, the statements measuring the variables were designed based on the theory of supplementary materials, which somewhat attenuated threats to construct validity. Content validity was, moreover, ensured by having experts in the area of investigation judge the question items. Another strategy for strengthening the validity of the questionnaire was ensuring the anonymity of participants (Cohen et al., 2018, p. 278), which aimed to avoid socially desirable responses. The use of a four-point Likert scale also helped, as it prevented respondents from opting for a safe middle choice. In addition to validity, reliability