Editor’s Note: As regular readers know, I have been teaching IB Theory of Knowledge classes for years and blogging about it (see “Best” Lists Of The Week: Theory Of Knowledge Teaching Resources). I have had concerns about the marking of our students’ Oral Presentations and essays, and have heard from many other TOK teachers around the world who share similar concerns. Here, my talented colleague John Perryman writes specifically about those issues. Please share your thoughts in the comments section.

John Perryman has been teaching inner-city honors classes for thirty years at Luther Burbank High School in South Sacramento, and IB for half of that time. He is deeply committed to the school’s 100% low-income, ethnically diverse population and supports granting wider access to honors programs like IB than is traditional in the USA.


My students and I (at our urban, low-income high school) greatly enjoy the TOK course. For most of the many years I have taught IB, the scores I (and my fellow instructors at this site) submitted to IB formed a bell curve around the middle markband, and were generally affirmed as accurate by the IB organization’s processes.

For the past few years, however, the IB organization has been moderating my school’s TOK scores downward (for both of our instructors). I have been forced into a somewhat bizarre yearly cycle of “guess and check”: I attempt to guess what the IB TOK examiners want, guide the students toward that goal, submit papers, and then see if I guessed right. Our school pays extra for examiner comments on my students’ papers in order to make more educated guesses. I have found paying for other feedback to be ineffective because that feedback was disconnected from real examples of student work. As this process has continued, I have tried to figure out whether the downward trend reflects a decline in my students’ writing that I have been unable to detect, or whether it is due to changes in practice by the IB examiners.

One hypothesis has to do with markbands. I have been using the markbands published on the official rubric in the 2015 teachers’ guide. The IB TOK examiners are no longer using those markbands; they are using markbands one point lower, as indicated on the subject component results that are sent to diploma program coordinators after the spring examinations.

I wrote MYIB about this issue, and they claimed that the change had been published to TOK teachers. I don’t remember receiving any revision of the rubric or the 2015 teachers’ guide, but maybe I missed it. I have found nothing about this on the MYIB website under “examiners’ comments.” Thus, if I scored an essay as a “5” on the TKPPF Form or in IBIS, meaning that it was a lower-middle-markband essay, and the IB examiners agreed that it was a lower-middle-markband essay, they would lower the score to a “4”. And so I look like I grade too liberally to them. Likewise, this change may be affecting the calculation of students’ final grades.

Another set of issues has to do with the minimum word count. The TOK guide used to have clear language specifying that students needed a minimum of 1200 words. For some unknown reason, the TOK examiners removed the minimum word count language.

The 1200-word minimum appears to still be in force, however, albeit in a de facto manner. I had one student submit a well-written paper of 1165 words. Both I and the initial examiner scored it as a middle-markband paper. But a third examiner, presumably a superior, intervened and lowered the score to a “3” with the comment that papers below 1200 words were “highly unlikely to score in the mid or upper levels.” I submitted this paper for a rescore, but have not yet received the results.

Another issue has to do with personalization. The old rubric explicitly based 25% of the essay score on personalization and self-awareness. The newer 2015 rubric is vague on the issue, asking only for “real life examples.” The 2015 teachers’ guide has contradictory language, in one place calling for personalization and in another warning against the use of unsupported “anecdotes.” The TOK essay prompt paper calls for candidates to “refer to other parts of the IB programme and to your experiences as a knower.” The guidance language, to me, is therefore unclear.

In the May 2018 set of essays, several of my students, writing to Prompt #2 (“…with knowledge doubt increases”), discussed their personal experiential journeys: moving from great confidence in their families’ tribal shamanism to increasing doubt as they interacted with the larger community and gained knowledge of other religions. I thought these were appropriate real life examples. The examiner, however, labeled each of these “anecdote” and marked those essays down. A few weeks ago, I wrote MYIB asking for clarification on whether students may use their personal experiences as real life examples or whether I should guide students away from this practice; I haven’t gotten a response yet. For now, I am telling students to avoid all personal examples.

Another issue has to do with the dual nature of the 2015 rubric. Essentially, there are two rubrics on the same paper: the upper half of the paper is criterion-based, while the lower rubric is holistic. I have always used the upper, criterion-based rubric because I can use it to tell students how to improve their writing. I fear, however, that the examiners use both rubrics and then choose the lower score, or alter their interpretation of the criteria to match their holistic impression. Most of my students are struggling writers; they struggle, but can succeed at writing papers that meet the criteria for middle and (rarely) upper markband papers. But most of these students are writing in their second language, and are unable to write with the nuance and sophistication needed to appear lucid or insightful. Their command of English is basic, and so the dual nature of the 2015 rubric limits their scores.

On the rare occasions that I have had a native English speaker whose command of English was nuanced and accomplished, I found that the examiners’ office upgraded their scores above the criterion-based score I issued.

Lastly, of course, is the IB examiners’ office’s gross misuse of the Presentation Planning Form as a summative report form and the sole source from which they moderate. Their decision to largely ignore the scores and comments from instructors (who actually saw the presentations), and their refusal to watch the actual presentations, has been widely ridiculed in the TOK section of the online curriculum center and is actively damaging the reputation of the entire IB organization.

Unfortunately, this practice by the IB examiners’ office potentially has a doubly negative effect on student TOK scores. Not only is the one third of the total TOK score derived from the oral lowered, but these lowered oral scores might be triggering intervention to artificially lower the scores of the essays so as to match the artificially lowered oral scores. The 2015 teachers’ guide says on page 58 that IB checks whether “an anomaly has been identified, for example, in the correlation between the marks for the presentations and the essays of students.” Although the teachers’ guide claims that this check is to help identify schools for closer scrutiny of their orals, this style of “performance management” is rife with peril, as it can easily change the behavior of the examiners more than it changes the behavior of the candidates. For most of my students, who test in English because the IB organization does not recognize their primary language, oral scores and essay scores are regularly, and appropriately, wildly different: either oral competence precedes written sophistication when learning a new language, or a student’s innate shyness creates radically different results across the two mediums of communication.