TBL: Assessing RAP results

Chelsea Bullock
4 min read · Mar 10, 2021

When using the Team-Based Learning strategy, it is important to assess whether the students are ready to solve real-world problems in the Application Exercises. To ascertain the students’ preparedness, you need to be able to read and interpret the readiness assurance process (RAP) results.


To help you dissect the RAP results effectively, we have created a new dashboard that lets teachers analyse the iRAT and tRAT results in the context of the lesson, providing a set of charts and tables with quick, real-time insights into the students’ and teams’ performance.

TBL Monitor — Teams RAP Report

As part of the TBL Monitor (a special set of reports on TBL learning designs), the new RAP dashboard displays the Students & Teams chart.

The Students & Teams chart is a graphical representation of the correct-answer scores for both the iRAT (averaged across each team’s members) and the tRAT (the team’s result).

By looking at the chart, you can figure out which team’s members might be better prepared (by looking at the average iRAT results, the yellow bar). As the tRAT bar (in aqua) shows the number of first-attempt correct answers for each team, you will quickly be able to discern the (expected) increment attributable to team effort.

Perhaps more importantly, you can spot anomalies. For instance, the last team on the left of the chart, “Wolf Pack”, shows that the students’ average individual iRAT score is higher than the tRAT score, an uncommon situation that begs further analysis.
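
As a rough illustration of what the chart encodes (a sketch with made-up team names and scores, not TBL Monitor’s actual implementation), the anomaly check boils down to comparing each team’s tRAT score against its members’ average iRAT score:

```python
# Hypothetical scores (0-100) behind the chart's two bars per team:
# the average of members' iRAT scores (yellow) and the team's tRAT score (aqua).
irat_scores = {
    "Alley Cats": [70, 55, 55, 60],
    "Wolf Pack": [80, 62, 50, 50],
}
trat_scores = {"Alley Cats": 85, "Wolf Pack": 55}

for team, scores in irat_scores.items():
    irat_avg = sum(scores) / len(scores)
    trat = trat_scores[team]
    # Normally the tRAT bar sits above the iRAT average; the reverse is an anomaly.
    flag = "ANOMALY: tRAT below iRAT average" if trat < irat_avg else "ok"
    print(f"{team}: iRAT avg {irat_avg:.1f}, tRAT {trat} -> {flag}")
```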

Summary RAP table

The summary table shows you the same data as the chart but also includes the percentage increment between the iRAT average and the tRAT score.

Additionally, it highlights the highest and lowest values in each column, pointing you to areas where you might want further detail.
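
The increment can be read as the relative gain of the team score over the members’ average. Here is a minimal sketch of that calculation, reusing the made-up figures from above; the exact formula TBL Monitor uses is an assumption on my part:

```python
def increment_pct(irat_avg: float, trat: float) -> float:
    """Relative gain (%) of the team's tRAT score over the members' iRAT average."""
    return (trat - irat_avg) / irat_avg * 100

rows = {"Alley Cats": (60.0, 85), "Wolf Pack": (60.5, 55)}
increments = {team: increment_pct(i, t) for team, (i, t) in rows.items()}
print(increments)  # Alley Cats ~ +41.7%, Wolf Pack ~ -9.1%

# Highlighting mirrors the table: flag the extremes of the column.
best = max(increments, key=increments.get)
worst = min(increments, key=increments.get)
print(f"highest increment: {best}, lowest: {worst}")
```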

RAP Summary

For instance, we might want to look at the “Alley Cats” team to further explore their highest score. You can see a specific team’s results by clicking on the team’s name in the Summary table above.

Team report

For instance, in the “Alley Cats” team, student Ota Mayu had the highest iRAT score, while two other team members shared the lowest mark. While Ota Mayu’s score didn’t increase on the tRAT, the other team members saw their scores rise by 28% on average.

The other case that requires further exploration is team “Wolf Pack”, whose tRAT score was lower than their average iRAT score.

What is strikingly surprising in the Wolf Pack case is that student Pang Dawei scored 80% on the iRAT, but the tRAT result was almost 40% lower. Moreover, Olaf shows a decrease of 16% as well, whereas the remaining students maintained the same score.

There could be many reasons for this.

To analyse this case further, you can click on the numbers in the iRAT correct answers column for each student to see which questions they individually failed, and compare these with the tRAT answers (by clicking on the number in the iRAT correct answers header).

If student Pang had correct answers on the iRAT but other options were then chosen on the tRAT, it might mean that Pang failed to convey to the team why he chose the correct answer on the iRAT in the first place.
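
Here is a sketch of that comparison, assuming the per-question answers can be read off the report as question-to-option mappings (the data shapes and answers below are illustrative, not TBL Monitor’s actual export format):

```python
# Hypothetical per-question answers: question id -> chosen option.
pang_irat = {"Q1": "B", "Q2": "C", "Q3": "A", "Q4": "D", "Q5": "B"}
team_trat = {"Q1": "B", "Q2": "A", "Q3": "A", "Q4": "C", "Q5": "B"}
answer_key = {"Q1": "B", "Q2": "C", "Q3": "A", "Q4": "D", "Q5": "B"}

for q, correct in answer_key.items():
    # The interesting case: the student was right individually,
    # but the team settled on a different (wrong) option.
    if pang_irat[q] == correct and team_trat[q] != correct:
        print(f"{q}: Pang chose {correct} (correct), team chose {team_trat[q]}")
```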

This analysis can easily be done in the iRAT report in TBL Monitor.

Additionally, to discover knowledge gaps and misunderstandings before proceeding to the Application Exercises, take a look at the tRAT reports, which give you a question-by-question breakdown across all teams.
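
Conceptually, that breakdown is the share of teams answering each question correctly on the first attempt, along the lines of this sketch (the team answer data is again hypothetical):

```python
# Hypothetical first-attempt tRAT answers, one mapping per team.
trat_answers = {
    "Alley Cats": {"Q1": "B", "Q2": "C", "Q3": "A"},
    "Wolf Pack":  {"Q1": "B", "Q2": "A", "Q3": "A"},
}
answer_key = {"Q1": "B", "Q2": "C", "Q3": "A"}

for q, correct in answer_key.items():
    right = [t for t, ans in trat_answers.items() if ans[q] == correct]
    pct = 100 * len(right) / len(trat_answers)
    # Questions most teams missed are candidates for the mini-lecture.
    print(f"{q}: {pct:.0f}% of teams correct on first attempt ({', '.join(right)})")
```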

The Burning Questions report will also highlight misunderstandings and misconceptions, which should help you facilitate discussions and prepare the mini-lecture.


Chelsea Bullock

I’m a Communication Manager and Outreach Officer at LAMS (Learning Designer App).