At Qpercom, we responded quickly to the COVID-19 pandemic by developing video capabilities for our software in order to serve higher education institution clients who were forced to move clinical examinations such as OSCEs (Objective Structured Clinical Examinations) and MMIs (Multiple Mini Interviews) online.
Platforms like Observe allow the examiner to fully facilitate the examination remotely, confirm the identity of the student or candidate, interact with the student and assess their performance.
Assessment validity has been demonstrated, but what about safeguarding academic integrity?
Many universities use human proctoring, or invigilation, to prevent suspicious behaviour or activity during exams, and this practice too has moved online, triggered by the pandemic.
What do Online Proctoring platforms offer to universities?
In an article by Arnò et al., published in the International Journal of Distance Education Technologies (Volume 19, Issue 2, April-June 2021) and entitled “State-of-the-Art of Commercial Proctoring Systems and Their Use in Academic Online Exams”, a range of online proctoring platforms is investigated. The authors found that there are three types of proctoring systems currently used by universities:
1. Live Proctoring Programs – where a human third party in a remote location acts as the proctor, with the right to assess the session for any form of unfair action and to interrupt the exam if they suspect malpractice.
2. Recorded Proctoring Programs – where the student’s behaviours are recorded and then reviewed after the exam to check for any possible flags that “signal doubt in an examinee’s activities”.
3. Automated Proctoring Programs – where an automated system, built into the platform, uses AI (artificial intelligence) to highlight anomalous or illicit activities (a simplified sketch of this kind of flagging logic follows below).
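To make the third category concrete, here is a minimal, purely illustrative sketch of how an automated proctor might raise flags from per-frame analysis results. The FrameAnalysis fields, the “exactly one face” rule and the off-screen threshold are our own assumptions for illustration; they are not taken from any of the surveyed platforms.

```typescript
// Hypothetical per-frame analysis results; a real system would derive
// these from computer-vision models (face detection, gaze estimation).
interface FrameAnalysis {
  timestamp: number;     // seconds from exam start
  facesDetected: number; // faces visible in the webcam frame
  gazeOnScreen: boolean; // whether the candidate appears to face the screen
}

interface AnomalyFlag {
  timestamp: number;
  reason: string;
}

// Flag events an automated proctor might treat as suspicious: more or
// fewer than one face, or gaze held off screen longer than a threshold.
function flagAnomalies(frames: FrameAnalysis[], maxOffScreenSecs = 5): AnomalyFlag[] {
  const flags: AnomalyFlag[] = [];
  let offScreenSince: number | null = null;
  for (const frame of frames) {
    if (frame.facesDetected !== 1) {
      flags.push({ timestamp: frame.timestamp, reason: "expected exactly one face" });
    }
    if (frame.gazeOnScreen) {
      offScreenSince = null; // candidate is looking back at the screen
    } else if (offScreenSince === null) {
      offScreenSince = frame.timestamp; // start of an off-screen interval
    } else if (frame.timestamp - offScreenSince >= maxOffScreenSecs) {
      flags.push({ timestamp: frame.timestamp, reason: "gaze off screen too long" });
      offScreenSince = null; // one flag per sustained glance away
    }
  }
  return flags;
}
```

Even this toy version hints at a problem we return to later in this article: an OSCE candidate who legitimately turns to address an examiner or actor would fail the “gaze on screen” rule and be flagged.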
The main challenge with proctoring moving online is guaranteeing that proctoring systems offer the same level of quality as they would when operating during a face-to-face exam.
The state-of-the-art commercial proctoring systems investigated by Arnò et al. offer a range of different features and technologies. The authors used internet search engines to find each platform and scored them against criteria such as monitoring functions (recording the room, real-time audio analysis, the student’s head and eye movements), LMS integration and device support. One of the most frequent functions offered by online proctoring platforms was document identification (ID confirmation), a feature that comes as standard in Qpercom’s clinical assessment platform Observe.
They found that other popular features were recording the webcam feed and a recorded scan of the physical room, while audio analysis and tracking of the student’s head and eye movements were less common.
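As a rough illustration of the kind of feature comparison the authors performed, the sketch below tallies features across a set of platforms. The platform names and feature assignments are invented for illustration; they are not the paper’s data.

```typescript
// Feature labels loosely based on the survey's criteria.
type Feature =
  | "idCheck"
  | "webcamRecording"
  | "roomScan"
  | "audioAnalysis"
  | "gazeTracking"
  | "lmsIntegration";

// Invented example platforms; real entries would come from the survey.
const platforms: Record<string, Set<Feature>> = {
  platformA: new Set<Feature>(["idCheck", "webcamRecording", "lmsIntegration"]),
  platformB: new Set<Feature>(["idCheck", "webcamRecording", "roomScan", "gazeTracking"]),
  platformC: new Set<Feature>(["idCheck", "audioAnalysis"]),
};

// Count how many platforms offer each feature, mirroring the paper's
// observation that some features (e.g. ID checks) are far more common.
const allFeatures: Feature[] = [
  "idCheck", "webcamRecording", "roomScan",
  "audioAnalysis", "gazeTracking", "lmsIntegration",
];
const total = Object.keys(platforms).length;
for (const feature of allFeatures) {
  const count = Object.values(platforms).filter((set) => set.has(feature)).length;
  console.log(`${feature}: offered by ${count}/${total} platforms`);
}
```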
Platforms like the Italy-based 110 Cum Laude application claim to work with existing conferencing software like Zoom, Teams and Meet, while others like the Honorlock proctoring service are Google Chrome plugins that disable certain functions on the student’s side (copy & paste, multiple monitors) and record the student’s webcam for authentication.
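For a flavour of how the browser-plugin approach works, the snippet below uses standard DOM APIs to block clipboard actions on a page, the basic mechanism a lockdown extension’s content script might rely on. This is a simplified sketch under our own assumptions, not Honorlock’s actual implementation.

```typescript
// Simplified content-script sketch: block clipboard actions and the
// right-click menu on the exam page using standard DOM events.
// A real lockdown extension does far more (tab focus monitoring,
// detecting extra monitors); this shows only the blocking mechanism.
const blockedEvents = ["copy", "cut", "paste", "contextmenu"];
for (const eventName of blockedEvents) {
  document.addEventListener(eventName, (event) => {
    event.preventDefault(); // cancel the action before the page sees it
    console.warn(`${eventName} is disabled during the exam session`);
  });
}
```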
Overall, there was high variability across the characteristics of the different proctoring programs.
How do Online Proctoring Platforms perform during a remote exam?
Using an online proctoring platform to guard against assessment malpractice means introducing a second or, in some cases, a third software programme or platform into the remote exam process. The exam may be facilitated by an assessment platform like Qpercom Observe or by a video conferencing application like Zoom, and the online proctoring programme adds an extra element.
In two case studies, Arnò et al. tested the stability and viability of using an online proctoring platform in a live exam situation.
In the first case study, an automated commercial proctoring platform was used to monitor student behaviour over the course of two remote exams. All of the students were recorded, and while anomalies were detected live in only a few cases (5.6% in one exam and 3.8% in the other), a visual analysis of all the recorded video clips conducted after the exams found many more instances of anomalous activity. The retrospective video analysis required nearly 24 hours.
The authors are clear that, in this case, the anomalies detected during the exam did not paint the full picture, and a lengthy post-exam manual review of the video footage was required to highlight all of the anomalies.
In the second case study, a complicated manual proctoring process was observed. Students were asked to complete the exam using a video conferencing platform while also using a smartphone or tablet to film their monitor. This was overseen by two human proctors who joined the video conferencing platform for the exam to verify that the student had successfully placed all devices in the correct position. The proctors then stayed on the call for the duration of the exam to monitor student behaviour.
According to the authors of the paper, monitoring student behaviour during the exam presented a number of audio issues. The microphone on the student’s computer was active in the video conferencing platform, while the microphone on the second device filming the student’s position was also live. This resulted in audio interference in the form of echoes, and outside sounds, such as passing trucks, were amplified.
Logistical and network issues made this way of proctoring extremely difficult, and simultaneously viewing the student’s behaviour and the view of their computer monitor was a challenge.
In conclusion, the paper’s authors felt that while manual proctoring required more administrative time (with the pre-exam upload of student IDs helping the process), it presented fewer technical issues than automated proctoring. However, the process was slowed down by the chosen video conferencing platform’s lack of functionality for switching from one student to another, a feature that comes as standard in Qpercom Observe.
Students who engaged in automated proctoring reported system crashes, general slowdowns and computer issues related to network connection speed, which caused disconnections and delays. The automated process also required a significant amount of time (nearly 24 hours) after the exams to review the video clips for suspicious activities.
More research is needed on the use of online proctoring systems in conjunction with other online platforms that facilitate exams.
What does this mean for Qpercom’s clients?
We don’t believe outcomes can be improved by adding an online proctoring platform to the remote examination process. Consider an OSCE or MMI scenario in which, every 5-10 minutes, students or applicants interact (orally and visually) with a different examiner or actor. Given the way automated proctoring works, there is a risk that this would be ‘picked up’ as suspicious behaviour from the proctoring/invigilation perspective. In fact, no scientific research or reports are available on the reliability of automated proctoring of clinical skills assessments.
Qpercom Observe offers important elements of the proctoring process as standard in its video-integrated assessment platform, such as a ‘Watch’ tab for monitoring clinical examinations and the customary ID check at the start of or during exams. However, using an off-the-shelf platform like Zoom, Meet or Teams with an online proctoring package is discouraged by Arnò et al. (2021), owing to the additional time needed to complete pre-exam activities, like setting up multiple devices, and the intricacies of monitoring the entire room that a student is located in. Furthermore, with automated proctoring all recordings need to be reassessed post-exam, as the system might flag the movement between exam stages, for example from ‘reading time’ to ‘station’, as suspicious behaviour during the live exam.
In the award-winning (2019) book Artificial Unintelligence, Meredith Broussard argues that our collective enthusiasm for applying computer technology to every aspect of life has resulted in a tremendous number of poorly designed systems. We are so eager to do everything digitally (hiring, driving, paying bills, even choosing romantic partners or, in our case, proctoring/invigilating exams) that we have stopped demanding that this AI technology actually works.
More research is needed on what Arnò et al. call “a crucial challenge for improving the quality of the current automated proctoring systems”. We at Qpercom need to be aware of the limitations of what computers and software programs can do.