Back in July, I was intrigued by the use of a chatbot to handle university admissions at Leeds Beckett University, wondering what had happened to FAQ sheets and web forms.
So, last week I took a look at the UFI-supported learner-facing Ada chatbot, developed by IBM and Bolton College. Operational since April 2017, Ada is a platform-neutral, consistent, reliable support tool for everyday enquiries. It is also emerging as a tool for answering sophisticated queries regarding qualifications, and more.
Chatbots play into the notion of ‘Voicefirst’ persona services, commonly used in digital assistants such as Amazon Alexa, and applications such as Google Maps, which can “...convincingly simulate how a human would behave as a conversational partner” by using natural language processing. They also align with the ‘Calm Technology’ concept: technology shouldn’t be the centre of attention or overwhelm us, but should communicate with us from our peripheral vision.
Powered by IBM’s Watson suite of services, Ada uses Watson Assistant to integrate with the college’s database systems and other solutions (as well as hard-coded answers), while also using the IBM Watson Speech-to-Text and Text-to-Speech engines.
Wondering out loud how long the tech would take to deploy, I was given an opportunity to set up a chatbot by the helpful IBM person. I was pleasantly surprised at how easy it was to get up-and-running: simple support calls and Q&A could be set up within minutes, not days – clearly a ton of benefit for a learner-facing organisation. With social media already forming part of the relationship that awarding bodies and exam owners build with vendors and exam candidates, I could see why chatbots need to be part of the mix.
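To give a flavour of how simple that first Q&A layer is, here is a minimal sketch of keyword-matched question answering in plain Python. This is not the Watson tooling itself, and the questions, keywords and canned answers are invented for illustration:

```python
import re

# Canned Q&A: each entry maps a set of keywords to a fixed answer.
# Keywords and answers here are invented examples, not real Ada content.
FAQ = {
    ("exam", "date"): "Exam dates are published on the learner portal.",
    ("results", "released"): "Results are released six weeks after the exam sitting.",
}
FALLBACK = "Sorry, I don't know that one yet - a colleague will follow up."

def answer(query: str) -> str:
    """Return the first canned answer whose keywords all appear in the query."""
    words = set(re.findall(r"[a-z']+", query.lower()))
    for keywords, reply in FAQ.items():
        if all(k in words for k in keywords):
            return reply
    return FALLBACK

print(answer("When is my exam date?"))
```

A real deployment would swap the keyword match for intent classification with natural language processing, but the shape of the set-up task – pairing learner questions with answers – is the same, which is why it can be done in minutes.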
What problems are we solving? Ten years ago, question/item banking and test creation systems highlighted coverage gaps, duplication and item degradation in exam papers and qualifications. Today, chatbots are highlighting the inconsistencies and gaps in an exam sponsor or awarding body’s knowledge base that aids and empowers learners – work hitherto covered by first-line or back-office staff.
Making sense of the data held by exam owners/ sponsors, technology providers and support services, and then making it work to the benefit of learners is key: so how can this help in the world of digital exams, qualification and eAssessment?
Test Building
I’m often struck by the demographic of item and exam writers. They are typically senior in experience and age – so are chatbots appropriate in test building, and for this particular demographic?
Reading Laurie Orlov’s paper ‘The Future of Voice First Technology and Older Adults’, there is a strong argument to suggest that those exam builders with age-related visual and motor difficulties could benefit enormously from the technology – liberating huge amounts of knowledge and experience.
Also, rather than get to grips with an FAQ list or grapple with a user manual, item writers or examiners can be on-boarded more quickly by a chatbot that shows them how to use an item creation tool, or how to raise queries effectively and efficiently.
For example, by querying jargon or overly complex words for the exam’s level, the chatbot could help with ensuring the exam item is culturally sensitive and at the appropriate language levels. Equally, a chatbot can also help exam writers identify duplicate items or other weaknesses in an item bank – very handy if an exam owner is paying freelancers for written exam items.
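The kind of language check described above could be sketched as follows. The jargon list and the average-sentence-length threshold are assumptions for illustration, not a published standard; a real implementation would use curated word lists per qualification level:

```python
import re

# Assumed jargon list and threshold - illustrative only.
JARGON = {"utilise", "heretofore", "notwithstanding"}
MAX_AVG_SENTENCE_WORDS = 12

def review_item(text: str) -> list[str]:
    """Return plain-language warnings for a draft exam item."""
    warnings = []
    words = re.findall(r"[a-z']+", text.lower())
    flagged = sorted(JARGON & set(words))
    if flagged:
        warnings.append(f"Possible jargon: {', '.join(flagged)}")
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if sentences and len(words) / len(sentences) > MAX_AVG_SENTENCE_WORDS:
        warnings.append("Average sentence length may be too high for this level")
    return warnings

print(review_item("Utilise the apparatus heretofore described."))
```

Duplicate detection against an item bank would follow the same pattern, comparing a draft item's wording against existing items and flagging near-matches before a freelancer is paid for them.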
While speech requests may be anathema within an open plan office, Orlov suggests that home workers (or those working with just text-on-screen) will not be particularly inhibited. It’s certainly viable to see a chatbot being part of a cohort team, acting as a verifier of item quality and appropriate use of language within an item.
Test Delivery
Test readiness plays a big part in successful delivery, reducing the hassle and uncomfortable experience of preparing for an exam date. For example, if data points have been fulfilled, then a chatbot can facilitate a test booking. Is there a seat at my local test centre? Can it be booked, paid and confirmed for me? Seat yield management data, e-commerce integration, and GPS location can all be linked for a hassle-free experience.
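The back-end steps of that booking conversation could look something like the sketch below. The centres, seat counts and fees are invented, and the in-memory dictionary stands in for the seat yield, e-commerce and location services a real system would call:

```python
# Invented centre data - a real system would query seat yield management.
CENTRES = {
    "Leeds": {"seats_free": 2, "fee": 35.00},
    "Bolton": {"seats_free": 0, "fee": 35.00},
}

def book_seat(centre: str, candidate: str) -> str:
    """Check availability, reserve a seat, and confirm (payment elided)."""
    info = CENTRES.get(centre)
    if info is None:
        return f"No test centre found for {centre}."
    if info["seats_free"] == 0:
        return f"{centre} is fully booked - shall I check nearby centres?"
    info["seats_free"] -= 1
    # Payment and confirmation would be delegated to an e-commerce service.
    return f"Booked {candidate} at {centre}; fee GBP {info['fee']:.2f} taken."

print(book_seat("Leeds", "A. Candidate"))
print(book_seat("Bolton", "A. Candidate"))
```

The conversational layer simply narrates each step of this flow back to the candidate, which is what turns a multi-form booking journey into a single chat exchange.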
Remote proctoring/ invigilation has benefited learners by providing more access to exam sessions, but has also encouraged debate on what constitutes exam security and what the exam ‘service encounter’ should be.
Checking candidate identity/credentials and asking them obligatory security questions could feasibly be performed by an integrated tech combination of chatbot, webcam, signature pad, smart speaker and smartphone. Test centre managers could benefit, freed to deal with outlier queries such as special accommodations.
However, opportunities for malpractice remain: the decreasing size of chatbot-enabled smart speakers may make them undetectable to an inexperienced invigilator. How realistic is it that, in the near-future, a candidate can murmur surreptitiously to a tiny smart speaker within an exam room?
Test Analysis
Making sense of how an exam is performing can be lost within the complexity of dashboards and business analytics tools. For example, effective indexing of items and artefacts can give technology providers the ability to formulate structured replies to subject-specific enquiries from qualification managers. Also, learning managers can provide structured replies to their exam candidates. Natural language processing means that questions such as ‘How many of our directly managed centres have greater than 80% pass rate?’ can be handled quickly by the chatbot.
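Turning that quoted question into a structured query might look like this. The centre data is invented, and the regex stands in for the intent/entity extraction a production NLP model would perform:

```python
import re

# Invented centre records for illustration.
CENTRES = [
    {"name": "Centre A", "directly_managed": True,  "pass_rate": 0.85},
    {"name": "Centre B", "directly_managed": True,  "pass_rate": 0.72},
    {"name": "Centre C", "directly_managed": False, "pass_rate": 0.91},
]

def answer_pass_rate_query(question: str) -> str:
    """Extract a threshold from the question and count qualifying centres."""
    m = re.search(r"greater than (\d+)% pass rate", question)
    if not m:
        return "Sorry, I can't parse that question yet."
    threshold = int(m.group(1)) / 100
    hits = [c for c in CENTRES
            if c["directly_managed"] and c["pass_rate"] > threshold]
    return f"{len(hits)} directly managed centre(s) exceed {m.group(1)}% pass rate."

print(answer_pass_rate_query(
    "How many of our directly managed centres have greater than 80% pass rate?"))
```

The hard work is in the indexing: once items, centres and results are consistently structured, mapping a parsed question onto a filter-and-count query is straightforward.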
What’s happening now?
Listening to a number of awarding organisations and training providers, there is an emerging trend to ‘get our data back’ from tech providers, post-GDPR. One facet of this is to keep the technology provider as a SaaS service, but to move the data either back on-premises or onto a private, managed environment.
Coupled with the emerging paradox between front-end developers striving for accessible, clean UX and UI, while back-end developers build increasingly sophisticated functionality driven by customer need, chatbots can provide an effective third way in certain areas. Again, as with investing in eAssessment systems, time spent up-front pays dividends when associates and staff refine the language the chatbot understands and how it conducts conversations.
Talking with the IBM folks, Watson Discovery works out at USD$0.0025 per enquiry. Mapped against the cost of providing first-line support and administrative help, the opportunity to up-skill the chatbot’s human counterparts to better serve learners is very encouraging.
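The arithmetic behind that comparison is worth making explicit. The per-enquiry figure comes from the text above; the enquiry volume and the fully loaded staff cost per handled enquiry are assumptions for illustration only:

```python
# Per-enquiry Watson Discovery figure quoted in the text.
COST_PER_ENQUIRY_USD = 0.0025
# Assumed values - illustrative only.
MONTHLY_ENQUIRIES = 20_000
STAFF_COST_PER_ENQUIRY_USD = 2.50

chatbot_cost = COST_PER_ENQUIRY_USD * MONTHLY_ENQUIRIES
staff_cost = STAFF_COST_PER_ENQUIRY_USD * MONTHLY_ENQUIRIES

print(f"Chatbot: ${chatbot_cost:,.2f}/month vs staff: ${staff_cost:,.2f}/month")
```

Even with generous error bars on the assumed staff cost, the gap is orders of magnitude, which is what funds the up-skilling of the chatbot’s human counterparts.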
What’s next?
The onus is on exam owners to build trust with their learners and stakeholders, so they can access and use their micro-nuggets of information to better serve learners and empower their staff and associates. Chatbots and the voicefirst concept are now serving the emerging needs of key learner groups, educators and exam support staff.
Put simply: If your organisation has already invested in assessment or qualifications management software, your assessment technology supplier needs to have chatbot functionality on their roadmap within the next three years.