The client: a Norwegian EdTech company aiming to simplify how schools and universities run digital assessments. Their mission is to help educators grade more efficiently, track progress more clearly, and make testing fairer for students with diverse learning needs. CHI Software streamlined and optimized the automated grading system, strengthened the detection of reworded content, and made question difficulty adapt to each student's performance.
The client, a technology provider in the education sector, had already launched their digital automated assessment platform. The grading tool helps schools manage exams and run assessments online, but as usage grew, teachers became overwhelmed by the manual grading workload. The plagiarism detection tools were effective only in straightforward cases. And while schools had plenty of data, they couldn't do much with it, because they lacked the tools to turn that data into actionable insight.
We joined after the platform had gone live, and our role was to improve what already existed. That meant automating the slow parts, strengthening weak spots, and adding features like intelligent plagiarism detection without disrupting the platform's ongoing daily use. Everything had to work in real classrooms and in real time.
One of the main challenges was the time-consuming process of grading open-ended responses. The client sought a solution that could expedite the process without compromising consistency or fairness in feedback.
Students may think they've outsmarted homework with AI, but savvy teachers can turn the tables using the very same technology. The existing tools weren't enough to catch more subtle forms of cheating, so the company wanted to bring in advanced language models that could recognize paraphrased content and rewritten answers, not just direct copying and pasting.
There was a need to make sense of past assessment results and spot patterns in how students were progressing. Patterns in mistakes, misconceptions, or hesitation revealed where learners struggled. By analyzing this, teachers could adjust instruction, offer targeted support, and address issues promptly before they turned into lasting gaps.
Standardized tests are only one part of the equation and don't fully capture students' real progress in their learning journey. The client's goal was to develop a personalized learning assessment system that adjusts question difficulty in real time based on each student's performance.
Automated AI Grading Software: This platform enables teachers to grade written answers more efficiently. It utilizes AI, but still allows for human review when necessary.
Intelligent Plagiarism Detection: The system flags copied answers and also catches ones that have been rewritten just enough to slip past simpler tools.
Performance Dashboards: Educators receive a quick overview of how students are performing, with visual summaries that make it easier to track progress or identify potential issues early.
Personalized Evaluation: Tests adjust themselves in real time as they are being taken, changing question difficulty based on each student's responses – this helps keep the assessment fair and focused.
LMS Compatibility: This automated grading software integrates with systems schools already use, so it seamlessly fits into daily routines without the need to switch platforms.
Instructional Feedback: The automated grading system analyzes student performance over time and gives teachers suggestions that may help shape the next lesson or highlight topics that were missed.
To integrate information collected from different platforms and merge it into a unified data warehouse, we used AWS Glue to handle extraction and processing, while Apache Spark on Databricks managed high-volume transformations with stability and speed.
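As a rough illustration of that kind of pipeline, here is a minimal AWS Glue job sketch that reads raw submissions from the Glue Data Catalog, normalizes the schema, and writes Parquet to S3 for downstream Spark jobs. The database, table, column, and bucket names are hypothetical placeholders, not the client's actual schema.

```python
# A minimal AWS Glue job sketch for merging assessment data into the
# warehouse. All names below are illustrative, not the real schema.
import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw submissions from the Glue Data Catalog (hypothetical table).
raw = glue_context.create_dynamic_frame.from_catalog(
    database="assessments_raw", table_name="submissions"
)

# Normalize field names and types before loading into the warehouse.
mapped = ApplyMapping.apply(
    frame=raw,
    mappings=[
        ("student_id", "string", "student_id", "string"),
        ("question_id", "string", "question_id", "string"),
        ("answer_text", "string", "answer_text", "string"),
        ("submitted_at", "string", "submitted_at", "timestamp"),
    ],
)

# Write the cleaned data to S3 in Parquet for downstream Spark jobs.
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-warehouse/submissions/"},
    format="parquet",
)
job.commit()
```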
Deep learning models were trained for automatic grading of open-ended responses using AWS SageMaker and pretrained models hosted on Hugging Face. The machine learning components were based on open-source models, which were fine-tuned to address the specific needs of educational assessment. For intelligent plagiarism detection, we built neural networks capable of identifying paraphrased content and contextual similarities.
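For the paraphrase-detection side, a common embedding-based approach looks like the sketch below, assuming the open-source sentence-transformers library and an off-the-shelf Hugging Face model. The model name and the 0.85 threshold are illustrative assumptions, not the tuned production values.

```python
# A minimal sketch of embedding-based paraphrase detection: answers whose
# embeddings sit suspiciously close to a reference text get flagged.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

def looks_paraphrased(answer: str, reference: str,
                      threshold: float = 0.85) -> bool:
    """Flag an answer semantically close to a known source or peer answer."""
    embeddings = model.encode([answer, reference], convert_to_tensor=True)
    similarity = util.cos_sim(embeddings[0], embeddings[1]).item()
    return similarity >= threshold

# Example: a reworded copy scores high even with no exact word overlap.
print(looks_paraphrased(
    "The conflict began because of economic tensions between the regions.",
    "Economic tensions between the two regions were what started the war.",
))
```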
Interactive dashboards were developed to give educators real-time insights, connected directly to the centralized warehouse. The data was carefully structured and optimized for performance monitoring and predictive analysis.
Adaptive assessments were introduced through machine learning models that respond to student input during testing. Personalized learning features helped to improve the accuracy of learning content recommendations over time.
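The core adaptive loop can be shown with a deliberately simplified sketch: after each response, the working difficulty estimate is nudged up or down. The 1-to-5 scale and the single-step adjustment are illustrative assumptions; the production models are more sophisticated than this.

```python
# A simplified sketch of the adaptive-difficulty idea, not the actual model:
# raise difficulty after a correct answer, lower it after a miss.
def next_difficulty(current: int, answered_correctly: bool,
                    minimum: int = 1, maximum: int = 5) -> int:
    """Return the next question difficulty, clamped to the allowed range."""
    step = 1 if answered_correctly else -1
    return max(minimum, min(maximum, current + step))

# Example: a student at difficulty 3 answers correctly, then misses twice.
level = 3
for correct in (True, False, False):
    level = next_difficulty(level, correct)
print(level)  # 2
```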
We built custom API connectors to ensure compatibility with major Learning Management Systems. Secure OAuth-based authentication was employed to safeguard data and ensure trust during system synchronization.
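A typical shape for such a connector is the OAuth 2.0 client-credentials flow sketched below. The token URL, credentials, and courses endpoint are placeholders, since real LMS endpoints differ per vendor.

```python
# A hedged sketch of OAuth-based LMS access: exchange client credentials
# for a bearer token, then call an API with it. Endpoints are hypothetical.
import requests

TOKEN_URL = "https://lms.example.com/oauth/token"

def fetch_access_token(client_id: str, client_secret: str) -> str:
    """Exchange client credentials for a short-lived bearer token."""
    response = requests.post(
        TOKEN_URL,
        data={"grant_type": "client_credentials"},
        auth=(client_id, client_secret),
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["access_token"]

def fetch_courses(token: str) -> list:
    """Call a (hypothetical) LMS endpoint with the token attached."""
    response = requests.get(
        "https://lms.example.com/api/v1/courses",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()
```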
We built the automated grading software on an architecture centered around AWS EKS for container management and AWS S3 for reliable data storage. The solution also included AWS Lambda, which handles dynamic workloads without manual scaling.
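To give a flavor of the event-driven side, here is a minimal Lambda handler sketch: when a new submission lands in S3, it is queued for grading. The bucket layout and queue URL are hypothetical placeholders.

```python
# A minimal AWS Lambda handler sketch: on S3 ObjectCreated events,
# enqueue each new submission for grading. Names are illustrative.
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.eu-north-1.amazonaws.com/123456789012/grading-queue"

def handler(event, context):
    """Triggered by S3 events; forward each new object to the grading queue."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=json.dumps({"bucket": bucket, "key": key}),
        )
    return {"statusCode": 200}
```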
We were able to automate routine educational tasks like grading, creating reports, and organizing student data using smart systems that respond to specific triggers. These workflows were coordinated using AWS Step Functions, ensuring reliability with low maintenance.
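Orchestration of that kind is typically kicked off from code along these lines; the state machine ARN below is a placeholder for illustration, not the production workflow.

```python
# A sketch of starting a grading workflow via AWS Step Functions.
import json
import boto3

sfn = boto3.client("stepfunctions")

def start_grading_workflow(submission_id: str) -> str:
    """Start one Step Functions execution for a submission batch."""
    response = sfn.start_execution(
        stateMachineArn=(
            "arn:aws:states:eu-north-1:123456789012:"
            "stateMachine:grading-pipeline"
        ),
        name=f"grading-{submission_id}",
        input=json.dumps({"submission_id": submission_id}),
    )
    return response["executionArn"]
```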
Data was protected with AES-256 encryption at rest and TLS 1.3 in transit. Real-time monitoring and logging were handled via AWS CloudWatch, giving teams full visibility into potential risks and system performance.
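Two of those safeguards can be sketched with boto3 as follows: requesting AES-256 server-side encryption on an upload, and publishing a custom CloudWatch metric that operators can alarm on. The bucket, key, and metric names are illustrative.

```python
# A sketch of two safeguards named above, using boto3. Names are examples.
import boto3

s3 = boto3.client("s3")
cloudwatch = boto3.client("cloudwatch")

# Upload a report with AES-256 server-side encryption requested explicitly.
s3.put_object(
    Bucket="example-assessment-data",
    Key="reports/class-7b.json",
    Body=b"{}",
    ServerSideEncryption="AES256",
)

# Publish a custom metric so operators can alarm on grading latency.
cloudwatch.put_metric_data(
    Namespace="GradingPlatform",
    MetricData=[{
        "MetricName": "GradingLatency",
        "Value": 1.8,
        "Unit": "Seconds",
    }],
)
```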
Teachers were spending too much time grading written answers. By adding smart automation, the team reduced grading time by 70%, making the process much faster and easier to manage.
The existing checker effectively caught obvious cases of plagiarism but missed instances of paraphrasing. The goal was to reduce missed plagiarism by approximately 30% and increase the trustworthiness of the results.
Instead of waiting for students to fail, the idea was to flag them earlier. The CHI Software team aimed to detect signs of trouble 45% sooner, using data from past evaluations.
Students have diverse learning styles and abilities, so the goal was to create tests that accommodate each individual's performance. This approach led to a 50% increase in engagement and helped students feel supported during adaptive assessments.
Since institutions were already using LMS platforms, the solution needed to integrate with existing systems. The rollout caused 0% disruption to day-to-day operations during implementation.
As more students continued to join, the platform had to stay reliable. The team targeted 100% uptime, even during busy exam periods, and successfully maintained uninterrupted access throughout peak usage times.