Posts on psychometrics: The Science of Assessment

Before the introduction of online exams in the education sector, the mere mention of the word ‘exams’ was met with anxiety. Exams were confined to four walls, and you could cut the tension in the exam center with a knife. The furious scribbling of pens, the sharp glances from hawk-eyed invigilators, and the constant ticking of the wall clock made for an experience few will forget.

But then came the internet, and with it a better way to assess students. Online exams, though not yet popular, provided a better way to develop and deliver tests. Using psychometric methods such as Computerized Adaptive Testing and Item Response Theory, assessments became more reliable and secure. Delivery mechanisms such as remote proctoring gave students the ability to take their exams anywhere in the world.

However, despite these numerous benefits, online exams remained in the shadows until the pandemic hit. Many educational institutions and businesses then embraced online exams and made them the core of their systems. Fast-forward a year, with the deployment of vaccines underway, and many education institutions are unsure which examination model to stick with. Should you continue with the online exam model you used while everyone was stuck at home? Should you adopt a hybrid examination model, or go back to the traditional pen-and-paper method?

This blog post will provide you with an evaluation of whether offline exams are still worth it in 2021. 

Offline Exams: The Good, the Bad, and the Ugly

The Good

Offline exams have been a stepping stone towards the development of modern assessment models that are more effective. We can’t ignore the fact that there are several advantages of traditional exams. 

Some advantages of offline exams include students having familiarity with the system, development of a social connection between learners, exemption from technical glitches, and affordability. Some schools don’t have the resources and pen-and-paper assessments are the only option available. 

The Bad and The Ugly

However, the pen-and-paper method of assessment has struggled to ensure that exams achieve their core objectives. The traditional method of assessment is riddled with uncertainties and inaccuracies.

 How do you develop and decide the main learning objectives? How do you measure performance and know what to do to improve learning outcomes? And how do you evaluate student strengths and weaknesses? These are just a few questions that the traditional assessment method can’t answer.

 Below is a list of challenges pen-and-paper methods face from test development to evaluation:

1. Needs a lot of resources

From test development to evaluation, pen and paper methods require a lot of resources. Resources can range from high human resource fees to materials needed to develop and deliver the exams to students. 

2. Lack of seamless collaboration and scalability

The ability to cater to a bigger audience is important for productivity and saving resources. However, the pen-and-paper method offers no room for scalability. Only a fixed number of students can take an exam in a given sitting. This is not only expensive but also wastes valuable time and increases the chances of paper leakage.

3. Prone to cheating

Most people think that offline exams are cheat-proof, but that is not the case. Most offline exams count on invigilators and supervisors to ensure cheating does not occur, yet many pen-and-paper assessments are still open to leakages. A high candidate-to-invigilator ratio is another factor that contributes to cheating in offline exams.

4. Poor student engagement

We live in a world of instant gratification, and assessments are no exception. Unlike online exams, which have options to keep students engaged, offline exams are open to constant distraction from external factors.

Offline exams also have few options when it comes to question types. 

5. Flawed evaluation system

“To err is human.”

But when it comes to assessments, accuracy and consistency are essential. Traditional evaluation methods are slow and labor-intensive; instructors take a long time to evaluate tests, which defeats the entire purpose of assessment.

6. Poor result analysis

Pen-and-paper exams depend on instructors to analyze the results and come up with insights. This requires a lot of human resources and expensive software. It is also difficult to find out whether your learning strategy is working or needs adjustment.

A glimpse into online exams

Also referred to as digital exams or e-exams, online exams are delivered over the internet. The best online examination platforms include modules to facilitate the entire examination process, from development to delivery. Online exams provide instructors with the ability to diversify question types and monitor the assessment process. Using learning analytics, they are also able to modify learning methods to increase the quality of output. 

Online exams work like your typical pen-and-paper exams but with more accuracy, scalability, and reliability. Grading is done automatically once the assessment is finished, depending on the question types; for essays, instructors can use online essay scoring. These are just a few of the ways e-examinations have improved the assessment process.

Here are some pros and cons of online exams to help you decide whether they are right for you.

The pros of online exams

1. Scalability

Unlike traditional testing methods which have a fixed number of people who can take an exam in a fixed time, online exams can cater to bigger audiences. This saves education institutions a lot of resources that would be invested in developing and managing examination centers. 

2. Automated report generation and visualization

This is the greatest advantage online exams have over offline exams. The automated report generation and visualization functions integrated into online assessment platforms enable instructors to accurately gauge learning outcomes. This gives them actionable insight to improve the learning process. 

3. Accessibility

Online exams can be taken from anywhere in the world. All one needs is a computer and an internet connection. This has given students access to knowledge from global learning institutions.

4. Support for diversified question types

Unlike traditional exams, which are limited to a certain number of question types, online exams offer many options: multiple-choice questions, video assessments, coding simulators, and many other question types are supported. With this kind of freedom, instructors can decide which question types fit certain topics.

5. In-built psychometrics

Psychometrics is an important part of an assessment as it ensures the development of high-quality tests. The implementation of psychometrics into traditional pen-and-paper methods is a difficult process and depends on the experience of instructors. 

With online exams, you can easily capitalize on psychometrics through tech-enhanced items, automated test assembly, Computerized Adaptive Testing, and more.

6. Improved academic integrity

Cheating is the biggest concern when it comes to online exams. Most people wonder, ‘Isn’t giving students access to a computer and a high-speed internet connection just handing over answers to students?’ Well, that is far from the truth. 

In fact, online exams are safer than offline exams. They are protected using advanced technologies such as lockdown browsers, IP-based authentication, AI-powered flagging, and many other strategies.

 Check out this article to learn how online exams are secured.  

7. Environmental friendliness

Sustainability is an important aspect of modern civilization.  Online exams eliminate the need to use resources that are not environmentally friendly such as paper. 

The cons of online exams

1. Digital transformation challenges

The process of transitioning examinations from offline models to online platforms requires intense planning and resources. However, this barrier can be lowered by training students and instructors on how to capitalize on digital assessments.

You can also hire firms with experience in migrating to digital assessments to help in the process. 

2. Academic integrity concerns

Cheating concerns remain a turn-off for institutions that wish to transition to online exams. There are many ways students can circumvent security protocols to cheat in exams, including impersonation, external help, and surfing the internet for answers.

However, these cheating ‘tricks’ can be avoided using Assess’ online assessment software with security features such as lock-down browser, IP-based authentication, and AI-powered remote proctoring. 

Offline Exams vs Online exams



Are offline exams still worth it in 2021? For most purposes, no. As the sections above show, the traditional exam approach has several flaws that are barriers to effective assessment. However, it's not quite that simple: there are instances where the traditional approach is the better option, such as when students cannot afford the required infrastructure. But when you are looking to conduct high-stakes examinations, online exams are the best choice.

How Assess Can Help 

Transitioning from offline exams to online exams is not a simple task. That is why Assess is here to help you every step of the way, from test development to delivery. We provide you with the best assessment software and access to the most experienced team. Ready to take your assessments online?



Assessment is an important part of the learning process as it helps enhance the quality of learning outcomes. With the increased adoption of online assessments, especially during and after the Covid-19 pandemic,  it is important to put in place practices that ease the development of effective online assessments.


This is because well-crafted online assessments positively influence the ways in which students approach problems. Online assessments also provide many benefits compared to traditional assessments, including improved grading accuracy, accessibility, improved feedback methods, and many others.


But, developing effective online assessments is not an easy task. There are a lot of forces at play.


 This 2-part blog series aims to provide you with actionable tips and strategies that can help you improve your online assessments.


But before getting into the nitty-gritty, let's look at some characteristics of high-quality online exams.

Characteristics of High-quality online assessments

Here are some characteristics to look for in good online assessments:


  • Fair, defensible, and bias-free
  • Cost-effective and practical
  • Keep track of progress (Short-term and long-term)
  • Flexible and able to scale
  • Provide a real learning experience
  • Include a scoring system that reflects mastery, not just a gross score
  • Should provide diversified question types
  • Provide good feedback mechanisms and actionable insight
  • In alignment with the involved curriculum and standards
  • Reliable and accessible to everyone


Now let’s design some effective digital exams!

Use online quizzes to spot student misconceptions

Spotting knowledge gaps is an important part of assessment, and online quizzes can help with this. This approach involves giving lecture videos to students before class and then testing them on the material. Answers and feedback are given immediately, preferably with some guidance.


The preferable question types for this strategy are ‘fill-in-the-blank’ and MCQ. Each student is given 5 attempts and can score full marks if they answer correctly within those attempts.


The student responses can then be analyzed, especially the first attempts, and the insights used to improve the learning experience.

Capitalize on Adaptive Testing

The benefits of adaptive testing are too numerous to miss out on.


What is adaptive testing? Adaptive testing is the delivery of a test to an examinee using an algorithm that adapts the difficulty of the test to their ability.  It also adapts the number of items used (no need to waste their time). 

It’s like the High Jump in Track & Field.  


You start the bar in a middling-low position.  If you make it, the bar is raised.  This continues until you fail, and then it is dropped a little.  Or if you fail the first one, it can be dropped until you succeed.


Some benefits of integrating adaptive testing into your strategy include shorter tests, improved test security, individualized exam pacing, and increased motivation. For more information about adaptive testing, check out this blog post.



Want to start using adaptive testing? Contact us to get started right away. 

Choose the right online assessment tools

Choosing the right online assessment software to develop and deliver exams is unavoidable. Digital assessment software not only automates repetitive tasks and increases efficiency but also helps shape the learning process for better outcomes.


But with the crowded market of online assessment software, it can be difficult to find the right product. Choosing software that does not align with your assessment strategy can be catastrophic.


 A good online assessment platform should cater to your needs in every step of the test development cycle. It should offer the best functionality and have a world-class team at your disposal. 


Yet even that may not be enough to choose software that will help you develop effective online assessments. Here is the perfect resource to help you choose the appropriate tool.

Understand the ‘Why’ Of assessments

Having a clear definition of why you are developing an assessment is key to making it effective. John Biggs and Catherine Tang, in their constructive alignment theory, argue that assessment tasks (ATs) and teaching/learning activities (TLAs) should be designed so that students achieve the intended learning outcomes (ILOs).


 Assessments should be developed based on the ILOs for particular topics. Different learning outcomes are achieved when a variety of assessment types are used. 


Effective online assessments have clear goals and create a learning process that ensures students have a chance to self-learn, practice, and receive actionable feedback. 


Multiple Choice Questions

Multiple-choice questions are unavoidable when it comes to online assessments. They feature all the benefits an overwhelmed teacher could wish for in an exam: easy to develop, easy to deliver, and easy to score.  While they often have a poor reputation, they remain common for a reason: they provide the most bang for your buck in assessment, contributing to reliable and valid scores without soaking up too much time from teachers or students.


But, the art of developing effective MCQs is one that very few possess. To help you save some time here is how to develop effective multiple-choice questions.


  • Be strategic when developing MCQs. Many tutors fall into the trap of how simple MCQs are to create and lose their grip on the big picture. 
  • The responses should be direct. No fluff! 
  • Avoid options like ‘None of the above’ and ‘All of the above’ like the plague; slip them into an assessment only occasionally. 
  • Make them engaging. 
  • Language matters. A lot! Avoid grammatical errors and logical contradictions. 
  • Ensure consistency in both the correct answers and the distractors. 
  • Stems should be direct and clear. 
  • Eliminate barriers for students, such as sensitive issues or bias.


The effective online assessment checklist

That is a lot of information to digest, but here is a simple checklist to help you know if you are on track to develop online assessments that are effective.


  • What defines success in your online assessments? You should have well-established performance metrics to make sure that your examinees get the best out of the entire learning journey.
  • What type of feedback system do you have? The feedback method should help both the student and the teacher get something out of the examination process.
  • How diversified are your online assessments? You should offer different question types depending on learning objectives.
  • Do you have a good online assessment tool? A good online exam tool should provide all the necessary functionality to develop good exams. Feel free to check out this blog post to understand what makes a good online assessment software. 
  • Does your strategy involve peer assessments or self-assessments?
  • Do your assessments empower and motivate students to give their best?
  • To what extent do you involve your students in the assessment process? Make sure to involve them as much as possible. 
  • Are your exams in alignment with the best psychometrics and international standards?
  • How secure and defensible are your exams? Feel free to check out this resource to understand online exam security.
  • How often do you try different assessment strategies? You should keep running tests using different approaches to see what works and what doesn’t.

Final thoughts

Did you check most of the boxes in that list? If yes, you are on your way to developing effective online assessments. If not, try implementing the strategies discussed in the blog post. 

Developing good online exams is not an easy task. It requires a lot of dedication and time. 

If you get stuck, feel free to contact us and we will help you with your entire digital assessment journey. Not only do we have a complete suite of online assessment software to help you develop and deliver effective online exams, but also have an experienced team to guide you every step of the way. 


Resources for extra reading


Computerized adaptive testing (CAT; also known as adaptive testing, computer-adaptive testing, or adaptive assessment) is an AI-based technology that has existed since the 1980s. Yet most assessments in the world still don't capitalize on the benefits of adaptive testing.

There is, of course, a  large body of scientific research on the topic over the past 40+ years, but many organizations are not aware of the benefits of adaptive testing, do not have the expertise, or think it is simply too complex and expensive (spoiler alert: it isn’t!).

What is adaptive testing?

Adaptive testing is the delivery of a test to an examinee using an algorithm that adapts the difficulty of the test to their ability, as well as adapting the number of items used (no need to waste their time).  It is sort of like the High Jump in Track & Field.  You start the bar in a middling-low position.  If you make it, the bar is raised.  This continues until you fail, and then it is dropped a little.  Or if you fail the first one, it can be dropped until you succeed.
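The high-jump rule described above can be sketched as a toy step-up/step-down loop. This is only an illustration, not a production CAT algorithm (real engines select items using IRT item information at the current theta estimate); the Rasch-style response model and fixed step size here are assumptions made for the sketch.

```python
import math
import random

def staircase_cat(true_ability, n_items=20, start=0.0, step=0.5, seed=1):
    """Toy 'high jump' adaptive rule: raise the difficulty after a correct
    response, lower it after an incorrect one. Returns the list of
    (difficulty, correct) pairs administered."""
    rng = random.Random(seed)
    difficulty = start
    history = []
    for _ in range(n_items):
        # Rasch-style probability that this examinee answers correctly
        p_correct = 1.0 / (1.0 + math.exp(-(true_ability - difficulty)))
        correct = rng.random() < p_correct
        history.append((difficulty, correct))
        difficulty += step if correct else -step
    return history
```

After enough items, the administered difficulty oscillates around the examinee's ability, which is exactly the behavior the bar-raising analogy describes.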

For more info, visit this post.

Benefits of adaptive testing

As you might imagine, by making the test more intelligent, adaptive testing provides a wide range of advantages.  Some of the well-known benefits of adaptive testing, recognized by scholarly psychometric research, are listed below.

Shorter tests

Research has found that adaptive tests produce anywhere from a 50% to 90% reduction in test length.  This is no surprise.  Suppose you have a pool of 100 items.  A top student is practically guaranteed to get the easiest 70 correct; only the hardest 30 will make them think.  Vice versa for a low student.  Middle-ability students do not need the super-hard or the super-easy items.

Why does this matter?  Primarily, it can greatly reduce costs.  Suppose you are delivering 100,000 exams per year in testing centers, and you are paying $30/hour.  If you can cut your exam from 2 hours to 1 hour, you just saved $3,000,000.  Yes, there will be increased costs from the use of adaptive testing, but you will likely save money in the end.

For K-12 assessment, you aren't paying for seat time, but there is the opportunity cost of lost instruction time.  If students take formative assessments 3 times per year to check on progress, and you can shorten each by 20 minutes, that is 1 hour per student; if there are 500,000 students in your state, you just saved 500,000 hours of learning.
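The back-of-the-envelope savings in both scenarios can be checked directly; the figures below are the ones quoted in the text.

```python
# Testing-center scenario: 100,000 exams per year at $30 per seat-hour,
# with the exam shortened from 2 hours to 1 hour.
exams_per_year = 100_000
cost_per_seat_hour = 30
hours_saved_per_exam = 2 - 1
dollars_saved = exams_per_year * cost_per_seat_hour * hours_saved_per_exam
print(dollars_saved)  # 3000000

# K-12 scenario: 3 formative tests per year, each 20 minutes shorter,
# across 500,000 students.
students = 500_000
hours_saved_per_student = 3 * 20 / 60  # one full hour per student
print(students * hours_saved_per_student)  # 500000.0
```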

More precise scores

CAT will make tests more accurate, in general.  It does this by designing the algorithms specifically around how to get more accurate scores without wasting examinee time.

More control of score precision (accuracy)

CAT ensures that all students are measured with the same accuracy, making the test much fairer.  Traditional tests measure middle students well but not the top or bottom students.  Which is better: A) students see the same items but can have drastically different accuracy of scores, or B) students have equivalent accuracy of scores but see different items?

Better test security

Since all students are essentially getting an assessment that is tailored to them, there is better test security than everyone seeing the same 100 items.  Item exposure is greatly reduced; note, however, that this introduces its own challenges, and adaptive test algorithms have considerations of their own item exposure.

A better experience for examinees, with reduced fatigue

Adaptive tests tend to be less frustrating for examinees across all ranges of ability.  Moreover, implementing variable-length stopping rules (e.g., once we know you are a top student, we don't give you the 70 easy items) reduces fatigue.

Increased examinee motivation

Since examinees only see items relevant to them, this provides an appropriate challenge.  Low-ability examinees will feel more comfortable and get many more items correct than with a linear test.  High-ability students will get the difficult items that make them think.

Frequent retesting is possible

The whole “unique form” idea applies to the same student taking the same exam twice.  Suppose you take the test in September, at the beginning of a school year, and take the same one again in November to check your learning.  You’ve likely learned quite a bit and are higher on the ability range; you’ll get more difficult items, and therefore a new test.  If it was a linear test, you might see the same exact test.

This allows adaptive tests to be used quite frequently for formative assessments in schools; it is an ideal fit.

Individual pacing of tests

Examinees can move at their own speed.  Some might move quickly and be done in only 30 items.  Others might waver, also seeing 30 items but taking more time.  Still, others might see 60 items.  The algorithms can be designed to maximize the process.

Want to learn more?

Interested in learning more about the benefits of adaptive testing?  If you want a full book, I recommend Computerized Adaptive Testing: A Primer by Howard Wainer.  Prefer a short article for now?  Here’s my favorite.

Want to implement the benefits of adaptive testing?

The first step is to perform simulation studies to evaluate the potential benefits for your organization.  We can help you with those, or recommend software if you prefer to do it on your own.  Ready to develop and deliver your own adaptive tests?  Sign up for a free account on our platform!

The two terms Norm-Referenced and Criterion-Referenced are commonly used to describe tests, exams, and assessments.  They are often some of the first concepts learned when studying assessment and psychometrics.

Norm-referenced means that we are referencing how your score compares to other people.  Criterion-referenced means that we are referencing how your score compares to a criterion such as a cutscore or a body of knowledge.

Do we say a test is “Norm-Referenced” vs. “Criterion-Referenced”?

Actually, that’s a slight misuse.

The terms Norm-Referenced and Criterion-Referenced refer to score interpretations.  Most tests can actually be interpreted in both ways, though they are usually designed and validated for only one or the other.

Hence the shorthand of saying “this is a norm-referenced test,” even though that only names the primary intended interpretation.

Examples of Norm-Referenced vs. Criterion-Referenced

Suppose you received a score of 90% on a Math exam in school.  This could be interpreted in both ways.  If the cutscore was 80%, you clearly passed; that is the criterion-referenced interpretation.  If the average score was 75%, then you performed at the top of the class; this is the norm-referenced interpretation.  Same test, both interpretations.

What if the average score was 95%?  Well, that changes your norm-referenced interpretation (you are now below average) but the criterion-referenced interpretation does not change.
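The two interpretations of that 90% math score can be expressed in a few lines of code. The helper below is a hypothetical illustration, not a scoring standard; percentile rank here is simply the share of classmates scoring below you.

```python
def interpret(score, cutscore, class_scores):
    """Return both interpretations of one score: criterion-referenced
    (did it clear the cutscore?) and norm-referenced (percentile rank)."""
    passed = score >= cutscore
    percentile = 100.0 * sum(s < score for s in class_scores) / len(class_scores)
    return passed, percentile

# 90% against an 80% cutscore, in a class where most scores are lower:
passed, pct = interpret(90, 80, [60, 70, 75, 80, 90])
# If classmates all outscore you, `passed` stays the same but `pct` drops.
```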

Now consider a certification exam.  This is an example of a test that is specifically designed to be criterion-referenced.  It is supposed to measure that you have the knowledge and skills to practice in your profession.  It doesn’t matter whether all candidates pass or only a few candidates pass; the cutscore is the cutscore.

However, you could interpret your score by looking at your percentile rank compared to other examinees; it just doesn't impact the cutscore.

On the other hand, we have an IQ test.  There is no criterion-referenced cutscore of whether you are “smart” or “passed.”  Instead, the scores are located on the standard normal curve (mean=100, SD=15), and all interpretations are norm-referenced.  Namely, where do you stand compared to others?

Is this impacted by item response theory (IRT)?

If you have looked at item response theory (IRT), you know that it scores examinees on what is effectively the standard normal curve (though this is shifted if Rasch).  But, IRT-scored exams can still be criterion-referenced.  It can still be designed to measure a specific body of knowledge and have a cutscore that is fixed and stable over time.

Even computerized adaptive testing can be used like this.  An example is the NCLEX exam for nurses in the United States.  It is an adaptive test, but the cutscore is -0.18 (NCLEX-PN on Rasch scale) and it is most definitely criterion-referenced.

Building and validating an exam

The process of developing a high-quality assessment is surprisingly difficult and time-consuming. The greater the stakes, volume, and incentives for stakeholders, the more effort that goes into developing and validating.  ASC’s expert consultants can help you navigate these rough waters.

Want to develop smarter, stronger exams?

Fill out the form below to request a free account in our world-class platform, or talk to one of our psychometric experts.


Item discrimination is a psychometric concept regarding the quality of a test item, and the point-biserial coefficient is one of several indices for this concept.

What is item discrimination?

While the word “discrimination” has a negative connotation, it is actually a really good thing for an item to have.  It means that it is differentiating between examinees, which is entirely the reason that an assessment item exists.  If a math item on Fractions is good, then students with good knowledge of fractions will tend to get it correct, while students with poor knowledge will get it wrong.  If this isn’t the case, and the item is essentially producing random data, then it has no discrimination.  If the reverse is the case, then the discrimination will be negative.  This is a total red flag; it means that good students are getting the item wrong and poor students are getting it right, which almost always means that there is incorrect content or the item is miskeyed.

What is the point-biserial?

The point-biserial coefficient is a Pearson correlation between scores on the item (usually 0=wrong and 1=correct) and the total score on the test.  As such, it is sometimes called an item-total correlation.

Consider the example below.  There are 10 examinees that got the item wrong, and 10 that got it correct.  The scores are definitely higher for the Correct group.  If you fit a regression line, it would have a positive slope.  If you calculated a correlation, it would be around 0.10.
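Because the point-biserial is just a Pearson correlation with a dichotomous variable, it can be computed with nothing but the standard library. A minimal sketch (the data below are made up for illustration, not the 20-examinee example above):

```python
from statistics import mean, pstdev

def point_biserial(item_scores, total_scores):
    """Item-total correlation: Pearson r between 0/1 item scores
    and total test scores."""
    n = len(item_scores)
    mx, my = mean(item_scores), mean(total_scores)
    cov = sum((x - mx) * (y - my)
              for x, y in zip(item_scores, total_scores)) / n
    return cov / (pstdev(item_scores) * pstdev(total_scores))

# Wrong answers pair with lower totals, correct answers with higher ones:
r = point_biserial([0, 0, 1, 1], [10, 12, 14, 16])
```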


How do you calculate the point-biserial?

Since it is a Pearson correlation, you can easily calculate it with the CORREL function in Excel or similar software.  Of course, psychometric software like Iteman will also do it for you, and many more important things besides (e.g., the point-biserial for each of the incorrect options!).  This is an important step in item analysis.  The image below is example output from Iteman, where Rpbis is the point-biserial.  This item is very good, as it has a very high point-biserial for the correct answer and strongly negative point-biserials for the incorrect answers (which means the not-so-smart students are selecting them).


How do you interpret the point-biserial?

Well, most importantly consider the points above about near-zero and negative values.  Besides that, a minimal-quality item might have a point-biserial of 0.10, a good item of about 0.20, and strong items 0.30 or higher.  But, these can vary with sample size and other considerations.  Some constructs are easier to measure than others, which makes item discrimination higher.

Are there other indices of item discrimination?

There are two other indices commonly used in classical test theory.  There is the cousin of the point-biserial, the biserial.  There is also the top/bottom coefficient, where the sample is split into a high-performing group and a low-performing group based on total score, the P value (proportion correct) is calculated for each, and the two are subtracted.  So if 85% of top examinees got an item right and 60% of bottom examinees got it right, the index would be 0.25.
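The top/bottom index can likewise be sketched in a few lines. The 27% group size used as a default below is a common convention in item analysis, not a figure from the text:

```python
def upper_lower_index(item_scores, total_scores, fraction=0.27):
    """Discrimination as P(correct | top group) - P(correct | bottom group),
    where groups are the top and bottom fractions ranked by total score."""
    k = max(1, int(len(total_scores) * fraction))
    order = sorted(range(len(total_scores)), key=lambda i: total_scores[i])
    p = lambda idx: sum(item_scores[i] for i in idx) / len(idx)
    return p(order[-k:]) - p(order[:k])
```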

Of course, there is also the a parameter from item response theory.  There are a number of advantages to that approach, most notably that the classical indices try to fit a linear model on something that is patently nonlinear.  For more on IRT, I recommend a book like Embretson & Reise (2000).


The conditional standard error of measurement (CSEM) is a concept from psychometrics which seeks to characterize error in the process of measuring examinees on a test or assessment.

What is measurement error?

We can all agree that assessments are not perfect, from a 4th grade math quiz to a Psych 101 exam at university to a driver’s license test.  Suppose you got 80% on an exam today.  If we wiped your brain clean and you took the exam tomorrow, what score would you get?  Probably a little higher or lower.  Psychometricians consider you to have a true score which is what would happen if the test was perfect, you had no interruptions or distractions, and everything else fell into place.  But in reality, you of course do not get that score each time.  So psychometricians try to estimate the error in your score, and use this in various ways to improve the assessment and how scores are used.

The Original Approach: Classical Test Theory

Classical test theory (CTT) is a psychometric paradigm that is extremely useful in many situations, but generally oversimplifies things.  Its approach to measurement error certainly qualifies.  CTT assumes that measurement error – called the standard error of measurement – is the same for every examinee.  It is calculated as SEM = SD*sqrt(1-r), where SD is the standard deviation of raw scores on the exam, and r is the reliability of the exam.

Why conditional standard error of measurement?

Early researchers realized that this assumption is unreasonable.  Suppose that a test has a lot of easy questions.  It will therefore measure low-ability examinees quite well.  Imagine that it is a Math placement exam for university, and has a lot of Geometry and Algebra questions at a high school level.  It will measure students well who are at that level, but do a very poor job of measuring good students.

There was an initial suggestion of calculating a conditional standard error of measurement in classical test theory, but at that time, a new paradigm called item response theory was being developed.  It considered error to be a function of ability, not a single number.  In the previous example, the standard error for low students should be much less than the standard error for high students.

An example of this is shown below.  On the right is the conditional standard error of measurement function, and on the left is the test information function from which it is derived (the CSEM is the reciprocal of the square root of the test information).  Clearly, this test has a lot of items around -1.0 on the theta spectrum, which is around the 16th percentile.  Students above 1.0 (84th percentile) are not being measured well.


Standard error of measurement and test information function
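The relationship between test information and CSEM can be sketched in a few lines of Python.  This is a minimal illustration under the two-parameter logistic (2PL) model; the item parameters below are entirely hypothetical, contrived so that items cluster near theta = -1.0 as in the figure:

```python
import math

def item_information(theta: float, a: float, b: float) -> float:
    """Fisher information for a 2PL item at ability theta."""
    p = 1 / (1 + math.exp(-a * (theta - b)))  # probability of correct response
    return a ** 2 * p * (1 - p)

def csem(theta: float, items: list) -> float:
    """CSEM = 1 / sqrt(test information); test information is the
    sum of item information functions at that theta."""
    info = sum(item_information(theta, a, b) for a, b in items)
    return 1 / math.sqrt(info)

# Hypothetical test: 20 items centered at theta = -1.0, only 5 at 0.0
items = [(1.2, -1.0)] * 20 + [(1.0, 0.0)] * 5
print(round(csem(-1.0, items), 2))  # 0.35 (small error where items cluster)
print(round(csem(2.0, items), 2))   # 0.89 (large error for strong students)
```

The output mirrors the figure: measurement error is small where the items are concentrated and grows where they are sparse.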

How is CSEM used?

A useful way to think about conditional standard error of measurement is with confidence intervals.  Suppose your score on a test is 0.5 with item response theory.  If the CSEM is 0.25 (see above), then we can get an approximate 95% confidence interval by taking plus or minus 2 standard errors.  This means that we are 95% certain that your true score lies between 0.0 and 1.0.  For a theta of 2.5 with a CSEM of 0.5, that band is then 1.5 to 3.5 – which might seem wide, but remember that is roughly the 93rd percentile to well past the 99th.
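The band calculation above, plus the conversion from theta to an approximate percentile (assuming a standard normal ability distribution), can be sketched as:

```python
from statistics import NormalDist

def confidence_interval(theta: float, csem: float, z: float = 2.0):
    """Approximate confidence band: theta plus or minus z standard errors."""
    return theta - z * csem, theta + z * csem

def theta_to_percentile(theta: float) -> float:
    """Percentile rank of a theta score under a standard normal distribution."""
    return 100 * NormalDist().cdf(theta)

lo, hi = confidence_interval(0.5, 0.25)
print(lo, hi)                           # 0.0 1.0
print(round(theta_to_percentile(1.5)))  # 93
```

Using z = 1.96 instead of 2 gives the textbook 95% interval; 2 is the common rule of thumb used here.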

You will sometimes see scores reported in this manner.  I once saw a report on an IQ test that did not give a single score, but instead said “we can expect that 9 times out of 10 that you would score between X and Y.”

There are various ways to use the CSEM and related functions in the design of tests, including the assembly of parallel linear forms and the development of computerized adaptive tests. To learn more about this, I recommend you delve into a book on IRT, such as Embretson and Reise (2000).  That’s more than I can cover here.

The California Department of Human Resources (CalHR) has selected Assessment Systems Corporation (ASC) as its vendor for an online assessment platform. CalHR is responsible for personnel selection and hiring for many job roles in the State, and delivers hundreds of thousands of tests per year to job applicants. CalHR seeks to migrate to a modern cloud-based platform that allows it to manage large item banks, quickly publish new test forms, and deliver large-scale assessments that align with modern psychometrics like item response theory (IRT) and computerized adaptive testing (CAT).

ASC’s landmark assessment platform was selected as the solution for this project. ASC has been providing computerized assessment platforms with modern psychometric capabilities since the 1980s, and released this platform in 2019 as a successor to its industry-leading platform FastTest. It includes modules for item authoring, item review, automated item generation, test publishing, online delivery, and automated psychometric reporting.





There are many types of remote proctoring on the market, spread across dozens of vendors – including many new entrants that sought to capitalize on the pandemic and were not involved with assessment beforehand.  With so many options, how can you more effectively select amongst the types of remote proctoring?

What is remote proctoring?

Remote proctoring refers to the proctoring (invigilation) of educational or professional assessments when the proctor is not in the same room as the examinee.  This means that it is done with a video stream or recording, which is monitored by a human and/or AI.  It is also referred to as online proctoring.

Remote proctoring offers a compelling alternative to in-person proctoring, somewhere in between unproctored at-home tests and tests delivered in an expensive testing center.  This makes it a perfect fit for medium-stakes exams, such as university placement, pre-employment screening, and many types of certification/licensure tests.

What are the types of remote proctoring?

There are four types of remote proctoring, which can be adapted to a particular use case, sometimes varying between different tests in a single organization.  ASC supports all four types, and partners with 5 different vendors to help provide the best solution to our clients.  In descending order of security:


For each approach below, we first list what it entails for you, then what it entails for the candidate.

Live with professional proctors

  • You register a set of examinees in FastTest, and tell us when they are to take their exams and under what rules.
  • We provide the relevant information to the proctors.
  • You send all the necessary information to your examinees.
  • The most secure of the types of remote proctoring.
  • Examinee goes to the proctoring website, where they will initiate a chat with a proctor.
  • After confirmation of their identity and workspace, they are provided information on how to take the test.
  • The proctor then watches a video stream from their webcam as well as a phone on the side of the room, ensuring that the environment is secure. They do not see the screen, so your exam content is not exposed.
  • When the examinee is finished, they notify the proctor, and are excused.

Live, bring your own proctor (BYOP)

  • You upload examinees into FastTest, which will generate links.
  • You send relevant instructions and the links to examinees.
  • Your staff logs into the admin portal and awaits examinees.
  • Videos with AI flagging are available for later review if needed.
  • Examinee will click on a link, which launches the proctoring software.
  • An automated system check is performed.
  • The proctoring is launched.  Proctors ask the examinee to provide identity verification, then launch the test.
  • Examinee is watched on the webcam and screencast.  AI algorithms help to flag irregular behavior.
  • Examinee concludes the test.

Record and Review (with option for AI)

  • You upload examinees into FastTest, which will generate links.
  • You send relevant instructions and the links to examinees.
  • After examinees take the test, your staff (or ours) logs into review all the videos and report on any issues.  AI will automatically flag irregular behavior, making your reviews more time-efficient.


  • Examinee will click on a link, which launches the proctoring software.
  • An automated system check is performed.
  • The proctoring is launched.  System asks the examinee to provide identity verification, then launches the test.
  • Examinee is recorded on the webcam and screencast.  AI algorithms help to flag irregular behavior.
  • Examinee concludes the test.

AI only

  • You upload examinees into FastTest, which will generate links.
  • You send relevant instructions and the links to examinees.
  • Videos are stored for 1 month if you need to check any.


  • Examinee will click on a link, which launches the proctoring software.
  • An automated system check is performed.
  • The proctoring is launched.  System asks the examinee to provide identity verification, then launches the test.
  • Examinee is recorded on the webcam and screencast.  AI algorithms help to flag irregular behavior.
  • Examinee concludes the test.


Some case studies

We’ve worked with all types of remote proctoring, across many types of assessment:

  • ASC delivers high-stakes certification exams for a number of certification boards, in multiple countries, using live proctoring with professional proctors.  Some of these are available continuously on-demand, while others are on specific days when hundreds of candidates log in.
  • We partnered with a large university in South America, where their admissions exams were delivered using Bring Your Own Proctor, enabling them to drastically reduce costs by utilizing their own staff.
  • We partnered with a private company to provide AI-enhanced record-and-review proctoring for applicants, where ASC staff reviews the results and provides a report to the client.
  • We partner with an organization that delivers civil service exams for a country, and utilizes both unproctored and AI-only proctoring, differing across a range of exam titles.

How do I select a vendor?

First, determine the level of security necessary and the trade-off with costs: live proctoring with professionals can cost $20 to $100 or more per exam, while AI proctoring can be as little as a few dollars.  Next, evaluate some vendors to see which group they fall into; note that some vendors can do all of them!  After that, ask for some demos so you understand the business processes involved and the UX on the examinee side, both of which could substantially impact the soft costs for your organization.  Finally, start negotiating with the vendor you want!


Want some more information?

Get in touch with us, we’d love to show you a demo!




Finding good employees in an overcrowded market is a daunting task. In fact, according to research by CareerBuilder, 74% of employers admit to having hired the wrong person. Bad hires are not only expensive, but can also adversely affect cultural dynamics in the workforce. This is where pre-employment assessment software shows its value.

Pre-employment testing tools help companies create effective assessments, thus saving valuable resources, improving candidate experience & quality hire, and reducing hiring bias.  But, finding a pre-employment testing software that can help you reap these benefits can be difficult, especially because of the explosion of software solutions in the market.  If you are lost on which tools will help you develop and deliver your own pre-employment assessments, this guide is for you.

First things first: you need to understand the basics of pre-employment tests. 

What is a pre-employment test?

A pre-employment test refers to an examination given to job seekers before hiring. The main reasons for administering these tests include measuring important candidate attributes such as cognitive ability, job experience, and personality traits.  The popularity of pre-employment tests has skyrocketed in recent years because of their ability to help companies manage large volumes of candidate applications.  This helps increase quality of hire by providing access to a diversified network of professionals while eliminating roadblocks such as ‘resume spammers’.

Types of pre-employment tests

There are different types of pre-employment assessments. Each of them achieves a different goal in the hiring process. The major types of pre-employment assessments include:

Personality tests: Despite rapidly finding their way into HR, these types of pre-employment tests are widely misunderstood. Personality tests measure characteristics on the social spectrum.  One of their main goals is to predict the success of candidates based on behavioral traits.

Aptitude tests: Unlike personality tests or emotional intelligence tests, which tend to lie on the social spectrum, aptitude tests measure problem-solving, critical thinking, and agility.  These types of tests are popular because they predict job performance better than any other type: they tap into areas that cannot be found in resumes or job interviews.

Skills testing: These kinds of tests can be considered a measure of job experience, ranging from high-end skills to low-end skills such as typing or Microsoft Excel. Skills tests can either measure specific skills such as communication or generalized skills such as numeracy.

Emotional intelligence tests: These kinds of assessments are a newer concept but are becoming important in the HR industry. With strong emotional intelligence (EI) being associated with benefits such as improved workplace productivity and good leadership, many companies are investing heavily in developing these kinds of tests.  Although they can be administered to any candidate, they are best reserved for people seeking leadership positions or those expected to work in social contexts.

Risk tests: As the name suggests, these types of tests help companies reduce risk. Risk assessments offer assurance to employers that their workers will commit to established work ethics and not involve themselves in any activities that may cause harm to themselves or the organization.  There are different types of risk tests. Safety tests, which are popular in contexts such as construction, measure the likelihood of candidates engaging in activities that could cause them harm. Other common types of risk tests include integrity tests.

Pre-employment testing software: The Benefits 

Now that you have a good understanding of what pre-employment tests are, let’s discuss the benefits of integrating pre-employment assessment software into your hiring process. Here are some of the benefits:

Saves Valuable resources

Unlike lengthy and costly traditional hiring processes, pre-employment assessment software helps companies increase their ROI by eliminating HR snags such as mandatory face-to-face interactions or geographical restrictions. Pre-employment testing tools can also reduce the amount of time it takes to make good hires while reducing the risk of facing the financial consequences of a bad hire.

Supports Data-Driven Hiring Decisions

Data runs the modern world, and hiring is no different. You are better off letting complex algorithms crunch the numbers and help you decide which talent is a fit, as opposed to hiring based on a hunch. 

Pre-employment assessment software helps you analyze assessments and generate reports/visualizations to help you choose the right candidates from a large talent pool. 

Improving candidate experience 

Candidate experience is an important aspect of a company’s growth, especially considering that 69% of candidates admit they would not apply for a job at a company after having a negative experience. A good candidate experience means you get access to the best talent in the world.

Elimination of Human Bias

Traditional hiring processes are based on instinct. They are not effective since it’s easy for candidates to provide false information on their resumes and cover letters. 

But, the use of pre-employment assessment software has helped in eliminating this hurdle. The tools have leveled the playing ground, and only the best candidates are considered for a position. 

Need some help deciding how you can reap the mentioned benefits of pre-employment assessment software? Click the button below to get help.

What To Consider When Choosing pre-employment assessment software

Now that you have a clear idea of what pre-employment tests are and the benefits of integrating pre-employment assessment software into your hiring process, let’s see how you can find the right tools. 

Here are the most important things to consider when choosing the right pre-employment testing software for your organization.


Ease of use

The candidates should be your top priority when you are sourcing pre-employment assessment software, because ease of use directly correlates with a good candidate experience. Good software should have simple navigation and be easy to comprehend.

Here is a checklist to help you decide whether a pre-employment assessment software is easy to use:

  • Are the results easy to interpret?
  • What is the UI/UX like?
  • What ways does it use to automate tasks such as applicant management?
  • Does it have good documentation and an active community?

Test delivery (remote proctoring)

Remote proctoring (Courtesy of FastTest)

Good online assessment software should feature robust online proctoring functionality. Most remote jobs accept applications from all over the world, so it is advisable to choose pre-employment testing software with secure remote proctoring capabilities. Here are some things to look for in remote proctoring:

  • Does the platform support security processes such as IP-based authentication, lockdown browser, and AI-flagging?
  • What types of online proctoring does the software offer? Live real-time, AI review, or record and review?
  • Does it let you bring your own proctor?
  • Does it offer test analytics?

Test & data security, and compliance

Defensibility is what defines test security, and there are several layers of security associated with pre-employment tests. When evaluating this aspect, you should consider what the pre-employment testing software does to achieve the highest level of security, because data breaches are wildly expensive.

The first layer of security is the test itself. The software should support security technologies and frameworks such as lockdown browser, IP-flagging, and IP-based authentication. If you are interested in knowing how to secure your assessments, check this post out.

The other layer of security is on the candidate’s side. As an employer, you will have access to the candidate’s private information. How can you ensure that your candidate’s data is secure? That is reason enough to evaluate the software’s data protection and compliance guidelines.

A good pre-employment testing software should be compliant with regulations such as GDPR. The software should also be flexible enough to adapt to compliance guidelines from different parts of the world.

Questions you need to ask;

  • What mechanisms does the software employ to prevent cheating?
  • Is their remote proctoring function reliable and secure?
  • Are they compliant with security compliance guidelines including ISO, SSO, or GDPR?
  • How does the software protect user data?

User experience

A good user experience is a must-have when you are sourcing any enterprise software. Modern pre-employment testing software should create user experience maps with both the candidates and the employer in mind. Some ways you can tell if a software offers a seamless user experience include:

  • User-friendly interface
  • Simple and easy to interact with
  • Easy to create and manage item banks
  • Clean dashboard with advanced analytics and visualizations

Customizing your user-experience maps to fit candidates’ expectations attracts high-quality talent. 

Scalability and automation

With a single job post attracting approximately 250 candidates, scalability isn’t something you should overlook. A good pre-employment testing software should thus have the ability to handle any kind of workload, without sacrificing assessment quality. 

It is also important you check the automation capabilities of the software. The hiring process has many repetitive tasks that can be automated with technologies such as Machine learning, Artificial Intelligence (AI), and robotic process automation (RPA).  

Here are some questions you should consider in relation to scalability and automation; 

  • Does the software offer Automated Item Generation (AIG)?
  • How many candidates can it handle? 
  • Can it support candidates from different locations worldwide?

Reporting and analytics

Reporting and visualization

Example of reporting and visualization functionality to help you make data-driven hiring decisions (Courtesy of FastTest)

A good pre-employment assessment software will not leave you hanging after helping you develop and deliver the tests. It will enable you to derive important insight from the assessments.

The analytics reports can then be used to make data-driven decisions on which candidate is suitable and how to improve candidate experience. Here are some queries to make on reporting and analytics;

  • Does the software have a good dashboard?
  • What format are reports generated in?
  • What are some key insights that prospects can gather from the analytics process?
  • How good are the visualizations?

Customer and Technical Support

Customer and technical support is not something you should overlook. A good pre-employment assessment software should have an Omni-channel support system that is available 24/7. This is mainly because some situations need a fast response. Here are some of the questions your should ask when vetting customer and technical support;

  • What channels of support does the software offer? How prompt is their support?
  • How good is their FAQ/resources page?
  • Do they offer multi-language support mediums?
  • Do they have dedicated managers to help you get the best out of your tests?


Finding the right pre-employment testing software is a lengthy process, yet profitable in the long run. We hope the article sheds some light on the important aspects to look for when looking for such tools. Also, don’t forget to take a pragmatic approach when implementing such tools into your hiring process.

Are you stuck on how you can use pre-employment testing tools to improve your hiring process? Feel free to contact us and we will guide you through the entire process, from concept development to implementation. Whether you need off-the-shelf tests or a comprehensive platform to build your own exams, we can provide the guidance you need.  We also offer free versions of our industry-leading software FastTest – visit our Contact Us page to get started!


Estimated reading time: 7 minutes

The impact of artificial intelligence in education is hard to quantify, but it is game-changing. AI is making the world a better place by powering innovations across all industries, including healthcare and education. Learning is an important part of being a contributing member of society, and AI is changing how we learn. We have yet to see humanoid robot teachers as depicted in science fiction, but AI is rapidly finding its way into the education industry.

Despite the rapid advancement in technology, most education systems used today were designed during the industrial age. How, then, is AI trying to beat Donald Clark’s claim that “Education is a bit of a slow learner”?

From smart tutoring and smart content to facial recognition and AI flagging for online assessment security, AI is making a positive impact in education. This blog post aims to get down to the brass tacks of AI in education.

Let’s jump right in!

How is AI changing the education industry?

Artificial intelligence has an impact on education in many ways, but here are the most dominant ones: 

AI is Powering personalized learning

Personalized learning has proven to boost engagement and improve results among students, and AI systems play an important role in elevating this experience. We all have different learning abilities, and what works for one student may not work for another. Machine learning solutions can be tailored so that students at different levels of the learning spectrum can thrive in one class.

While it’s impossible for teachers to provide a personalized learning experience to every student, AI systems can easily provide individual learning opportunities. Machine learning algorithms can predict outcomes, thereby allowing tutors to provide content based on learners’ individual goals and past performance.

AI systems not only help identify student strengths and weaknesses but also provide reports about the effectiveness of learning methods.  These reports can then be used to derive ways to boost learning engagement and improve test results.

Task automation

According to a new study, teachers spend only 43% of their time actually teaching, with 11% spent marking tests, 7% on administrative tasks, and 13% on planning lessons. This is tiresome and affects teachers’ ability to provide students with a good learning experience. This is where AI systems come in handy.

The internet is full of systems that help teachers automate learning tasks, including lesson planning, generating progress reports, and so much more. Some platforms, for example, leverage the power of AI to provide tutors with the ability to create, manage, and grade personalized online tests.


This helps tutors to be more productive, and a lot of resources are saved in the process. 

Online Assessment Security and effectiveness

Assessment is the most important part of the learning process. Online assessments have become popular across K12 and Higher-Ed and AI systems are playing an important role in making sure that they are effective and secure. 

Here are some interesting ways AI is positively influencing online exams:

  • AI flagging

AI flagging improves assessment security by helping supervisors spot suspicious behavior, using the audio and video captured during the examination period. The human proctor can then take action depending on governing policies. Some signs that may indicate cheating include background audio, extra faces on the screen, and suspicious body language.

  • AI-powered remote proctoring

Remote proctoring allows students to take online exams from anywhere while safeguarding exam integrity. All a student needs is a computer with a webcam.  AI has revolutionized how online exams are managed by powering functionality such as live streaming, lockdown browsers, and so much more!

  • Automated Item Generation

AI plays an important role in automated item generation. While AI systems that can generate entire item banks automatically are not yet available (we may see these in the near future), some platforms provide users with intelligent templates which can be customized to increase effectiveness.

AI systems are playing an important role in making online exams more effective, and we will see even more of this in the near future.

Facial recognition 

Facial recognition is not only useful to national security organizations for identifying criminals in crowds; it is also helping improve the education industry in many ways. Facial recognition is gaining momentum in educational institutions as an identification mechanism, which helps increase student safety by identifying people who pose a threat to students.

Facial recognition is also being used in some institutions to improve the learning process. The systems used in this process collect insight by reading facial cues to determine how students feel during a lesson.  These systems are also used during online exams to uphold academic integrity.

Smart Content

What is smart content? 

Sometimes referred to as “dynamic content”, smart content refers to a diversified collection of learning material including video lectures, textbooks, and digitized textbook guides.

The main aim of smart content is to ease the learning process. 

AI-powered smart content solutions provide real-time feedback, practice exams, interactive content, and many other functions aimed at breaking down complex ideas into easily digestible chunks. This helps students achieve their academic goals much faster. 

Understanding and changing learning experiences

AI is not only helping in understanding the learning process but also changing it.

AI systems and machine learning algorithms collect data from learning ecosystems and use it to identify learning methods that work and those that don’t. 

Other ways AI systems are changing the learning experience include automatic grading, augmenting recruitment processes, and providing constructive criticisms to students while helping them achieve their personalized goals. 

With a projected market revenue of up to $120 billion by 2021, AI tutoring is yet another way to increase student retention by providing personalized learning. AI tutoring comes in many forms, including chatbots and virtual reality.

Artificial intelligence in education; The Flip Side

Just like any innovation ever made, AI has its dark side. 

However, these are not the over-exaggerated dark sides we see in science fiction, where AI tries to wipe out the entire human race.


Here are some drawbacks of using AI in education: 

AI is expensive: The custom development and maintenance of education AI systems can be very expensive. This limits its use to well-funded institutions. However, there are a lot of budget-friendly AI education solutions on the internet that you can easily adopt.  

Data Loss and Information Security: Cybercrimes are on the rise and AI systems are no exception. 

Risk of Addiction: Technology can be addictive and this may affect the performance of some students.

Lack of human connection: AI systems can have a negative effect on the psychological well-being of students. 

The Future of AI in education

AI is growing exponentially, and its future in the education industry is bright. The impact of artificial intelligence in education is immense: not only does it increase efficiency in the learning process, but it also continues to revolutionize education in ways we cannot imagine.

Despite the many concerns about AI systems not being able to connect emotionally with people or replacing educators, we believe it will open a portal of opportunities in the industry. 

Are you interested in getting access to AI-powered online testing solutions? Feel free to check out our solutions and consulting services, ranging from Test Development and Computerized Adaptive Testing to Remote Proctoring, Item Banking, and so much more!