The United States Congress recently passed the 2016 National Defense Authorization Act (NDAA).  The Act provides policies and authorizations for fiscal year 2016 but no budgetary appropriations, which come in subsequent legislation.  Acts like this are passed all the time, and they rarely have a direct impact on the field of testing and assessment.

This one is different.

There is one section that specifically addresses a major portion of the testing industry.  Section 559 of the 2016 NDAA (full text below) requires that professional certifications obtained by members of the armed forces be accredited by an independent third party.  For a discussion of the difference between certification and accreditation, read this post, but here’s more on this specific Act.

What does this mean for military certifications?

The US military spends a lot of money helping its people gain relevant skills, both for their roles within the military and for transferring those skills to the civilian world.  If you work in information technology in the military, you probably want one of the many IT certifications.  If you work as a medical assistant, you probably want a certification.  If you are about to leave the military and need to find a job, a certification is obviously beneficial, and the government will pay for it before you leave.  This is all good: we want members of the military to make a smooth transition to civilian life and land a well-paying job that they like.

Of course, the private sector saw an opportunity.  There was nothing to stop someone from sitting in their basement, writing 50 test questions on medical assisting, putting them up on a website as a certification… and charging the US government hundreds of dollars per candidate.  The legislation below is intended to stop that: an independent third party must come in and give your certification a stamp of approval confirming that it has been built according to generally accepted standards.  This is quite reasonable.  The only disconcerting piece is that the timeline is three years, meaning that fly-by-night operations can keep going for now.

Who are these third parties?  There are really two:

  • NCCA (National Commission for Certifying Agencies)
  • ANSI (American National Standards Institute)

Want to learn more about accreditation?

Do you have a certification program that you are looking to get accredited, for this or any other reason?  We specialize in helping groups like yours.  Just contact us and one of our consultants will get in touch with you and explain more about what is needed.


SEC. 559. QUALITY ASSURANCE OF CERTIFICATION PROGRAMS AND STANDARDS FOR PROFESSIONAL CREDENTIALS OBTAINED BY MEMBERS OF THE ARMED FORCES.

Section 2015 of title 10, United States Code, as amended by section 551 of the Carl Levin and Howard P. “Buck” McKeon National Defense Authorization Act for Fiscal Year 2015 (Public Law 113–291; 128 Stat. 3376), is further amended—

(1) by redesignating subsections (c) and (d) as subsections (d) and (e), respectively; and

(2) by inserting after subsection (b) the following new subsection (c):

“(c) Quality Assurance Of Certification Programs And Standards.— (1) Commencing not later than three years after the date of the enactment of the National Defense Authorization Act for Fiscal Year 2016, each Secretary concerned shall ensure that any credentialing program used in connection with the program under subsection (a) is accredited by an accreditation body that meets the requirements specified in paragraph (2).

“(2) The requirements for accreditation bodies specified in this paragraph are requirements that an accreditation body—

“(A) be an independent body that has in place mechanisms to ensure objectivity and impartiality in its accreditation activities;

“(B) meet a recognized national or international standard that directs its policy and procedures regarding accreditation;

“(C) apply a recognized national or international certification standard in making its accreditation decisions regarding certification bodies and programs;

“(D) conduct on-site visits, as applicable, to verify the documents and records submitted by credentialing bodies for accreditation;

“(E) have in place policies and procedures to ensure due process when addressing complaints and appeals regarding its accreditation activities;

“(F) conduct regular training to ensure consistent and reliable decisions among reviewers conducting accreditations; and

“(G) meet such other criteria as the Secretary concerned considers appropriate in order to ensure quality in its accreditation activities.”.


If you are looking around for an online testing platform, you have certainly discovered there are many out there.  How do you choose?  Well, for starters, it is important to understand that the range of quality and sophistication is incredible.  This post will help you identify some salient features and benefits that you should consider in first establishing your requirements and then selecting an online testing platform.

 

Types of online testing platforms

There are many, many systems in existence that can perform some sort of online assessment delivery.  From a high level, they differ substantially in terms of sophistication.

  1. There are, of course, many survey engines that are designed for just that: unproctored, unscored surveys.  No stakes involved, no quality required.
  2. At a slightly higher level are platforms for simple or formative assessment.  For example, a learning management system will typically include a component to deliver multiple choice questions.  However, this is obviously not its strength – you should no more rely on an LMS as your assessment platform than you should use an assessment platform as an LMS.
  3. At the top end of the spectrum are assessment platforms that are designed for real assessment.  That is, they implement best practices in the field, like Angoff studies, item response theory, and adaptive testing.  The type of assessment effort being done by school teachers is quite different from that being done by a company producing high-stakes international tests.  FastTest is an example of such a platform.

This post describes some of the aspects that separate the third level from the lower two levels.  If you need these aspects, then you likely need a “Level 3” system.  Of course, there are many testing situations where such a high quality system is complete overkill.

Another consideration is that many testing platforms are closed-content.  That is, they are 100% proprietary and used only within the organizations that built them.  You likely need an open-content system, which allows anyone to build and deliver tests.

 

Test development aspects

Reusable items: If you write an item for this semester’s test, you should be able to easily reuse it next semester, or let another instructor use it.  Surprisingly, many systems lack this basic functionality.

Item metadata: All items should have extensive metadata fields, such as author, content area, depth of knowledge, etc.  The system should also store item statistics or IRT parameters, and more importantly, actually use them.  For example, test form assembly should utilize item statistics to evaluate form difficulty and reliability.
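
To make this concrete, here is a rough sketch of the kind of item record a banking system might store; the field names and values are purely illustrative, not the schema of any particular platform.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Item:
    """Hypothetical item-bank record; all field names are illustrative only."""
    item_id: str
    stem: str
    options: list[str]
    key: str                                  # correct answer
    author: str
    content_area: str
    depth_of_knowledge: int                   # e.g., 1-4
    status: str = "draft"                     # draft / reviewed / retired
    p_value: Optional[float] = None           # classical difficulty (proportion correct)
    point_biserial: Optional[float] = None    # classical discrimination
    irt_params: dict[str, float] = field(default_factory=dict)  # e.g., 3PL a, b, c

# A reviewed item with statistics carried over from a prior administration,
# so that form assembly can use difficulty and discrimination directly.
item = Item(
    item_id="MA-0042",
    stem="Which instrument is used to measure blood pressure?",
    options=["Thermometer", "Sphygmomanometer", "Otoscope", "Stethoscope"],
    key="B",
    author="jsmith",
    content_area="Clinical Procedures",
    depth_of_knowledge=1,
    status="reviewed",
    p_value=0.72,
    point_biserial=0.31,
    irt_params={"a": 1.1, "b": -0.4, "c": 0.20},
)
```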

Form assembly: The system needs advanced functionality for form assembly, including extensive search functionality, automation, support for parallel forms, etc.

Publication options: The system needs to support all the publication situations that your organization might use: paper vs. online, implementation of time limits, control of widgets like calculators, etc.

Item types: The field of assessment is moving quickly towards technology-enhanced items (TEIs) in an effort to assess more deeply and authentically.  While TEIs bring challenges, online testing platforms obviously need to support them.

Best practices in online testing

Supports standards: The system’s workflow and reports should facilitate compliance with industry standards like APA/AERA/NCME, NCCA, ANSI, and ITC.

Psychometrics:  Psychometrics is the Science of Assessment.  The system needs to implement real psychometrics like item response theory, computerized adaptive testing, distractor analysis, and test fraud analysis.  Note: just looking at P values and point-biserials is extremely outdated.

Logs: The online testing platform should log user and examinee activity, and have it available for queries and reports.

Reporting: You need reports on various aspects of the system, including item banks, users, tests, examinees, and domains.

 

System aspects

Security: Security is obviously essential to high stakes testing organizations.  There are many aspects though: user roles, content access control, browser lockdown, options for virtual proctoring, examinee access, proctor management, and more.

Reliability: The online testing platform needs to have very little downtime.

Scalability: The online testing platform needs to be able to scale up to large volumes.

Configurability: The functionality throughout the system needs to be configurable to meet the needs of your organization and individual users.

 

Learn more!

Contact solutions@assess.com to learn how a professional online testing platform can positively impact your organization, or click the button below to see our Contact Us page.

 

Education, to me, is the never-ending opportunity we have for a cycle of instruction and assessment.  This can range from extremely small scale (watching a YouTube video on how to change a bike tire, then doing it) to large scale (teaching a 5th grade math curriculum and then assessing it nationwide).  Psychometrics is the Science of Assessment: using scientific principles to make the assessment side of that equation more efficient, accurate, and defensible.  How can psychometrics, especially its intersection with technology, improve your assessment?  Here are 10 important avenues to improve assessment with psychometrics.

10 ways to improve assessment with psychometrics

  • Job analysis: If you are doing assessment of anything job-related, from pre-employment screening tests of basic skills to a nationwide licensure exam for a high-profile profession, a job analysis is the essential first step.  It uses a range of scientifically vetted, quantitative approaches to help you define the scope of the exam.
  • Standard-setting studies: If a test has a cutscore, you need a defensible method to set that cutscore.  Simply selecting a round number like 70% is asking for a disaster.  There are a number of approaches from the scientific literature that will improve this process, including the Angoff method and Contrasting Groups method.
  • Technology-Enhanced Items (TEIs): These item types leverage the power of computers to change assessment by moving the medium from multiple-choice recall questions to questions that evaluate deeper thinking.  Substantial research exists on these, but don’t forget to establish a valid scoring algorithm!
  • Workflow management: Items are the basic building blocks of the assessment.  If they are not high quality, everything else is a moot point.  There need to be formal processes in place to develop and review test questions.
  • Linking: Linking and equating refer to the process of statistically determining comparable scores on different forms of an exam, including tracking a scale across years and completely different sets of items.  If you have multiple test forms or track performance across time, you need this.  And IRT provides far superior methodologies.
  • Automated test assembly: The assembly of test forms – selecting items to match blueprints – can be incredibly laborious.  That’s why we have algorithms to do it for you.  Check out TestAssembler.
  • Distractor analysis: If you are using items with selected responses (including multiple choice, multiple response, and Likert), a distractor/option analysis is essential to determine if those basic building blocks are indeed up to snuff.  Our reporting platform in FastTest, as well as software like Iteman and Xcalibre, is designed for this purpose.
  • Item response theory (IRT): This is the modern paradigm for developing large-scale assessments.  Most important exams in the world over the past 40 years have used it, across all areas of assessment: licensure, certification, K12 education, postsecondary education, language, medicine, psychology, pre-employment… the trend is clear.  For good reason.
  • Automated essay scoring: This technology is just becoming more widely available, thanks to a public contest hosted by Kaggle.  If your organization scores large volumes of essays, you should probably consider this.
  • Computerized adaptive testing (CAT):  Tests should be smart.  CAT makes them so.  Why waste vast amounts of examinee time on items that don’t contribute?  There are many other advantages too.
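
To make that last point concrete, here is a minimal sketch (not any production algorithm) of one common CAT item-selection rule under a two-parameter logistic (2PL) IRT model: administer whichever unused item has the greatest Fisher information at the examinee’s current ability estimate.  The item parameters are made up.

```python
import math

def prob_2pl(theta: float, a: float, b: float) -> float:
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def information(theta: float, a: float, b: float) -> float:
    """Fisher information of a 2PL item at theta: a^2 * P * (1 - P)."""
    p = prob_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def select_next_item(theta: float, bank: list[dict], administered: set[str]) -> dict:
    """Pick the not-yet-administered item with maximum information at theta."""
    remaining = [item for item in bank if item["id"] not in administered]
    return max(remaining, key=lambda item: information(theta, item["a"], item["b"]))

# Toy bank with hypothetical 2PL parameters
bank = [
    {"id": "Q1", "a": 0.8, "b": -1.0},
    {"id": "Q2", "a": 1.4, "b":  0.1},
    {"id": "Q3", "a": 1.1, "b":  1.2},
]
print(select_next_item(theta=0.0, bank=bank, administered={"Q1"}))  # selects Q2
```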


Assessment Systems will be attending the 2014 CLEAR Annual Educational Conference, held Sept. 11-13, 2014 at the New Orleans Marriott. The annual conference is attended by over 400 members of the regulatory community, representing countries from all over the world.

This year Assessment Systems will be showcasing our latest technologies for organizations that administer licensure/certificate examinations:

  • The flexible Credentialing Management System (CMS), Certifior™ – ideal for managing higher-stakes assessments such as a professional regulatory licensure/certificate program, including the management of your own test location network
  • Computer-Based Online Testing with the FastTest™  System – allowing you to develop and administer tests online of much higher quality
  • Tech-enhanced item formats for more effective and engaging assessment
  • The latest in test security enhancements, such as remote proctoring and browser lockdown options

Be sure to stop by Booth 14 and speak with Sean Finn or Christa Zaspel about how your organization could benefit from Assessment Systems. To learn more about attending the 2014 CLEAR Conference, please click here. We are excited to see you soon in the beautiful French Quarter!


Background of the NCCA Annual Report

As a way of ensuring that accredited certification programs continue to provide high-quality certifications, the National Commission for Certifying Agencies (NCCA) requires the submission of an NCCA annual report.  The report includes operational information, but also statistics regarding the psychometric performance of your exams.  Psychometrics remains a black box to many certification professionals, so I provide some explanations below on the required statistics.  Note that these statistics must be reported for each form of each exam, separated by certification program.  So if you offer four certifications, each with two forms, you are going to have to calculate and submit eight sets of statistics.

NCCA provides two vital resources for this process at the links below.

Annual Report Form (www.credentialingexcellence.org/d/do/66): this is what you fill out and submit.

Sample Annual Report (www.credentialingexcellence.org/d/do/65): this is filled with imaginary example data, but is very useful as a guideline.

Each requirement on the annual report form is listed below with an explanation and an example.

Form name or number
Explanation: The name you use to keep track of the exam form.
Example: Suppose you had two forms, MA2014-1 and MA2014-2.

Total # of candidates tested on this exam form in 20xx
Explanation: Simply the number of people who took this form during the given time period.
Example: 1,234

% of candidates passing in 20xx
Explanation: The pass rate of the form: (number passing / number of candidates) x 100.
Example: Suppose 802 passed out of the 1,234; then your pass rate is 65%.

Passing point
Explanation: Also known as the cutscore, this is the score needed to pass the exam.
Example: If you have 100 items and candidates need 72 correct to pass, then this is 72.

Average score
Explanation: The average (mean) score of everyone who took this exam during the given time period.
Example: 75.25

Standard deviation
Explanation: The standard deviation is an index of the spread of scores.  If this number is small, most examinees had scores near the average; if it is large, examinees had a wide range of scores.
Example: If you have 100 items, an SD of 3.2 would be fairly small, while an SD of 18.4 would be considered large.

Standard error of measurement (SEM)
Explanation: A large SEM means high error and therefore low accuracy, so lower is better.  There are two ways to calculate the SEM, depending on the psychometric approach used by your organization.  If you use classical test theory, the SEM is simply SEM = SD * sqrt(1 - Reliability).  If you use IRT, the SEM is based on far more complex calculations beyond the scope of this post, and is a continuous function rather than a single index; you also have the option of submitting the classical SEM, since you have to calculate the classical reliability anyway (see below).
Example: Suppose you have an SD of 5.4 and a reliability of 0.92.  Then SEM = 5.4 * sqrt(1 - 0.92) = 1.527.  The SEM is fairly small because the reliability is good.

Decision consistency estimate (of pass/fail decisions)
Explanation: The proportion of candidates who would receive a consistent pass/fail decision if they took the test again.  Again, there are two options.  Classical test theory programs use an index that ranges from 0 to 1, with 1 being perfect; there are several such indices, but common ones are Livingston, Huynh, and Subkoviak.  (Though actually, van der Linden and Mellenbergh proved that the reliability coefficient should be used here.)  IRT-based programs have the option of submitting the value of the SEM function at the cutscore.
Example: A value of 0.94 would mean that we expect 94% of candidates to receive a consistent pass/fail decision if they took the test again.  A value of 0.32 would mean that we expect that level of variation in IRT (theta) scores near the cutscore; above 0.50 is relatively inaccurate.

Reliability estimate (of test scores)
Explanation: Reliability attempts to boil down the quality of your entire assessment into a single number between 0 and 1.  A reliability of 0 means random numbers, while 1 is perfect measurement.  Obviously, you lose some important information by boiling down a complex assessment process to a single number, but it is highly convenient, so it is ubiquitous.  Need to raise it?  Either add more scored items to the test or increase the quality of your items.
Example: Below 0.7 is generally regarded as unacceptable; above 0.7 is generally regarded as acceptable; above 0.9 is regarded as good (accurate scores).

Total number of items on exam
Explanation: The number of scored items on the exam.
Example: Suppose you had 100 items that count towards the score plus 20 pilot items.  This submission should then be 100.

NCCA also provides the following guidelines in footnotes

For Passing Point, Average Score, Standard Deviation, and Standard Error of Measurement, you must state the scale or metric that you use in the NCCA annual report.  For example, if you score all your tests by counting the number of items correct and report that to candidates, these four figures should all be calculated on number-correct scores.  If you use raw IRT scoring, with a bell curve that has a mean of 0.0 and an SD of 1.0, then these four figures should be calculated on those scores.  If you convert all your scores to scaled scores (for example, how university admissions tests often use a scale of 200 to 800), then calculate them using those scores.  The choice is partly up to you and your psychometrician; the actual choice does not matter as much as being consistent, because otherwise it is difficult for the NCCA evaluators to conceptualize the performance of your exam.

For Decision Consistency, you need to note whether you are using the classical approach (index 0 to 1) or the IRT approach (SEM at cutscore).  If using classical, please note the name of the index (Livingston, Huynh, Subkoviak…).

For Reliability estimate, there are also several indices that could be used, such as alpha/KR20, alternative forms, and split-half with Spearman-Brown correction.  Note which one you use.  Alpha/KR20 is by far the most common.

Most tests are of fixed length, e.g., every candidate receives 100 items.  Very large certification programs will sometimes use adaptive testing, which is based on complex algorithms, and not every candidate receives the same number of items.  If this is the case, you need to provide the possible range; the Total Number of Items is then the average number of items seen by examinees.

Example

The following table provides statistical information in the format required for the NCCA annual report.  This is only an example; as discussed above, there are sometimes a few ways you can approach a certain column.

Table 1: Test Summary Statistics for Each Test Form

| Test Form Name | N Candidates | N Passed | Passing Point | Average Score | Standard Deviation | SEM | Decision Consistency | Reliability | Items |
|---|---|---|---|---|---|---|---|---|---|
| CBA 2014-1 | 978 | 645 | 72 | 75.94 | 9.03 | 2.71 | 0.86 | 0.91 | 100 |
| 2014-2 | 963 | 638 | 72 | 76.13 | 8.89 | 2.51 | 0.88 | 0.92 | 100 |

Average score, standard deviation, SEM, and passing point are all reported on the raw number-correct score metric.

Decision consistency index is the Livingston coefficient.

Reliability is estimated by coefficient alpha.

OK, now I need to get all these statistics.  Where do I find them?

Your psychometrician should report them to you.  Alternatively, you can calculate them in-house if you have any psychometric expertise.  If you prefer to have them calculated for you, we recommend you utilize our Certifior platform for credential management and delivery.  We have an automated report that provides you with all the necessary information.
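
If you have the scored (0/1) item responses in hand, most of these figures take only a few lines of code.  Here is a minimal sketch assuming the classical approach: coefficient alpha for reliability, the classical SEM formula from the table above, and Livingston’s index (one of the several acceptable options) for decision consistency.  The data are simulated and the function is purely illustrative.

```python
import numpy as np

def annual_report_stats(responses: np.ndarray, cutscore: int) -> dict:
    """Classical summary statistics for a candidates-by-items matrix of 0/1 scores."""
    scores = responses.sum(axis=1)                 # number-correct scores
    n, k = responses.shape
    mean, sd = scores.mean(), scores.std(ddof=1)
    # Coefficient alpha (equivalent to KR-20 for dichotomous items)
    alpha = (k / (k - 1)) * (1 - responses.var(axis=0, ddof=1).sum() / scores.var(ddof=1))
    sem = sd * np.sqrt(1 - alpha)                  # classical SEM = SD * sqrt(1 - reliability)
    # Livingston's decision-consistency coefficient at the cutscore
    consistency = (alpha * sd**2 + (mean - cutscore)**2) / (sd**2 + (mean - cutscore)**2)
    return {
        "N candidates": n,
        "Pass rate (%)": round(float(100 * (scores >= cutscore).mean()), 1),
        "Passing point": cutscore,
        "Average score": round(float(mean), 2),
        "SD": round(float(sd), 2),
        "SEM": round(float(sem), 2),
        "Decision consistency": round(float(consistency), 2),
        "Reliability (alpha)": round(float(alpha), 2),
        "Items": k,
    }

# Simulated example: 1,000 candidates, 100 items, cutscore of 72
rng = np.random.default_rng(0)
ability = rng.normal(0.75, 0.10, size=(1000, 1))   # per-candidate probability of success
responses = (rng.random((1000, 100)) < ability).astype(int)
print(annual_report_stats(responses, cutscore=72))
```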


Certification… Certificate… Accreditation

These three terms might seem similar, but they mean very different things. The Institute for Credentialing Excellence, the association of certification associations and therefore the leader in the field, defines the first two as quoted below.  Accreditation is something that is applied to a Certificate or Certification program.  Note that a Certificate is based on instruction/training that your organization might provide; to be a Certification, and to gain accreditation, your test cannot require candidates to take your training.

Certification

Professional or personnel certification is a voluntary process by which individuals are evaluated against predetermined standards for knowledge, skills, or competencies. Participants who demonstrate that they meet the standards by successfully completing the assessment process are granted the certification.

Certificate

An assessment-based certificate program is a non-degree granting program that:
(a) provides instruction and training to aid participants in acquiring specific knowledge, skills, and/or competencies associated with intended learning outcomes;
(b) evaluates participants’ achievement of the intended learning outcomes; and
(c) awards a certificate only to those participants who meet the performance, proficiency or passing standard for the assessment(s).

Accreditation

Accreditation says that your Certification or Certificate program meets best practices. Only a minority of the accreditation guidelines refer to aspects of your test, such as cutscores and reliability. The rest pertain to aspects such as board governance, eligibility pathways, security policies, handbooks, and corporate finance. We can help you navigate these waters, in addition to the technical test-related aspects.

What is psychometrics?

Psychometrics is the science of assessment, that is, the testing of psychoeducational variables.  It is often confused with psychological assessment, but it is actually far wider. Psychometrics studies the assessment process itself (what makes a good test?) regardless of what the test is about.  As such, it covers many areas of testing, from K-12 math exams to certification of accountants to assessment of basic job skills to university admissions, and much more.

Psychometrics is an essential aspect of assessment, but to most people it remains a black box. However, a basic understanding is important for anyone working in the testing industry, especially those developing or selling tests.

Psychometrics is centered on the concept of validity: the evidence and documentation that the interpretations you make from test scores are actually supported.  A ton of work goes into making high-quality exams.

This serves an extremely important purpose in society.  We use tests every day to make decisions about humans, from hiring a person to helping a 5th grader learn math to providing career guidance.  By using principles of engineering and science to improve these assessments, we are making those decisions more accurate, which can have far-reaching effects.

[Figure: the test development cycle]

How can psychometrics help your organization?

Why is psychometrics so important, and how will it benefit your organization? There are two primary ways to implement better psychometrics in your organization: process improvement (typically implemented by psychometricians), and specially-designed software.

This article will describe some of the ways that your tests can be improved, but first, let me outline some of the things that psychometrics can do for you.

 

Define what should be covered by the test

Before writing any items, you need to define very specifically what will be on the test.  Psychometricians typically run a job analysis study to form a quantitative, scientific basis for the test blueprints.  A job analysis is necessary for a certification program to get accredited.
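
As a toy illustration of the quantitative side, here is a minimal sketch of one common way job-analysis ratings are turned into blueprint weights: subject-matter experts rate each task on frequency and importance, and each task’s share of the exam is proportional to the product of its mean ratings.  The tasks and numbers are made up.

```python
# Hypothetical SME ratings of job tasks on frequency (1-5) and importance (1-5)
tasks = {
    "Take patient vital signs": {"frequency": [5, 5, 4], "importance": [5, 4, 5]},
    "Schedule appointments":    {"frequency": [4, 5, 4], "importance": [3, 3, 2]},
    "Administer injections":    {"frequency": [2, 3, 2], "importance": [5, 5, 4]},
}

def mean(values):
    return sum(values) / len(values)

# Weight each task by mean frequency x mean importance, then normalize
weights = {name: mean(r["frequency"]) * mean(r["importance"]) for name, r in tasks.items()}
total = sum(weights.values())

for name, weight in weights.items():
    print(f"{name:28s} blueprint weight: {100 * weight / total:.0f}%")
```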

 

Improve development of test content

There is a corpus of scientific literature on how to develop test items that accurately measure whatever you are trying to measure.  This is not just limited to multiple-choice items, although that approach remains popular.  Psychometricians leverage their knowledge of best practices to guide the item authoring and review process in a way that the result is highly defensible test content.  Professional item banking software provides the most efficient way to develop high-quality content and publish multiple test forms, as well as store important historical information like item statistics.

 

Set defensible cutscores

Test scores are often used to classify candidates into groups, such as pass/fail (Certification/Licensure), hire/non-hire (Pre-Employment), and below-basic/basic/proficient/advanced (Education).  Psychometricians lead studies to determine the cutscores, using methodologies such as Angoff, Beuk, Contrasting-Groups, and Borderline.
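
As a simple illustration of the most common of these, the modified Angoff method, each subject-matter expert estimates the probability that a minimally competent candidate would answer each item correctly; the recommended cutscore is the sum of the mean ratings across items.  A minimal sketch with made-up numbers:

```python
# Rows = subject-matter experts, columns = items; each entry is the judged
# probability that a minimally competent candidate answers that item correctly.
ratings = [
    [0.60, 0.75, 0.90, 0.55, 0.80],   # rater 1
    [0.65, 0.70, 0.85, 0.60, 0.75],   # rater 2
    [0.55, 0.80, 0.95, 0.50, 0.85],   # rater 3
]

n_raters = len(ratings)
n_items = len(ratings[0])

# Mean rating per item, then sum across items = recommended raw cutscore
item_means = [sum(rater[i] for rater in ratings) / n_raters for i in range(n_items)]
cutscore = sum(item_means)

print(f"Recommended cutscore: {cutscore:.1f} out of {n_items}")  # 3.6 out of 5
```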

 

Statistically analyze results to improve the quality of items and scores

Psychometricians are essential for this step, as the statistical analyses can be quite complex.  Smaller testing organizations typically utilize classical test theory, which is based on simple mathematics like proportions and correlations.  Large, high-profile organizations typically use item response theory, which is based on a type of nonlinear regression analysis.  Psychometricians evaluate the overall reliability of the test, the difficulty and discrimination of each item, distractor performance, possible bias, and multidimensionality; they also handle the linking of multiple test forms and years, and much more.  Software such as Iteman and Xcalibre is also available for organizations with enough expertise to run statistical analyses internally.

 

Establish and document validity

Validity is the evidence provided to support score interpretations.  For example, we might interpret scores on a test to reflect knowledge of English, and we need to provide documentation and research supporting this.  There are several ways to provide this evidence.  A straightforward approach is to establish content-related evidence, which includes the test definition, blueprints, and item authoring/review.  In some situations, criterion-related evidence is important, which directly correlates test scores to another variable of interest.  Delivering tests in a secure manner is also essential for validity.

 

Is there a lot of Math in Psychometrics?

Absolutely.  A large portion of the work involves the statistical analysis of exam data, as mentioned above.  Classical test theory uses basic math like proportions, averages, and correlations.  An example of this is below, where we are analyzing a test question to determine if it is good.  Here, we see that the majority of the examinees get the question correct (65%) and that it has a strongly positive point-biserial, which is good, given the low sample size in this case.

[Figure: Iteman quantile plot for the example item]
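
Here is a minimal sketch of that type of classical item analysis, computing each item’s P value (proportion correct) and its corrected point-biserial correlation with the rest of the test; the response data are made up.

```python
import numpy as np

def item_analysis(responses: np.ndarray) -> list[dict]:
    """Classical item statistics for a candidates-by-items matrix of 0/1 responses."""
    total = responses.sum(axis=1)
    results = []
    for j in range(responses.shape[1]):
        item = responses[:, j]
        rest = total - item                      # total score excluding the item itself
        results.append({
            "item": j + 1,
            "p": round(float(item.mean()), 2),                        # proportion correct
            "rpbis": round(float(np.corrcoef(item, rest)[0, 1]), 2),  # corrected point-biserial
        })
    return results

# Tiny made-up data set: 6 examinees, 4 items
data = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
])
print(item_analysis(data))
```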

Item response theory analyzes many of the same things, but with far more complex mathematics by fitting nonlinear models.  However, doing so provides a number of advantages.  It is much easier to equate across forms or years, build adaptive tests, and construct forms.

[Figure: Xcalibre item response theory output]
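
For example, the three-parameter logistic (3PL) model, one of the most widely used IRT models, expresses the probability of a correct response as a nonlinear function of examinee ability (theta) and three item parameters: discrimination (a), difficulty (b), and a lower asymptote for guessing (c).  A minimal sketch:

```python
import math

def p_3pl(theta: float, a: float, b: float, c: float) -> float:
    """3PL item response function: c + (1 - c) / (1 + exp(-a * (theta - b)))."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# A moderately difficult item (b = 0.0) with good discrimination (a = 1.2)
# and a 20% guessing floor (c = 0.2), evaluated across the ability range
for theta in (-2, -1, 0, 1, 2):
    print(f"theta = {theta:+d}  P(correct) = {p_3pl(theta, a=1.2, b=0.0, c=0.2):.2f}")
```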

Here’s an article that compares CTT to IRT, if you are interested in learning more.

 

Where is Psychometrics Used?

Certification

In certification testing, psychometricians develop the test via a documented chain of evidence following a sequence of research outlined by accreditation bodies, typically: job analysis, test blueprints, item writing and review, cutscore study, and statistical analysis.  Web-based item banking software like FastTest is typically useful because the exam committee often consists of experts located across the country or even throughout the world; they can then easily log in from anywhere and collaborate.

 

Pre-Employment

In pre-employment testing, validity evidence relies primarily on establishing appropriate content (a test on PHP programming for a PHP programming job) and on correlating test scores with an important criterion like job performance ratings (showing that the test predicts good job performance).  Adaptive tests are becoming much more common in pre-employment testing because they provide several benefits, the most important of which is cutting test time by 50%, a big deal for large corporations that test a million applicants each year.  Adaptive testing is based on item response theory, and requires a specialized psychometrician as well as specially designed software like FastTest.

 

K-12 Education

Most assessments in education fall into one of two categories: lower-stakes formative assessment in classrooms, and higher-stakes summative assessments like year-end exams.  Psychometrics is essential for establishing the reliability and validity of higher-stakes exams and for equating scores across different years.  It is also important for formative assessments, which are moving towards adaptive formats because of the 50% reduction in test time, meaning that students spend less time testing and more time learning.

 

Universities

Universities typically do not give much thought to psychometrics, even though a significant amount of testing occurs in higher education, especially with the move to online learning and MOOCs.  Given that many of these exams are high stakes (consider a certificate exam after completing a year-long graduate program!), psychometricians should be involved in establishing legally defensible cutscores and in statistical analysis to ensure reliable tests, and professionally designed assessment systems should be used for developing and delivering the tests, especially where enhanced security is needed.

Assessment Systems Corporation (ASC) will be attending the 2013 Institute for Credentialing Excellence (ICE) Exchange Conference in November. The 2013 ICE Exchange Conference will be held in Amelia Island, Florida (outside Jacksonville) from November 11-14. The ICE Exchange Conference is the credentialing industry’s premier conference, providing organizations the opportunity to network, visit with vendors, and learn about industry trends.

Nathan Thompson, Vice President of ASC, recognizes this as an opportunity for both attendees and ASC. “I am eager to attend the ICE Exchange Conference in November because I am interested in learning about new developments in the Certification industry, especially in the topics of badging, test security, and psychometrics, as well as sharing how ASC’s advanced software can provide technological solutions to the issues facing credentialing organizations.” Thompson also notes that ASC’s innovative product, FastTest Web, has the ability to positively impact the industry.

Thompson adds, “We are busy integrating new certification-specific functionalities into FastTest Web, and I am very excited to demonstrate it for leaders in the industry. Additionally, our Iteman 4 Software is highly relevant for certification organizations as it is designed to produce the statistical reports required by NCCA for accreditation.”

For more information about the 2013 ICE Exchange Conference, go to http://www.credentialingexcellence.org
