About Catalyst Proficiency Tests

How to Access the Tests

Four proficiency tests are currently available for English, Spanish, and German, and two proficiency tests are available for French. 


The tests will appear for learners based on settings determined by each organization’s admin. The default settings are 150, 300, 450 and 600 days after each learner takes the placement test.
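As a purely illustrative sketch of that schedule (this is not part of the product; the function name and example date below are hypothetical), the default prompt dates are simply fixed offsets from the placement-test date:

```python
from datetime import date, timedelta

# Default offsets, in days after the placement test, at which proficiency
# tests are offered. Admins can change these in Administration Tools.
DEFAULT_OFFSETS_DAYS = [150, 300, 450, 600]

def proficiency_test_dates(placement_test_date: date) -> list[date]:
    """Return the dates on which each proficiency test becomes available."""
    return [placement_test_date + timedelta(days=d) for d in DEFAULT_OFFSETS_DAYS]

# Example: a learner who took the placement test on 1 March 2024 would first
# be prompted around 29 July 2024 (150 days later).
print(proficiency_test_dates(date(2024, 3, 1)))
```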


When a learner has reached the predetermined time to take the test, they will receive this pop-up on web:

[Image: Catalyst proficiency test pop-up]

Learners will also receive a pop-up when using the program on mobile, advising them to take the proficiency tests on a computer.

What if a learner isn’t ready to take the test when it appears?

Learners are advised but not required to take the proficiency test when it appears. If a learner chooses not to take the test and closes the pop-up, it will reappear every time they log in until they complete the test.


What happens if a proficiency test is interrupted?

If a learner is midway through a proficiency test and the session is interrupted (for example, the power or internet goes out, or the browser or tab is accidentally closed), they will see the same prompt to take the proficiency test the next time they log in. When they restart the proficiency test, their previous answers are automatically preserved and they resume at the question following the one they were on when the session was interrupted.

 

Tests and Scoring

How were the Rosetta Stone English Pre-test and English proficiency tests developed?

All Rosetta Stone tests are developed by language assessment experts following strict industry standards. Before any language assessment test is given to learners, we perform a validation study to:

  • Evaluate the quality and appropriateness of each question.
  • Determine if the test is measuring the intended skills.
  • Ensure that questions are fair, unbiased and appropriately leveled.


The validation study is also used to develop a model to determine the score and CEFR level based on the learner's proficiency.  


What does the scaled score represent?

A scaled score is a conversion of the total number of correct answers (raw score) to a consistent and standardized scale. The conversion takes into account the difficulty of the questions in each test version. This ensures that test scores are comparable across different versions of tests.
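As a simplified, hypothetical illustration of that idea (the real conversion tables, score ranges, and values are internal to the tests and are not shown here), a scaled-score conversion can be thought of as a per-version lookup from raw score to scaled score, where a harder test version maps the same raw score to a higher scaled score:

```python
# Hypothetical conversion tables: each test version maps a raw score (number
# of correct answers) to a scaled score. Because "version_B" is assumed to be
# slightly harder, the same raw score maps to a higher scaled score on it.
CONVERSION_TABLES = {
    "version_A": {20: 410, 25: 450, 30: 490},
    "version_B": {20: 425, 25: 465, 30: 505},
}

def scaled_score(version: str, raw_score: int) -> int:
    """Look up the scaled score for a raw score on a given test version."""
    return CONVERSION_TABLES[version][raw_score]

# A raw score of 25 is reported as different scaled scores on the two
# versions, so the resulting scores remain comparable across versions.
print(scaled_score("version_A", 25), scaled_score("version_B", 25))
```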


What is a CEFR level?

The Common European Framework of Reference for Languages (CEFR) is an international standard for describing language ability. It is used around the world to describe learners' language skills. The CEFR defines six levels of language proficiency (from beginner to advanced: A1 and A2, B1 and B2, C1 and C2). It describes what language learners should be able to do in terms of listening, reading, spoken interaction, spoken production, and writing, using a series of ‘can do’ statements. The CEFR makes it possible to compare standards and assessments across languages, and provides a shared basis for recognizing language qualifications. 


What if a learner skips a question or guesses answers on the test?

If a learner skips a question, it is marked incorrect and counts against their score. Because every question has four answer options, the probability of answering correctly by blind guessing is 0.25, so a learner who guesses on every question will average about 25% correct. Individual results vary (some guessers will be lucky and score higher, others will score lower), but the expected outcome is 25% correct.
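To illustrate the arithmetic behind that 25% figure, here is a small standalone simulation (it is not part of the test; the numbers of questions and simulated learners are arbitrary):

```python
import random

def simulate_blind_guessing(num_questions: int = 40, num_learners: int = 10_000) -> float:
    """Average proportion correct when every answer is a blind guess among
    four options, only one of which is correct."""
    total_correct = 0
    for _ in range(num_learners):
        total_correct += sum(random.randrange(4) == 0 for _ in range(num_questions))
    return total_correct / (num_learners * num_questions)

# Prints a value very close to 0.25 (25% correct on average).
print(simulate_blind_guessing())
```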

Additionally, test items are generally presented from easiest to hardest, so a learner who guesses only on the later, more difficult items may score higher than expected. Some learners may also cheat, for example by using Google Translate or looking up concepts (such as verb tense) to make more educated guesses. We discourage cheating because it leads learners to content above their level, which makes them uncomfortable. It is therefore in your learners' best interest to answer honestly and to skip the questions they aren't sure about.


Why can’t a learner go back to previous questions on the test?

To reduce cheating, the test does not allow learners to move back and forth between test items. Some test items may also contain content that hints at the answers to other items. In addition, the test limits the number of times audio can be played (one play per audio clip); the content was evaluated under these conditions and the psychometrics support this test behavior. If learners could move back and forth between items, the audio could be played more than once, the item difficulties would need to be re-evaluated, and new scoring algorithms would need to be developed.


What can I do if a learner's scaled scores are stagnant or declining?

Encourage learners to spend time in the product on a weekly basis to avoid knowledge loss. If a learner is prompted to take a test but has not spent much time in the product, they should skip the test and take it at a later time.

Several studies highlight the importance of having sufficient hours of instruction between assessments. For example, the National Reporting System for Adult Education (NRS) recommends 30-120 hours of study between pretesting and posttesting.


Why has a learner’s scaled score increased but not their CEFR level?

There is a large range of points within each CEFR level, so it is possible (and quite common) for learners to increase their scaled scores while remaining at the same CEFR level. As learners progress in proficiency, their scores will increase and they may advance to a higher CEFR level.



Test Intervals

How often are learners tested?

When learners start the Catalyst program, they take a questionnaire that allows them to set their language learning goals. The questionnaire is followed by the Pre-test, which is used both to establish a baseline proficiency and to determine product placement for each learner.


The second test learners take is a proficiency test to assess their growth. They are prompted to take this test 150 calendar days after completing the questionnaire.


300 days after completing the questionnaire, learners will be prompted to take a different version of the proficiency test to assess their proficiency again.


Why did Rosetta Stone select 150-day intervals for the tests?

We chose 150 days based on historical data evaluating the amount of learning time spent in our products. We found that learners spend more time learning languages in their first 90 days, and we allow some extra time before each test to ensure learners have had enough time in the product to show growth.


Can test intervals be adjusted?

Yes, administrators can adjust the test intervals in the Administration Tools product under Settings. Adjusted testing intervals will only go into effect for new learners. The minimum test interval possible is 30 days.



Learner Growth Report

What is the purpose of the Learner Growth Report?

The Learner Growth Report allows you to compare the results of a learner's Pre-test (a test that measures the learner's baseline proficiency) with the results of their proficiency tests (tests that measure proficiency, typically taken at 150-calendar-day intervals, unless otherwise specified in Administration Tools).


Both tests use the same scoring methodology and branching, and both report a learner's CEFR level and scaled score so that you can easily track your learners' growth.


What factors affect a learner’s performance or growth while learning a language?

Several factors can affect learner growth. If learners do not spend enough time in the product, they will not show substantial progress; they need to use the product on a regular basis to maintain learning growth. Another factor that can affect test results is a learner's focus and physical or mental state while taking tests. We recommend a quiet setting free from distractions for the best results.


Is learner growth affected by a learner’s baseline proficiency?

It is normal and expected for beginners to show faster progress than learners with an intermediate or advanced baseline proficiency, even when they spend the same amount of time in the product learning a language. This is because beginners have more room to grow.


Why don't I see all learners in my organization/group in the Learner Growth Report?

In order to be included on the Learner Growth Report, a learner must complete the Pre-test, a proficiency test, or both tests. 


If an administrator chooses to use the override feature in Administration Tools and places a learner directly into a specific product or level, that learner will not take the Pre-test and therefore will not have a baseline score. These learners will still be presented with proficiency tests at the default times (150-day intervals, unless otherwise specified in Administration Tools). 


If a learner chooses not to take the proficiency tests, they will be reminded each time they log in to the product. Encourage all of your learners to take these tests so that you can track their improvement.
 
