About Catalyst Proficiency Tests
Four proficiency tests are currently available for English, Spanish, and German, and two proficiency tests are available for French.
Learners using the program on mobile will also see a pop-up advising them to take the proficiency tests on a computer.
Learners are advised but not required to take the proficiency test when it appears. If a learner chooses not to take the test and closes the pop-up, it will reappear every time they log in until they complete the test.
If a learner is mid-way through a proficiency test and the session is interrupted (for example, a power or internet outage, or accidentally closing the browser or tab), they will see the same prompt to take the proficiency test the next time they log in. When they click to restart the test, it automatically preserves the answers they have already given and resumes at the question following the one they were on when the session was interrupted.
All Rosetta Stone tests are developed by language assessment experts following strict industry standards. Before any language assessment test is given to learners, we perform a validation study to:
A scaled score is a conversion of the total number of correct answers (raw score) to a consistent and standardized scale. The conversion takes into account the difficulty of the questions in each test version. This ensures that test scores are comparable across different versions of tests.
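To make the idea concrete, here is a minimal sketch of how a raw-to-scaled conversion works. The version names and mapping values below are hypothetical, invented purely for illustration; the actual Rosetta Stone conversion tables are not published here.

```python
# Illustrative sketch only: the version names and conversion values below
# are hypothetical, not the actual Rosetta Stone tables.

# Each test version maps the same raw score to a slightly different scaled
# score, compensating for that version's difficulty.
SCALE_TABLES = {
    "version_A": {raw: 100 + raw * 4 for raw in range(51)},           # easier version
    "version_B": {raw: 100 + round(raw * 4.4) for raw in range(51)},  # harder version
}

def scaled_score(version: str, raw_score: int) -> int:
    """Convert a raw score (number of correct answers) to the common scale."""
    return SCALE_TABLES[version][raw_score]

# The same raw score earns a higher scaled score on the harder version,
# so scaled scores stay comparable across versions.
print(scaled_score("version_A", 40))  # 260
print(scaled_score("version_B", 40))  # 276
```

The key point is that the scaled score, not the raw count of correct answers, is what can be compared across different versions of a test.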
The Common European Framework of Reference for Languages (CEFR) is an international standard for describing language ability. It is used around the world to describe learners' language skills. The CEFR defines six levels of language proficiency (from beginner to advanced: A1 and A2, B1 and B2, C1 and C2). It describes what language learners should be able to do in terms of listening, reading, spoken interaction, spoken production, and writing, using a series of ‘can do’ statements. The CEFR makes it possible to compare standards and assessments across languages, and provides a shared basis for recognizing language qualifications.
If a learner skips a question, it is counted as incorrect and lowers their score. Because every question has four answer options, the probability of answering correctly by blind guessing is 0.25, so guessing on every question averages out to 25% correct. An individual learner may be lucky or unlucky on any given test, but across many questions the expected result is 25% correct.
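The 25% figure above can be checked with a quick simulation. The question count and number of trials below are arbitrary choices for illustration, not properties of the actual tests.

```python
import random

# Simulate blind guessing on a multiple-choice test with 4 options per
# question. The question count and trial count are arbitrary.
NUM_OPTIONS = 4
NUM_QUESTIONS = 40
TRIALS = 100_000

random.seed(0)
total_correct = 0
for _ in range(TRIALS):
    # Each guess is correct with probability 1/NUM_OPTIONS.
    total_correct += sum(
        random.randrange(NUM_OPTIONS) == 0 for _ in range(NUM_QUESTIONS)
    )

average = total_correct / (TRIALS * NUM_QUESTIONS)
print(f"average fraction correct: {average:.3f}")  # close to 0.25
```

Individual simulated tests vary widely, but the average over many trials converges to 1/4, matching the expected value.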
To reduce cheating, the test does not allow learners to move back and forth between test items, since some items may contain content that hints at the answers to other items. The test also limits each audio clip to a single play. The content has been calibrated under these conditions, and the psychometric analysis supports this test behavior: if learners could move back and forth between items, audio could be played more than once, the item difficulties would need to be re-evaluated, and new scoring algorithms would need to be developed.
Encourage learners to spend time in the product on a weekly basis to avoid knowledge loss. If a learner is prompted to take a test but does not have enough time to complete it, they should skip the test and take it later.
Several studies highlight the importance of having sufficient hours of instruction between assessments. For example, the National Reporting System for Adult Education (NRS) recommends 30-120 hours of study between pretesting and posttesting.
There is a wide range of points within each CEFR level, so it is possible (and quite common) for learners to increase their scaled scores while remaining at the same CEFR level. As learners' proficiency grows, their scores will increase and they may advance to a higher CEFR level.
When learners start the Catalyst program, they will take a questionnaire which allows them to set their language learning goals. The questionnaire is followed by the Pre-test, used to both establish a baseline proficiency and to decide product placement for each learner.
We chose 150 days based on historical data on the amount of learning time spent in our products. We found that learners spend the most time learning languages in their first 90 days, and we allow some extra time before each test to ensure learners have had enough time in the product to show growth.
Yes, administrators can adjust the test intervals in the Administration Tools product under Settings. Adjusted testing intervals will only go into effect for new learners. The minimum test interval possible is 30 days.
The Learner Growth Report allows you to compare the results of a learner's Pre-test (a test that measures the learner's baseline proficiency) to the results of their proficiency tests (tests that measure proficiency, typically after 150 calendar days, unless otherwise specified in Administration Tools).
Several factors can affect learner growth. A learner who does not spend enough time in the product will not show substantial progress; learners need to use the product regularly to maintain learning growth. A learner's focus and physical and mental state while taking tests can also affect results. We recommend a quiet setting free from distractions for the best results.
It is normal and expected for beginners to show faster progress than learners with an intermediate or advanced baseline proficiency, even when they spend the same amount of time in the product learning a language. This is because beginners have more room to grow.
In order to be included on the Learner Growth Report, a learner must complete the Pre-test, a proficiency test, or both.