I have a lot of reading to do to prepare for next week's Seminar II, Designing Learning Environments in Developmental and Adult Education, to be led by Dr. Barbara Bonham. Before starting on that, I organized my notes and tried to synthesize and summarize everything from Seminar I, Basic Skills Assessment and Placement.
Developmental education bridges the gap between the proficiencies of entering students and the college's ability to provide both access and quality. The corollary is that without a comprehensive and effective developmental education program, a college will have to lower its standards. The reasonable deduction is that the developmental education program in any college is its most important program.
Most of week one of the Kellogg Institute was an extremely comprehensive five-day lecture given by Dr. Edward A. Morante on the topic of Basic Skills Assessment and Placement. The notes, activities, and articles I have accumulated from this first week of the Institute are almost two inches in thickness. (The big Kellogg binder we were given is huge; it needs to be.) On Monday of the week, all participants were given a pretest, the scores from which were not revealed until after we took the posttest on Friday.
Dr. Morante was an excellent lecturer and kept us thinking. His overview, on Monday, was appropriate for the audience, as he reviewed the principles and components of developmental education, then went directly into testing. He covered the kinds of tests, the uses of tests, and placement testing.
Among my notes:
- Atmosphere is important in any testing situation.
- A minimum level of anxiety is optimum for success. He said to aim for balance – be serious without being a total hardass.
- What messages are given to student before they take placement tests? This is so IMPORTANT. Keep in mind that most students have absolutely no understanding of what is expected in college.
- At many colleges and universities, the assessment test plays an important role in placement. At Edison, it really is the only means by which our developmental students are placed.
- If you make the placement test optional, the students who are going to need it the most will not get it. In the State of Florida, it is mandated.
- The placement process plays a HUGE role in the success of an institution!
Interesting things to know about your institution – (things I would like to know about Edison, if they are, indeed, knowable)
What percentage of students who take the placement tests did not enroll or show up in classes?
- Took the test, but did not enroll
- Took the test, but did not show up to class at all
- Took the test, went to class, but stopped going or dropped
- A placement test needs to have items that are at the college level so that you can see the full continuum of skill levels. You cannot base a test solely on the current proficiency of the students. There must be skills tested that will be needed for proficiency – or it is not a placement test.
It is as important to understand what a test is not as it is important to understand what a test is.
The best predictor of future success is past achievement – success in high school, however, does not always indicate proficiency. Things like AP courses in high schools are smoke screens; students are misled into thinking they have skills that they often do not have.
There needs to be:
- Mandatory testing
- Mandatory placement
- Developmental instruction – on more than one level
- Support services that include counseling, advising, tutoring, and support labs
- Evaluation
A test is composed of selection items and supply items. Selection items require a person to choose an answer from a range of alternatives (think multiple-choice, true/false, or matching). In these questions, the stem presents the problem, the options are the choices from which the students select answers, and distracters are incorrect options. Distracters are important – as important as the right answers – as, upon analysis, they tell you what the student is thinking. Supply items are just that – they require students to supply answers. Supply items include essays, journals, portfolios, or presentations.
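To make the distracter idea concrete, here is a tiny Python sketch of the kind of item analysis he was describing – the item, the option letters, and the student answers are all made up for illustration:

```python
from collections import Counter

def distracter_analysis(responses):
    """Tally how often each option was chosen on one selection item.

    responses: list of option letters chosen by students.
    Returns a dict mapping option -> proportion of students choosing it.
    """
    counts = Counter(responses)
    total = len(responses)
    return {option: round(n / total, 2) for option, n in counts.items()}

# Hypothetical item answered by 20 students; "B" is the keyed (correct) answer.
answers = list("BBBBBBBBBBCCCCCCADDD")
breakdown = distracter_analysis(answers)
```

In this made-up case, 30 percent of the class chose distracter C – exactly the kind of signal that tells you what those students were thinking when they missed the item.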
Tests can be cognitive or affective. Cognitive tests include intelligence tests, aptitude tests, and achievement tests. Aptitude tests, like the SAT or the ACT, attempt to predict success. Achievement tests assess skills proficiencies and have the highest reliability and validity. Any placement test should be an achievement test. Affective tests include interest tests, personality tests, and study skills tests. These have the lowest reliability and validity; however, affective tests, used along with cognitive tests, are VERY useful in placing developmental students, and should be used.
Tests may be either norm-referenced or criterion-referenced. Norm-referenced tests are developed with standards determined by group performance and are used to establish a rank order, or norms, of students across a continuum of achievement. These essentially operate by comparing students, can be misleading, and can lead to lower standards.
Criterion-referenced tests are used to determine what students can do and what they know, not how they compare to others. These are obviously more likely to provide a more accurate depiction of skills. The PERT is a criterion-referenced test, and the standards on it are what many of us have been so attuned to.
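The difference between the two frames can be shown in a few lines of Python – the scores and the cut score below are hypothetical:

```python
def percentile_rank(score, group_scores):
    """Norm-referenced view: where a score falls relative to the group."""
    below = sum(1 for s in group_scores if s < score)
    return 100 * below / len(group_scores)

def meets_criterion(score, cut):
    """Criterion-referenced view: does the score demonstrate the skill,
    regardless of how everyone else did?"""
    return score >= cut

# Made-up group of ten scores on a placement test.
group = [40, 55, 60, 62, 70, 75, 80, 85, 90, 95]

# The same score of 62 looks weak in the norm-referenced frame...
rank = percentile_rank(62, group)   # 30th percentile in this group
# ...but the criterion-referenced frame asks only whether the skill is there.
ok = meets_criterion(62, cut=60)    # True
```

A student at the 30th percentile of a strong group may still meet the criterion; a student at the 70th percentile of a weak group may not – which is why comparing students can be misleading.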
The NAEP, the National Assessment of Educational Progress, is part of the National Center for Education Statistics site. There are volumes of intensely cool stuff here: http://nces.ed.gov/
A placement test is a basic skills achievement test that measures skills proficiency for the purpose of assisting entering college students in selecting appropriate beginning courses. The basic placement test does not tell you diagnostic information, and it does not predict anything. It measures. The college makes placement decisions based on those measurements.
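That distinction – the test measures, the college decides – can be sketched in a few lines. The cut scores and course names below are hypothetical, not Edison's actual ones:

```python
def place(score, cut_scores):
    """Map a measured score to a beginning course.

    cut_scores: list of (minimum score, course) pairs, highest minimum
    first. The test supplies the score; the institution supplies the cuts.
    """
    for minimum, course in cut_scores:
        if score >= minimum:
            return course
    return cut_scores[-1][1]  # below every cut: lowest-level course

# Hypothetical writing-placement cut scores set by a college.
cuts = [(114, "College Composition"),
        (99,  "Developmental Writing II"),
        (0,   "Developmental Writing I")]

place(120, cuts)  # -> "College Composition"
place(105, cuts)  # -> "Developmental Writing II"
```

Changing the cut scores changes placements without changing the test at all – the measurement and the decision are separate things.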
Dr. Morante went on to review the basic psychometrics, the working definitions of which we should all be familiar with. The reliability of a test is the consistency of the test or of the test score. The three most important measures of reliability are test-retest, internal consistency, and the standard error of measurement, or SEM. The reliability coefficient on any test is calculated by correlating scores, and the expected correlation should approach or exceed .90. To increase reliability on any test, add more items.

Test-retest is a dicey method by which to measure reliability, in my opinion. All kinds of inaccuracies may occur, and it is expensive. Internal consistency is assessed by an analysis of individual test questions under the assumption that all of the test questions are intended to measure the same ability. This is measured by a coefficient called the coefficient alpha. The standard error of measurement, or SEM, is the measure of the amount of error in an individual's score. This statistic mimics a standard deviation, and the goal for reliability is that the SEM be small. This is important to know – or to trust the test company (McCann, in Florida's case) to know. I cannot wait to look into McCann's data when I get back to Edison; I hope Barb Brennan has this information; maybe I'll go through my PERT notes and check the web site to see what I can find.
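Since coefficient alpha and the SEM are just formulas, here is a small Python sketch of both, using made-up right/wrong item data. Alpha is k/(k−1) times one minus the ratio of summed item variances to total-score variance; the SEM is the standard deviation of the scores times the square root of one minus the reliability:

```python
import math
import statistics as st

def cronbach_alpha(items):
    """Coefficient alpha (internal consistency).

    items: one list per test item, each holding that item's scores
    across the same set of students.
    """
    k = len(items)
    item_variances = sum(st.pvariance(item) for item in items)
    totals = [sum(student) for student in zip(*items)]  # total score per student
    return (k / (k - 1)) * (1 - item_variances / st.pvariance(totals))

def sem(scores, reliability):
    """Standard error of measurement: SD * sqrt(1 - reliability)."""
    return st.pstdev(scores) * math.sqrt(1 - reliability)

# Made-up 1/0 (right/wrong) data: four items, five students each.
items = [[1, 1, 0, 1, 0],
         [1, 1, 1, 1, 0],
         [1, 0, 0, 1, 0],
         [1, 1, 0, 1, 1]]
alpha = cronbach_alpha(items)              # about 0.75 for this toy data
totals = [sum(student) for student in zip(*items)]
error = sem(totals, alpha)                 # error band around a total score
```

The two statistics move together: as reliability rises toward 1, the SEM shrinks toward zero – which is exactly why the goal is a small SEM.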
Content validity, predictive validity, and concurrent validity were all discussed in detail, and I have notes and notes on them. Dr. Morante reviewed mean, median, and mode, though everyone was familiar with those. He discussed correlation, percent, percentiles, scaled scores, and content. Bias came up; that is a hard one, as just about anyone can find bias in anything if he or she looks hard enough. The conclusion is that an unbiased test just does not exist.
Cut scores were an important topic. In Florida, we are all too familiar with this. Computer adaptive testing was also discussed. This is based on IRT, or item response theory, and it is more than simply using a computer to test. On Thursday, all Kellogg participants had the opportunity to take the College Board's CPT and ACT's placement test, called Compass. We had all afternoon to "play" with the assessments, to see if we could trip them up, beat them, or see any flaws in them. Friday morning, we had an in-depth discussion of our findings. I have several pages of notes from that.
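As a toy illustration of the adaptive idea – not how the CPT or Compass actually work – here is a drastically simplified Rasch (1PL) sketch in Python, with a fixed-step ability update standing in for the real maximum-likelihood estimation:

```python
import math

def p_correct(ability, difficulty):
    """Rasch (1PL) model: probability that a person with this ability
    answers an item of this difficulty correctly."""
    return 1 / (1 + math.exp(-(ability - difficulty)))

def next_item(ability, pool):
    """An adaptive test picks the unused item whose difficulty is closest
    to the current ability estimate - the most informative item."""
    return min(pool, key=lambda difficulty: abs(difficulty - ability))

# Toy adaptive loop over a made-up item pool (difficulties on the IRT scale).
# The estimate steps up after a right answer and down after a wrong one;
# real CATs update with maximum-likelihood estimation, not fixed steps.
pool = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]
theta, step = 0.0, 0.5
for answered_correctly in [True, True, False]:
    item = next_item(theta, pool)
    pool.remove(item)
    theta += step if answered_correctly else -step
# theta is now the running ability estimate for this simulated student.
```

The point of the adaptive loop is that every item is pitched near the student's current estimate, where a response carries the most information – which is why a CAT can place students with far fewer items than a fixed-form test.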
A discussion of aspects of placement took up the rest of the week; appropriate placement, or a student being placed into a course according to the level of his or her proficiency, cannot be overlooked.
This seems like a long post, but it is less than five percent of what I have written in the week's class notes. I thought I'd provide an overview, as I found it all to be so interesting!
On the social side of the Kellogg experience, all 43 participants are exceedingly happy that it is a weekend. Last night, there was an Art Crawl in downtown Boone. There was music playing on King Street (Boone's main street), and wine and snacks at all of the artsy merchants' counters. We walked around as a large group of about 20 or so, split into smaller groups, and then met at the Black Cat, a dive, to drink more and to socialize. Incredibly nice people; I shall happily keep in touch with so many of them.
So ends the first week at the Kellogg Institute 2011. Someone, probably a math professor, said we are 25% finished. Wow; it is going quickly, and we are learning and doing so much. What an experience!