Reply to "PARCC monitoring student's social media, wants schools to "punish" them"
[quote=Anonymous][quote=Anonymous][quote]There's a difference between releasing test feedback and releasing test questions. The questions come in discrete, distinct categories which map to specific standards.[/quote] Yes, and, believe it or not, some of those questions in "discrete, distinct categories which map to specific standards" may be poorly written and part of the problem. That is why tests need to be carefully piloted for validity and reliability before they are used for their intended purpose. I have not seen the data on the pilot programs--or whether there were pilot programs. I worked in adult training, and the piloting program for tests was extensive. If nearly everyone misses a question, it has to be considered that the question may be poorly written. The people in charge need to go back and find out why so many missed it; it may not be a matter of poor instruction. Did PARCC do this with their tests? Of course, since the standards did not go through a vetting process, we do not know whether the problem is an inappropriate standard. Were the questions tested for reliability? And with all the money spent on PARCC, they cannot afford to write new test questions? Something is wrong with that picture, too. They should certainly be able to have multiple versions of the tests. [/quote] Yes, they did piloting. For example, here is just one set of things they did:

[quote]Spring 2013 Item Development Research

PARCC's Item Development Research includes three studies. Nearly 2,500 students from six PARCC states (Florida, Georgia, Maryland, New Jersey, New Mexico, and New York) participated in the studies. A summary of research results will be available in Summer 2013.
Rubric Choice Study
The purpose of this study is to empirically compare the functioning of two rubrics that could be used to score Prose Constructed Response tasks: a condensed rubric and an expanded rubric.

Student-Task Interaction Studies
Part I: The purpose of this study is to investigate students' interaction with the assessment tasks and instructions, such as whether students perform on the tasks as intended given the instructions. PARCC will use the information collected to help inform its ongoing item development.
Part II: The purpose of this study is to further investigate students' interaction with assessment tasks and instructions through face-to-face cognitive labs, to inform the iterative test development process. The cognitive interviews will focus especially on students' interactions with the various functionalities and tools available (e.g., drag and drop, hot spot).

Accessibility Studies
There are three accessibility studies conducted under this phase of research: accessibility for English Learners, accessibility for Students with Disabilities, and accessibility of the student response mode for Grade 3 (e.g., computer-based and paper-based responses). The purpose of these studies is to investigate potential issues with the items and tasks specific to accessibility and accommodations. In particular: how technology-enhanced items and tasks function for English Learners; how technology-enhanced items and tasks function for Students with Disabilities; and the accessibility for grade 3 students responding on the computer.

Summer 2013 Item Tryout Studies
PARCC's Summer Item Tryout includes four studies to be conducted throughout June, July, and August 2013 across five PARCC states (Arkansas, Colorado, Maryland, Massachusetts, New Jersey) as well as the District of Columbia. A summary of results from these studies will be available in early Fall 2013.
Quality of Type II and III Tasks in Mathematics Study
The goal of this study is to examine PARCC's Type II and Type III tasks, which test reasoning and modeling skills. The study will consist of one-to-one interviews with students using cognitive lab protocols: students will be asked to "think aloud" about their reasoning and/or modeling as they solve the Type II and Type III items. The cognitive labs will be conducted during the last three weeks of July 2013, in Maryland and New Jersey, with 10 students per item.

Use of Narrative Writing Prompts in Assessing Reading Comprehension Study
The purpose of this study is to investigate whether the Narrative Writing Prompts on the PARCC English Language Arts/Literacy Assessments yield enough information to score for reading in addition to writing. The data for the study will be collected through computer-based administration from mid-June to mid-July 2013. Approximately 3,000 students from the District of Columbia, Massachusetts, and New Jersey participated in this study.

Use of Evidence-Based Selected Response Items in Assessing Reading Comprehension Study
The purpose is to investigate whether Evidence-Based Selected Response items on the PARCC English Language Arts/Literacy Assessments can be scored with partial-credit scoring models developed by PARCC. The data for this study were collected concurrently with the Narrative Writing Prompts study in the District of Columbia, Massachusetts, and New Jersey.

Tablet Cognitive Lab Study
This study will focus on a range of item interactions and how they function on 10-inch tablets. The study is the first stage of a longer effort to establish a comparability and fairness strategy for students taking the PARCC assessments on paper, desktops/laptops, and tablets. Approximately 72 students from Arkansas and Colorado will participate in this study.
[/quote] You claim to have done adult education and test piloting. If so, you should know that the typical practice is to develop rotating item banks with multiple questions for each concept, and that it is also standard to run item results through psychometric analysis. That analysis can tell you a lot: how reliable a question is, whether the results indicate it is too ambiguous, whether the distractors are functioning or there appears to be more than one correct answer, and even whether there was cheating on the test. PARCC has technical advisory, research, and psychometric committees that include people with deep expertise in those areas, including folks who worked on developing professional licensure exams for adults, the GRE, the TOEFL, and many other national exams. They did a lot more work than you seem aware of, or are willing to give them credit for. http://www.parcconline.org/technical-advisory-committee[/quote]
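For readers unfamiliar with the psychometric analysis the poster describes, the two most basic item statistics are item difficulty (the proportion of test-takers who answer correctly) and item discrimination (the point-biserial correlation between an item score and the rest-of-test score: a low or negative value flags an item that strong students miss, often a sign of ambiguous wording). A minimal sketch in plain Python, using hypothetical data rather than anything from PARCC:

```python
# Classical item analysis: difficulty and point-biserial discrimination.
# The response data below are hypothetical; thresholds like 0.2 are a
# common rule of thumb, not a PARCC standard.
import math

def point_biserial(item, rest):
    """Pearson correlation between 0/1 item scores and rest-of-test totals."""
    n = len(item)
    mean_i = sum(item) / n
    mean_r = sum(rest) / n
    cov = sum((a - mean_i) * (b - mean_r) for a, b in zip(item, rest)) / n
    sd_i = math.sqrt(sum((a - mean_i) ** 2 for a in item) / n)
    sd_r = math.sqrt(sum((b - mean_r) ** 2 for b in rest) / n)
    if sd_i == 0 or sd_r == 0:
        return 0.0  # item or rest-score has no variance; correlation undefined
    return cov / (sd_i * sd_r)

def item_analysis(responses):
    """responses: one list of 0/1 item scores per student.
    Returns (difficulty, discrimination) per item."""
    n_items = len(responses[0])
    stats = []
    for i in range(n_items):
        item = [r[i] for r in responses]
        rest = [sum(r) - r[i] for r in responses]  # total excluding this item
        difficulty = sum(item) / len(item)         # proportion correct
        stats.append((difficulty, point_biserial(item, rest)))
    return stats

# Hypothetical pilot data: 6 students x 4 items.
data = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
]

for i, (p, rpb) in enumerate(item_analysis(data)):
    flag = "  <- review wording?" if rpb < 0.2 else ""
    print(f"item {i}: difficulty={p:.2f} discrimination={rpb:+.2f}{flag}")
```

Real testing programs use the same idea at scale (plus distractor analysis and item response theory models), which is how a pilot can distinguish "students weren't taught this" from "the question is badly written."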