
Wednesday, October 11, 2023

Calculating CGPA and Assigning Letter Grades | Educational Assessment and Evaluation

 

QUESTION

Discuss the methods of calculating CGPA and assigning letter grades.
Course: Educational Assessment and Evaluation

Course code 8602

Level: B.Ed Solved Assignment 

ANSWER  

Calculating CGPA and Assigning Letter Grades

CGPA stands for Cumulative Grade Point Average. It expresses a student's performance across all subjects/courses as a single composite figure. To calculate CGPA, we need the following information:

• Marks in each subject/course

• Grade point average in each subject/course

• Total credit hours (the sum of the credit hours of all subjects/courses)

Calculating CGPA is simple: the total grade points earned (the grade point in each course multiplied by that course's credit hours) are divided by the total credit hours. For example, if a student in an MA Education program has studied 12 courses of 3 credit hours each, the total credit hours will be 36. If the student earned a grade point of 3.0 in every course, the total grade points will be 3.0 × 3 × 12 = 108, and the CGPA will be 108/36 = 3.0.
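As a rough sketch in Python, the calculation looks like this (the course list and grade points are illustrative assumptions, not taken from the course materials):

```python
# A minimal sketch of the CGPA calculation described above.

def cgpa(courses):
    """courses: list of (grade_point, credit_hours) tuples."""
    total_points = sum(gp * ch for gp, ch in courses)   # weighted grade points
    total_credits = sum(ch for _, ch in courses)        # total credit hours
    return total_points / total_credits

# 12 courses of 3 credit hours each, with a grade point of 3.0 in every course
courses = [(3.0, 3)] * 12
print(cgpa(courses))  # 108 grade points / 36 credit hours = 3.0
```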

Assigning letter grades

The letter grade system is the most widely used grading system in the world, including in Pakistan. Most teachers face problems while assigning grades. There are four core problems or issues in this regard:

1) What should be included in a letter grade?

2) How should achievement data be combined in assigning letter grades?

3) What frame of reference should be used in grading?

4) How should the distribution of letter grades be determined?

Determining what to include in a grade

Letter grades are likely to be most meaningful and useful when they represent achievement only. If they are combined with other factors such as effort, work completed, personal conduct, and so on, their interpretation becomes hopelessly confused. For example, a letter grade of C may represent average achievement with extraordinary effort and excellent conduct and behavior, or vice versa. If letter grades are to be valid indicators of achievement, they must be based on valid measures of achievement. This involves defining objectives as intended learning outcomes and developing or selecting tests and assessments that can measure those learning outcomes.

Combining data in assigning grades

One of the key concerns while assigning grades is to be clear about what aspects of a student are to be assessed, and what tentative weightage each component will carry. For example, suppose we decide that 35 percent weightage is to be given to mid-term assessments, 40 percent to final-term tests or assessments, and 25 percent to assignments, presentations, classroom participation, and conduct and behavior. We then combine all elements by assigning the appropriate weight to each, and use the resulting composite scores as the basis for grading.
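A minimal sketch of this weighted combination in Python (the weights follow the example above; the student's component scores are illustrative assumptions):

```python
# Combining weighted assessment components into a composite score.

WEIGHTS = {"mid_term": 0.35, "final_term": 0.40, "coursework": 0.25}

def composite_score(scores):
    """scores: dict mapping each component to a percentage (0-100)."""
    return sum(WEIGHTS[part] * scores[part] for part in WEIGHTS)

student = {"mid_term": 70, "final_term": 80, "coursework": 90}
print(composite_score(student))  # 0.35*70 + 0.40*80 + 0.25*90 = 79.0
```

The composite score can then be mapped onto letter grades using whichever frame of reference the school has adopted.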

Selecting the proper frame of reference for grading

 Letter grades are typically assigned based on one of the following frames of reference.

a) Performance relative to other group members (relative grading)

b) Performance measured against specified standards (absolute grading)

c) Performance in relation to learning ability (amount of improvement)

 

Assigning grades on a relative basis involves comparing a student’s performance with that of a reference group, mostly class fellows. In this system, the grade is determined by the student’s relative position or ranking in the total group. Although relative grading has the disadvantage of a shifting frame of reference (i.e. grades depend upon the group’s ability), it is still widely used in schools, as most of the time our system of testing is ‘norm-referenced’.
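To make the shifting frame of reference concrete, here is a minimal sketch of relative grading in Python; the percentile cut-offs and the score list are illustrative assumptions, not a prescribed scheme:

```python
# Relative (norm-referenced) grading: the grade depends on a student's
# percentile rank within the group, not on fixed cut-off marks.

def percentile_rank(score, all_scores):
    return sum(s <= score for s in all_scores) / len(all_scores)

def relative_grade(score, all_scores):
    p = percentile_rank(score, all_scores)
    if p >= 0.90: return "A"
    if p >= 0.65: return "B"
    if p >= 0.35: return "C"
    if p >= 0.10: return "D"
    return "F"

scores = [55, 64, 76, 88, 91]
print({s: relative_grade(s, scores) for s in scores})
# {55: 'D', 64: 'C', 76: 'C', 88: 'B', 91: 'A'} -- the same marks in a
# stronger group would earn lower grades: the frame of reference shifts.
```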

 

Assigning grades on an absolute basis involves comparing a student's performance to specified standards set by the teacher. This is what we call 'criterion-referenced' testing. If all students show a low level of mastery relative to the established performance standard, all will receive low grades. Grading student performance in relation to learning ability is inconsistent with a standards-based system of evaluating and reporting student performance, and improvement over a short period is difficult to measure. The resulting lack of reliability in judging achievement in relation to ability, and in judging degree of improvement, produces grades of low dependability. Therefore such grades are used only as a supplement to other grading systems.

 

Determining the distribution of grades

 

The assigning of relative grades is essentially a matter of ranking the students in order of overall achievement and assigning letter grades based on each student's rank in the group. This ranking might be limited to a single classroom group or might be based on the combined distribution of several classroom groups taking the same course. If grading on the curve is to be done, the most sensible approach to determining the distribution of letter grades in a school is to have the school staff set general guidelines for introductory and advanced courses.

All staff members must understand the basis for assigning grades, and this basis must be clearly communicated to users of the grades. If the objectives of a course are clearly stated and the standards for mastery appropriately set, the letter grades in an absolute system may be defined as the degree to which the objectives have been attained, as follows:

A = Outstanding (90-100%)

B = Very Good (80-89%)

C = Satisfactory (70-79%)

D = Very Weak (60-69%)

F = Unsatisfactory (less than 60%)
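A minimal sketch of this absolute mapping in Python, with the band boundaries taken from the scale above:

```python
# Absolute (criterion-referenced) grading: fixed percentage bands.

def letter_grade(percentage):
    if percentage >= 90: return "A"   # Outstanding
    if percentage >= 80: return "B"   # Very Good
    if percentage >= 70: return "C"   # Satisfactory
    if percentage >= 60: return "D"   # Very Weak
    return "F"                        # Unsatisfactory

print(letter_grade(79.0))  # C -- depends only on the standard, not the group
```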




Tuesday, October 10, 2023

Interpreting Test Scores by Ordering and Ranking | Educational Assessment and Evaluation

QUESTION

Write a note on how to interpret test scores by ordering and ranking.

Course: Educational Assessment and Evaluation

Course code 8602

Level: B.Ed Solved Assignment 

ANSWER  

Interpreting Test Scores by Ordering and Ranking

Organizing and reporting students' scores starts with placing the scores in ascending or descending order. From ranked scores, teachers can find the smallest score, the largest score, the range, and other facts such as the variability of the scores. A teacher may use ranked scores to see the relative position of each student within the class, but ranked scores by themselves do not yield any significant numerical value for interpreting or reporting results.
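A minimal sketch of ordering and ranking in Python (the score list is an illustrative assumption):

```python
# Ordering a set of test scores and reading off simple facts from the ranking.

scores = [72, 85, 60, 91, 78, 85, 66]

ranked = sorted(scores, reverse=True)   # descending order
print(ranked)                           # [91, 85, 85, 78, 72, 66, 60]
print(min(scores), max(scores))         # smallest and largest: 60 91
print(max(scores) - min(scores))        # range: 31

# Relative position of each student (rank 1 = highest score);
# tied scores receive distinct ranks in this simple scheme.
ranks = {rank + 1: s for rank, s in enumerate(ranked)}
print(ranks)
```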

Measurement Scales

Measurement is the assignment of numbers to objects or events in a systematic fashion. Measurement scales matter because they determine the types of statistics you can use to analyze your data. An easy way to have a paper rejected is to have used either an incorrect scale/statistic combination or a low-powered statistic on a high-powered set of data. The following four levels of measurement are commonly distinguished so that the proper analysis can be applied to the data. At the lowest level, a number can be used merely to label or categorize a response.

Nominal Scale.

Nominal scales are the lowest level of measurement. A nominal scale, as the name implies, simply places data into categories, without any order or structure; for example, the blood groups of classmates may be categorized into A, B, AB, O, etc. With nominal data you can only examine whether a datum is equal to some particular value or count the number of occurrences of each value; counting is the only mathematical operation we can perform. Variables assessed on a nominal scale are called categorical variables, and nominal scales merely assign labels to distinguish the categories. Gender is a nominal scale variable: classifying people according to gender is a common application of a nominal scale.

Nominal Data

 • classification or categorization of data, e.g. male or female

• no ordering, e.g. it makes no sense to state that male is greater than female (M > F), etc

• arbitrary labels, e.g., pass=1 and fail=2, etc

Ordinal Scale.

Something measured on an "ordinal" scale does have an evaluative connotation. You are also allowed to examine whether an ordinal scale datum is less than or greater than another value, so you can 'rank' ordinal data, but you cannot 'quantify' the differences between two ordinal values. For example, job satisfaction may be rated on a scale from 1 to 10, with 10 representing complete satisfaction. With ordinal scales we only know that 2 is better than 1 or 10 is better than 9; we do not know by how much, and the gap may vary. An ordinal scale also retains all the properties of a nominal scale.

Ordinal Data

• ordered, but differences between values are not meaningful; the differences may or may not be equal

 • e.g., political parties on left to right spectrum given labels 0, 1, 2

• e.g., Likert scales, rank on a scale of 1..5 your degree of satisfaction

 • e.g., restaurant ratings

 

Interval Scale

When the differences between values are quantifiable, an ordinal scale becomes an interval scale. You are allowed to quantify the difference between two interval scale values, but there is no natural zero. A variable measured on an interval scale conveys more information than an ordinal scale does, because interval variables have an equal distance between each value: the distance between 1 and 2 is equal to the distance between 9 and 10. For example, temperature scales are interval data: 25°C is warmer than 20°C, and a 5°C difference has some physical meaning. Note that 0°C is arbitrary, so it does not make sense to say that 20°C is twice as hot as 10°C, but there is exactly the same difference between 100°C and 90°C as there is between 42°C and 32°C. Students' achievement scores are measured on an interval scale.

Interval Data

• ordered, constant scale, but no natural zero

• differences make sense, but ratios do not (e.g., 30°−20° = 20°−10°, but 20° is not twice as hot as 10°)

 • e.g., temperature (C, F), dates

Ratio Scale

Something measured on a ratio scale has the same properties as an interval scale except that, with ratio scaling, there is an absolute zero point. Temperature measured in kelvin is an example: there is no value possible below 0 kelvin; it is an absolute zero. Physical measurements of height, weight, and length are typically ratio variables. Weight is another example: 0 lbs is a meaningful absence of weight. Because there is a natural zero, ratios are meaningful, and they hold regardless of the units in which the object is measured (e.g., meters or yards).

Ratio Data

• ordered, constant scale, natural zero

• e.g., height, weight, age, length

One can think of nominal, ordinal, interval, and ratio as being ranked in their relation to one another. Ratio is more sophisticated than interval, interval is more sophisticated than ordinal, and ordinal is more sophisticated than nominal.
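A minimal sketch of which operations are meaningful at each level of measurement (all data values are illustrative assumptions):

```python
# Nominal: count; ordinal: rank; interval: difference; ratio: ratio.
from collections import Counter

blood_groups = ["A", "B", "AB", "O", "A"]   # nominal
satisfaction = [3, 1, 5, 4]                 # ordinal (1-5 ratings)
temps_c      = [20, 25, 30]                 # interval (Celsius)
weights_kg   = [50, 100]                    # ratio

print(Counter(blood_groups))          # nominal: frequencies are valid
print(sorted(satisfaction))           # ordinal: ordering is valid
print(temps_c[1] - temps_c[0])        # interval: a 5-degree difference is valid
print(weights_kg[1] / weights_kg[0])  # ratio: 100 kg is genuinely twice 50 kg
```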




Wednesday, October 4, 2023

Content Validity | Construct Validity | Educational Assessment and Evaluation

 

QUESTION

Write a note on content validity and construct validity.

Course: Educational Assessment and Evaluation

Course code 8602

Level: B.Ed Solved Assignment 

ANSWER

 

Content Validity

Evidence of content validity comes from a judgmental process, which may be formal or informal. The formal process follows a systematic procedure to arrive at a judgment; its important components are the identification of behavioural objectives and the construction of a table of specifications. Content validity evidence involves the degree to which the content of the test matches a content domain associated with the construct. For example, a test of the ability to add two numbers should include a range of combinations of digits.

 A test with only one-digit numbers, or only even numbers, would not have good coverage of the content domain. Content-related evidence typically involves Subject Matter Experts (SMEs) evaluating test items against the test specifications. It is a non-statistical type of validity that involves “the systematic examination of the test content to determine whether it covers a representative sample of the behaviour domain to be measured” (Anastasi & Urbina, 1997). For example, does an IQ questionnaire have items covering all areas of intelligence discussed in the scientific literature? A test has content validity built into it by careful selection of which items to include (Anastasi & Urbina, 1997). Items are chosen so that they comply with the test specification which is drawn up through a thorough examination of the subject domain. Foxcraft et al. (2004, p. 49) note that by using a panel of experts to review the test specifications and the selection of items the content validity of a test can be improved. 

The experts will be able to review the items and comment on whether the items cover a representative sample of the behaviour domain.

For example:

In developing a teaching competency test, experts in the field of teacher training would identify the information and issues required to be an effective teacher, and then choose (or rate) items that represent the areas of information and skills a teacher is expected to exhibit in the classroom.

Lawshe (1975) proposed that, to assess content validity, each rater should respond to the following question for each item: Is the skill or knowledge measured by this item:

• Essential

• Useful but not essential

• Not necessary

Concerning educational achievement tests, a test is considered content valid when the proportion of the material covered in the test approximates the proportion of material covered in the course.
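Lawshe's content validity ratio (CVR) turns these per-item ratings into an index; a minimal sketch in Python (the panel counts are illustrative assumptions):

```python
# Lawshe's content validity ratio for one item:
#   CVR = (n_e - N/2) / (N/2)
# where n_e is the number of raters marking the item "essential"
# and N is the total number of raters.

def cvr(n_essential, n_raters):
    half = n_raters / 2
    return (n_essential - half) / half

# Illustrative panel: 8 of 10 experts rate the item "essential"
print(cvr(8, 10))  # 0.6; CVR runs from -1 (no one) to +1 (everyone)
```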

 

Construct Validity

Before defining construct validity, it seems necessary to elaborate on the concept of a construct. A construct is the concept or characteristic that a test is designed to measure; it provides the target that a particular assessment or set of assessments is designed to measure and is a separate entity from the test itself. According to Howell (1992), construct validity is a test's ability to measure factors which are relevant to the field of study. Construct validity is thus an assessment of the quality of an instrument or experimental design; it asks, 'Does it measure the construct it is supposed to measure?' Construct validity is rarely applied in achievement tests.

Construct validity refers to the extent to which operationalizations of a construct (e.g. practical tests developed from a theory) do actually measure what the theory says they do. For example, to what extent is an IQ questionnaire actually measuring "intelligence"? Construct validity evidence involves the empirical and theoretical support for the interpretation of the construct. Such lines of evidence include statistical analyses of the internal structure of the test including the relationships between responses to different test items. They also include relationships between the test and measures of other constructs. As currently understood, construct validity is not distinct from the support for the substantive theory of the construct that the test is designed to measure. As such, experiments designed to reveal aspects of the causal role of the construct also contribute to construct validity evidence. Construct validity occurs when the theoretical constructs of cause and effect accurately represent the real-world situations they are intended to model. This is related to how well the experiment is operationalized. A good experiment turns the theory (constructs) into actual things you can measure. Sometimes just finding out more about the construct (which itself must be valid) can be helpful.

Construct validity addresses the construct that is mapped into the test items; it is assured either by the judgmental method or by developing the test specification before the test is developed. Constructs have some essential properties, two of which are listed below:

1. They are abstract summaries of some regularity in nature.

2. They are related to concrete, observable entities.

For example, integrity is a construct; it cannot be directly observed, yet it is useful for understanding, describing, and predicting human behaviour.




Monday, October 2, 2023

Extended Response Essay Type Items | Educational Assessment and Evaluation

 QUESTION

Write a detailed note on extended response essay-type items

Course: Educational Assessment and Evaluation

Course code 8602

Level: B.Ed Solved Assignment 

ANSWER

Extended Response Essay Type Items

 An essay-type item that allows the student to determine the length and complexity of the response is called an extended-response essay item. This type of essay is most useful at the synthesis or evaluation levels of the cognitive domain. We are interested in determining whether students can organize, integrate, express, and evaluate information, ideas, or pieces of knowledge when the extended response items are used.

Example:

Identify as many different ways to generate electricity in Pakistan as you can. Give the advantages and disadvantages of each. Your response will be graded on its accuracy, comprehensiveness, and practicality. Your response should be 8-10 pages in length and will be evaluated according to the rubric (scoring criteria) already provided.

Overall, essay-type items (both restricted response and extended response) are:

Good for:

• Application, synthesis, and evaluation levels

Types:

• Extended response: synthesis and evaluation levels; a lot of freedom in answers

• Restricted response: more consistent scoring; outlines the parameters of responses

 Advantages:

• Students less likely to guess

 • Easy to construct

 • Stimulates more study

• Allows students to demonstrate an ability to organize knowledge, express opinions, and show originality

Disadvantages:

• Can limit the amount of material tested, thereby decreasing validity

 • Subjective, potentially unreliable scoring.

• Time-consuming to score.

Tips for Writing Good Essay Items:

• Provide reasonable time limits for thinking and writing.

• Avoid offering a choice of questions (you won't get a good picture of the breadth of student achievement when each student answers only some of the questions)

• Give definitive tasks to compare, analyze, evaluate, etc.

• Use a checklist point system to score with a model answer: write an outline, determine how many points to assign to each part

• Score all answers to one question before moving on to the next question




Wednesday, September 27, 2023

Why Intelligence Tests Are Used | Advantages of Intelligence Tests | Disadvantages of Intelligence Tests

 QUESTION  

Why are intelligence tests used? Also, write the advantages and disadvantages of intelligence tests.

Course: Educational Assessment and Evaluation

Course code 8602

Level: B.Ed Solved Assignment 

ANSWER 

Intelligence Tests

 Intelligence involves the ability to think, solve problems, analyze situations, and understand social values, customs, and norms. Two main forms of intelligence are involved in most intelligence assessments:

 • Verbal Intelligence is the ability to comprehend and solve language-based problems; and

• Nonverbal Intelligence is the ability to understand and solve visual and spatial problems.

 

Intelligence is sometimes referred to as intelligence quotient (IQ), cognitive functioning, intellectual ability, aptitude, thinking skills, and general ability.

Intelligence tests are psychological tests that are designed to measure a variety of mental functions, such as reasoning, comprehension, and judgment.

An intelligence test is often defined as a measure of general mental ability. Of the standardized intelligence tests, those developed by David Wechsler are among the most widely used. Wechsler defined intelligence as “the global capacity to act purposefully, to think rationally, and to deal effectively with the environment.” While psychologists generally agree with this definition, they don't agree on the operational definition of intelligence (that is, a statement of the procedures to be used to precisely define the variable to be measured) or on how to accomplish its measurement.

 The goal of intelligence tests is to obtain an idea of the person's intellectual potential. The tests center around a set of stimuli designed to yield a score based on the test maker's model of what makes up intelligence. Intelligence tests are often given as a part of a battery of tests.

 

Advantages

In general, intelligence tests measure a wide variety of human behaviors better than any other measure that has been developed. They allow professionals to have a uniform way of comparing a person's performance with that of other people who are similar in age. These tests also provide information on cultural and biological differences among people. Intelligence tests are excellent predictors of academic achievement and provide an outline of a person's mental strengths and weaknesses. Many times the scores have revealed talents in many people, which have led to an improvement in their educational opportunities. Teachers, parents, and psychologists can devise individual curricula that match a person's level of development and expectations.

 

Disadvantages

Some researchers argue that intelligence tests have serious shortcomings. For example, many intelligence tests produce a single intelligence score. This single score is often inadequate in explaining the multidimensional nature of intelligence.

Another problem with a single score is the fact that individuals with similar intelligence test scores can vary greatly in their expression of these talents. It is important to know the person's performance on the various subtests that make up the overall intelligence test score. Knowing the performance on these various scales can influence the understanding of a person's abilities and how these abilities are expressed. For example, suppose two people have identical scores on an intelligence test. Although both people have the same test score, one person may have obtained the score because of strong verbal skills while the other may have obtained it because of strong skills in perceiving and organizing various tasks. Furthermore, intelligence tests only measure a sample of behaviors or situations in which intelligent behavior is revealed.

For instance, some intelligence tests do not measure a person's everyday functioning, social knowledge, mechanical skills, and/or creativity. Along with this, the formats of many intelligence tests do not capture the complexity and immediacy of real-life situations. Therefore, intelligence tests have been criticized for their limited ability to predict non-test or nonacademic intellectual abilities. Since intelligence test scores can be influenced by a variety of different experiences and behaviors, they should not be considered a perfect indicator of a person's intellectual potential.




Monday, September 25, 2023

Define Classroom Assessment | Characteristics of Classroom Assessment | Educational Assessment and Evaluation

 QUESTION  

What is classroom assessment? What are the characteristics of classroom assessment?

Course: Educational Assessment and Evaluation

Course code 8602

Level: B.Ed Solved Assignment 

ANSWER 

Classroom Assessment

Kizlik (2011) defines assessment as a process by which information is obtained relative to some known objective or goal. Assessment is a broad term that includes testing. For example, a teacher may assess knowledge of the English language through a test and assess the language proficiency of the students through some other instrument, for example an oral quiz or presentation. Based upon this view, we can say that every test is an assessment, but not every assessment is a test. The term 'assessment' is derived from the Latin word 'assidere', which means 'to sit beside'. In contrast to testing, the tone of the term assessment is non-threatening, indicating a partnership based on mutual trust and understanding. This emphasizes that there should be a positive rather than a negative association between assessment and the process of teaching and learning in schools. In the broadest sense, assessment is concerned with children's progress and achievement. In a more comprehensive and specific way, classroom assessment may be defined as the process of gathering, recording, interpreting, using, and communicating information about a child's progress and achievement during the development of knowledge, concepts, skills, and attitudes (NCCA, 2004). In short, we can say that assessment entails much more than testing. It is an ongoing process that includes many formal and informal activities designed to monitor and improve teaching and learning.

 

Characteristics of Classroom Assessment

1. Effective assessment of student learning begins with educational goals.

Assessment is not an end in itself but a vehicle for educational improvement. Its effective practice, then, begins with and enacts a vision of the kinds of learning we most value for students and strive to help them achieve. Educational values/ goals should drive not only what we choose to assess but also how we do so. Where questions about educational mission and values are skipped over, assessment threatens to be an exercise in measuring what's easy, rather than a process of improving what we really care about.

 

2.  Assessment is most effective when it reflects an understanding of learning as multidimensional, integrated, and revealed in performance over time.

 Learning is a complex process. It entails not only what students know but what they can do with what they know; it involves not only knowledge and abilities but also values, attitudes, and habits of mind that affect both academic success and performance beyond the classroom. Assessment should reflect these understandings by employing a diverse array of methods, including those that call for actual performance, using them over time so as to reveal change, growth, and increasing degrees of integration. Such an approach aims for a more complete and accurate picture of learning, and therefore, a firm base for improving our students' educational experience.

 

3. Assessment works best when it has clear, explicitly stated purposes.

Assessment is a goal-oriented process. It entails comparing educational performance with educational purposes and expectations -- those derived from the institution's mission, from faculty intentions in program and course design, and from knowledge of students' own goals. Where program purposes lack specificity or agreement, assessment as a process pushes a campus towards clarity about where to aim and what standards to apply; assessment also prompts attention to where and how program goals will be taught and learned. Clear, shared, implementable goals are the cornerstone for assessment that is focused and useful.


4. Assessment requires attention to outcomes but also equally to the experiences that lead to those outcomes.

Information about outcomes is of high importance; where students "end up" matters greatly. But to improve outcomes, we need to know about student experience along the way -- about the curricula, teaching, and kind of student effort that leads to particular outcomes. Assessment can help us understand which students learn best under what conditions; with such knowledge comes the capacity to improve the whole of their learning.

 

5. Assessment works best when it is ongoing, not episodic.

Assessment is a process whose power is cumulative. Though an isolated, "one-shot" assessment can be better than none, improvement is best fostered when assessment entails a linked series of activities undertaken over time. This may mean tracking the progress of individual students, or of cohorts of students; it may mean collecting the same examples of student performance or using the same instrument semester after semester. The point is to monitor progress toward intended goals in a spirit of continuous improvement. Along the way, the assessment process itself should be evaluated and refined in light of emerging insights.

 

6. Assessment is effective when representatives from across the educational community are involved.

Student education is a campus-wide responsibility, and assessment is a way of enacting that responsibility. Thus, while assessment efforts may start small, the aim over time is to involve people from across the educational community. Faculty play an important role, but assessment questions can't be fully addressed without participation by educators, librarians, administrators, and students. Assessment may also involve individuals from beyond the campus (alumni/ae, trustees, employers) whose experience can enrich the sense of appropriate aims and standards for learning. Thus understood, assessment is not a task for small groups of experts but a collaborative activity; its aim is wider, better-informed attention to student learning by all parties with a stake in its improvement.

 

7. Assessment makes a difference when it begins with issues of use and illuminates questions that people really care about.

Assessment recognizes the value of information in the process of improvement. But to be useful, information must be connected to issues or questions that people really care about. This implies assessment approaches that produce evidence that relevant parties will find credible, suggestive, and applicable to decisions that need to be made. It means thinking in advance about how the information will be used, and by whom. The point of assessment is not to collect data and return "results"; it is a process that starts with the questions of decision-makers, involves them in the gathering and interpreting of data, and informs and helps guide continuous improvement.

 

8. Through effective assessment, educators meet responsibilities to students and to the public.

There is a compelling public stake in education. As educators, we have a responsibility to the public that supports or depends on us to provide information about the ways in which our students meet goals and expectations. But that responsibility goes beyond the reporting of such information; our deeper obligation -- to ourselves, our students, and society -- is to improve. Those to whom educators are accountable have a corresponding obligation to support such attempts at improvement. (American Association for Higher Education, 2003)




Thursday, February 4, 2021

Difference among sociograms, social distance scale | Teacher Education | aiou solved assignment | Course Code 8602

Q 1: Elaborate on the difference among sociograms, the social distance scale, and the 'guess who' questionnaire in terms of their use.

Course: Educational Assessment and Evaluation

Course code 8602

Level: B.Ed Solved Assignment 

Answer:

A sociogram is a visual representation or map of the relationships between individuals.

Definition of Sociogram


Suppose you are a seventh-grade teacher. There are ten students in your classroom: Mike, Olivia, Connor, Tracy, Lena, Darren, James, Tiona, Lisa, and Taylor. You notice that your male and female students have not been getting along well in recent weeks. You are interested in looking at the relationships between your students to help you understand what is going on in your classroom. 


One method that can help you examine relationships is creating a sociogram.


A sociogram is a visual depiction of the relationships among a specific group. The purpose of a sociogram is to uncover the underlying relationships between people. A sociogram can be used to increase your understanding of group behaviors.


How Do You Create a Sociogram?


Before you begin to create a sociogram of the students in your classroom, you must first come up with a criterion, which is what you want to measure. The criterion that you use is usually some question about a specific type of social interaction. A criterion can be either positive or negative.


Positive criteria are those that ask the students to choose something that they either enjoy or would like to participate in with others. Negative criteria ask students to choose something that they would not enjoy. Negative criteria are used to discover resistance or rejection in interpersonal relationships.


Examples of positive criteria that can be used to create a sociogram are:


  • Which three classmates would you most like to go on a vacation with?
  • Which three classmates are your best friends?
  • Which three classmates do you like the most?

Examples of negative criteria that can be used to create a sociogram are:


  • Which three classmates would you least enjoy going on a vacation with?
  • Which three classmates do you like to be around the least?
  • Which three classmates would you least like to be stranded on an island with?
Once your students have all answered the question, you tabulate the results and use them to create a sociogram.

Social Distance Scale

Sociologist R.E. Park (1923) coined the term social distance while describing the observed fact that the kinds of situations in which contact occurs between a dominant group and subordinates vary in their degree of intimacy, from kinship by marriage, residence in the same neighborhood, and work in the same occupation to absolutely no contact.


Emory Bogardus, an eminent sociologist at the University of Southern California, developed a scale in 1942 for measuring the social distances among various groups in the United States. It was further given prominence by Katz and Allport under the able guidance of Gallet and Bogardus.


Bogardus was interested in measuring people's attitudes towards different races and nationalities, and in comparing them through his social distance scale. The procedure for the construction of the scale is as follows:

The investigator first formulates various statements indicating different degrees of acceptance or rejection of the group.

The subject has to indicate how close or how far away he is from the members of the other group. The distance measured by these statements is basically psychological. A favorable attitude is indicated by closeness and an unfavorable attitude by distance: the greater the distance, the more unfavorable the attitude, and the less the distance, the more favorable the attitude.



The psychological distance is progressively increased in the scale as one proceeds from the first to the last statement, starting from close kinship by marriage and ending with exclusion from the country. Bogardus thus asked the respondents to indicate to which of these steps they would admit members of the various groups in the United States of America.
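A minimal sketch of scoring a Bogardus-style scale in Python; the step ordering is paraphrased from the classic scale, and the responses are illustrative assumptions:

```python
# Each respondent reports the closest step to which they would admit
# members of a group (1 = close kinship by marriage ... 7 = would
# exclude from the country). A lower mean distance indicates a more
# favorable attitude toward the group.

responses = [1, 2, 1, 3, 2]   # closest step accepted, one per respondent

mean_distance = sum(responses) / len(responses)
print(mean_distance)  # 1.8 -- on average, quite a favorable attitude
```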


Guess Who Questionnaire


This worksheet includes prompt questions to help students play the game 'Guess Who?'. It is for the beginner level. The worksheet includes short questions and descriptions of people, and it helps students complete a meaningful speaking activity in which they have to guess the identity of their partner's character based on questions about appearance. The game can be played with two or more players.






Saturday, January 9, 2021

The procedure for development of multiple-choice test items and assembling the test, with ten multiple-choice items from a subject of your choice | Teacher Education | aiou solved assignment | Course Code 8602

Q 5: Briefly describe the procedure for the development of multiple-choice test items, assemble the test, and prepare ten multiple-choice items from the subject of your choice.

Course: Educational Assessment and Evaluation

Course code 8602

Level: B.Ed Solved Assignment 

Answer:

Multiple choice test questions, also known as items, can be an effective and efficient way to assess learning outcomes. Multiple choice test items have several potential advantages:

Versatility: 

Multiple choice test items can be written to assess various levels of learning outcomes, from basic recall to application, analysis, and evaluation. Because students are choosing from a set of potential answers, however, there are obvious limits on what can be tested with multiple-choice items. For example, they are not an effective way to test students' ability to organize thoughts or articulate explanations or creative ideas.


Reliability: 

Reliability is defined as the degree to which a test consistently measures a learning outcome. Multiple-choice test items are less susceptible to guessing than true/false questions, making them a more reliable means of assessment. The reliability is enhanced when the number of MC items focused on a single learning objective is increased. In addition, the objective scoring associated with multiple choice test items frees them from problems with scorer inconsistency that can plague the scoring of essay questions.


Validity: 

Validity is the degree to which a test measures the learning outcomes it purports to measure. Because students can typically answer a multiple-choice item much more quickly than an essay question, tests based on multiple-choice items can typically focus on a relatively broad representation of course material, thus increasing the validity of the assessment. The key to taking advantage of these strengths, however, is the construction of good multiple-choice items.


A multiple-choice item consists of a problem, known as the stem, and a list of suggested solutions, known as alternatives. The alternatives consist of one correct or best alternative, which is the answer, and incorrect or inferior alternatives, known as distractors.
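As a minimal sketch, this anatomy maps directly onto a small data structure; the geography item below is an illustrative assumption, not drawn from the source:

```python
# A multiple-choice item: a stem plus alternatives, one of which is the
# answer; the rest are distractors. Scoring is fully objective.
from dataclasses import dataclass

@dataclass
class MultipleChoiceItem:
    stem: str
    alternatives: list[str]   # the answer plus the distractors
    answer: int               # index of the correct alternative

    def score(self, chosen: int) -> int:
        """1 point if the chosen alternative is the answer, else 0."""
        return 1 if chosen == self.answer else 0

item = MultipleChoiceItem(
    stem="Which city is the capital of Pakistan?",
    alternatives=["Karachi", "Islamabad", "Lahore", "Peshawar"],
    answer=1,
)
print(item.score(1))  # 1 -- correct
print(item.score(0))  # 0 -- a distractor was chosen
```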



Constructing an Effective Stem

1. The stem should be meaningful by itself and should present a definite problem. A stem that presents a definite problem allows a focus on the learning outcome. A stem that does not present a clear problem, however, may test students’ ability to draw inferences from vague descriptions rather than serving as a more direct test of students’ achievement of the learning outcome.




2. The stem should not contain irrelevant material, which can decrease the reliability and the validity of the test scores (Haladyna and Downing, 1989).


3. The stem should be negatively stated only when significant learning outcomes require it. Students often have difficulty understanding items with negative phrasing (Rodriguez 1997). If a significant learning outcome requires negative phrasing, such as the identification of dangerous laboratory or clinical practices, the negative element should be emphasized with italics or capitalization.



4. The stem should be a question or a partial sentence. A question stem is preferable because it allows the student to focus on answering the question rather than holding the partial sentence in working memory and sequentially completing it with each alternative (Statman 1988). The cognitive load is increased when the stem is constructed with an initial or interior blank, so this construction should be avoided.







