
Sunday, December 3, 2023

Chi-Square | Independence Test | Educational Statistics | 8614 |

 QUESTION

Explain Chi-Square. Also, discuss it as an independent test.

  • Course: Educational Statistics
  • Course code 8614
  • Level: B.Ed Solved Assignment 

ANSWER

The Chi-Square Distribution

The Chi-Square (or Chi-Squared, χ2) distribution is a special case of the gamma distribution (the gamma distributions are a family of right-skewed, continuous probability distributions; they are useful in real life for quantities that have a natural minimum of 0). A chi-square distribution with n degrees of freedom is equal to a gamma distribution with a = n/2 and b = 0.5 (or β = 2).

Let us consider random samples taken from a standard normal distribution. The chi-square distribution is the distribution of the sum of the squares of these samples. The degrees of freedom (say k) equal the number of samples being summed. For example, if 10 samples are taken from the standard normal distribution, then the degrees of freedom are df = 10. Chi-square distributions are always right-skewed, but the greater the degrees of freedom, the more the chi-square distribution looks like a normal distribution.
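To make this definition concrete, here is a minimal simulation sketch (assuming NumPy is available; the sample size and seed are arbitrary choices for illustration): summing k squared draws from a standard normal distribution produces values that follow a chi-square distribution with k degrees of freedom.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 10                                # degrees of freedom
z = rng.standard_normal((100_000, k))
chi2_samples = (z ** 2).sum(axis=1)   # sum of k squared standard normals

print(chi2_samples.mean())  # close to k = 10, the mean of a chi-square
print(chi2_samples.var())   # close to 2k = 20, its variance
```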

Uses of Chi-Square (χ2) Distribution

The chi-square distribution has many uses which include:

i)  Confidence interval estimation for a population standard deviation of a normal distribution from a sample standard deviation.

ii)  Independence of two criteria of classification of qualitative variables (contingency tables).

iii)  Relationship between categorical variables.

iv)  Sample variance study when the underlying distribution is normal.

v)  Tests of deviations of differences between expected and observed frequencies (one-way table).

vi)  The chi-square test (a goodness of fit test).

What is a Chi-Square Statistic?

A chi-square statistic is one way to test for a relationship between two categorical (non-numerical) variables. The chi-square statistic is a single number that tells us how much difference exists between the observed counts and the counts one would expect if there were no relationship in the population.
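For reference, the statistic itself is computed as

χ² = Σ (O – E)² / E

where O is the observed count and E is the expected count in each category or cell; the larger the value, the greater the gap between the observed data and what the null hypothesis predicts.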

There are two different types of chi-square tests, and both involve categorical data. These are:

a)  A chi-square goodness of fit test, and

b)  A chi-square test of independence.

In the coming lines, these tests will be dealt with in some detail.

 

Chi-Square Independence Test

A chi-square (χ2) test of independence is the second important form of a chi-square test. It is used to explore the relationship between two categorical variables. Each of these variables can have two or more categories.

It determines whether there is a significant relationship between two nominal (categorical) variables. The frequency of one nominal variable is compared across the values of the second nominal variable. The data can be displayed in an R × C contingency table, where R is the number of rows and C is the number of columns. For example, suppose a researcher wants to examine the relationship between gender (male vs. female) and empathy (high vs. low); the researcher will use the chi-square test of independence. If the null hypothesis is accepted, there is no relationship between gender and empathy. If the null hypothesis is rejected, the conclusion is that there is a relationship between gender and empathy (e.g., females tend to score higher on empathy and males tend to score lower).
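As an illustration of the gender-and-empathy example, here is a minimal sketch using scipy.stats; the counts in the table are invented purely for illustration.

```python
from scipy.stats import chi2_contingency

# Invented 2 x 2 contingency table: rows = gender, columns = empathy level.
observed = [[30, 20],   # male:   [high empathy, low empathy]
            [45, 15]]   # female: [high empathy, low empathy]

chi2, p, dof, expected = chi2_contingency(observed)
print(chi2, p, dof)  # reject the null hypothesis of independence if p < 0.05
```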

Because the chi-square test of independence is a non-parametric technique, it follows less strict assumptions; still, some general assumptions should be taken care of:

Random Sample –

 The sample should be selected using a simple random sampling method.

Variables –

Both variables under study should be categorical.

Independent Observations –

Each person or case should be counted only once and none should appear in more than one category or group. The data from one subject should not influence the data from another subject.

Expected Frequencies –

If the data are displayed in a contingency table, the expected frequency count for each cell of the table should be at least 5.

The two chi-square tests are sometimes confused, but they are quite different from each other:

  • The chi-square test for independence compares two sets of data to see if there is a relationship.
  • The chi-square goodness-of-fit test checks whether one categorical variable fits a hypothesized distribution (a short sketch follows below).
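As a contrast with the independence example above, here is a minimal goodness-of-fit sketch (the die-roll counts are invented): it asks whether 60 rolls fit the uniform distribution expected of a fair die.

```python
from scipy.stats import chisquare

observed = [8, 12, 9, 11, 6, 14]    # invented counts for die faces 1..6
result = chisquare(f_obs=observed)  # expected counts default to equal (10 each)
print(result.statistic, result.pvalue)  # a small p-value indicates a poor fit
```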




Friday, December 1, 2023

ANOVA and Its Logic | Educational Statistics | 8614 |

QUESTION

Explain ANOVA and its logic.

  • Course: Educational Statistics
  • Course code 8614
  • Level: B.Ed Solved Assignment

ANSWER

Introduction to Analysis of Variance (ANOVA)

The t-tests have one very serious limitation – they are restricted to tests of the significance of the difference between only two groups. There are many times when we would like to see if there are significant differences among three, four, or even more groups. For example, we may want to investigate which of three teaching methods is best for teaching ninth-class algebra. In such a case, we cannot use a t-test because more than two groups are involved. To deal with such cases, one of the most useful techniques in statistics is analysis of variance (abbreviated as ANOVA). This technique was developed by the British statistician Ronald A. Fisher (Dietz & Kalof, 2009; Bartz, 1981).

Analysis of Variance (ANOVA) is a hypothesis-testing procedure that is used to evaluate mean differences between two or more treatments (or populations). Like all other inferential procedures, ANOVA uses sample data as a basis for drawing general conclusions about populations. Sometimes, ANOVA and t-tests are simply two different ways of doing exactly the same thing: testing for mean differences. In some cases this is true – both tests use sample data to test hypotheses about population means.

However, ANOVA has several advantages over t-tests. A t-test is used when we have to compare only two groups (one independent and one dependent variable). On the other hand, ANOVA is used when we have two or more treatments (groups) to compare. Suppose we want to study the effects of three different models of teaching on the achievement of students. In this case, we have three different samples to be treated using three different treatments, so ANOVA is the suitable technique to evaluate the difference.

 

Logic of ANOVA

Let us take the hypothetical data given in the table (the table itself is not reproduced here).
There are three separate samples, with n = 5 in each sample. The dependent variable is the number of problems solved correctly. These data represent the results of an independent-measures experiment comparing learning performance under three temperature conditions. The scores are variable, and we want to measure the amount of variability (i.e., the size of the differences) to explain where it comes from.

To measure the total variability, we combine all the scores from all the separate samples into one group and then obtain one general measure of variability for the complete experiment. Once we have measured the total variability, we can begin to break it into separate components. The word analysis means breaking into smaller parts.

Because we are going to analyze the variability, the process is called analysis of variance (ANOVA). This analysis process divides the total variability into two basic components:

i)  Between-Treatment Variance

Variance simply means difference, and calculating the variance is a process of measuring how big the differences are for a set of numbers. The between-treatment variance measures how much difference exists between the treatment conditions. In addition to measuring the differences between treatments, the overall goal of ANOVA is to evaluate those differences. Specifically, the purpose of the analysis is to distinguish between two alternative explanations:

a)  The differences between the treatments have been caused by the treatment effects.

b)  The differences between the treatments are simply due to chance. 

Thus, there are always two possible explanations for the variance (difference) that exists between treatments:

1)  Treatment Effect: 

The differences are caused by the treatments. For example, the scores in sample 1 are obtained at a room temperature of 50° and those in sample 2 at 70°. The difference between the samples may be caused by the difference in room temperature.

2)  Chance: 

The differences are simply due to chance. If there is no treatment effect, even then we can expect some difference between samples. The chance differences are unplanned and unpredictable differences that are not caused or explained by any action of the researcher. Researchers commonly identify two primary sources for chance differences.

  Individual Differences

Each participant in the study has their own individual characteristics. Although it is reasonable to expect that different subjects will produce different scores, it is impossible to predict exactly what the difference will be. 

   Experimental Error

In any measurement, there is a chance of some degree of error. Thus, if a researcher measures the same individuals twice under the same conditions, there is a good chance of obtaining two different measurements. Because these differences are unplanned and unpredictable, they are considered to be due to chance.

Thus, when we calculate the between-treatment variance, we are measuring differences that could be caused either by a treatment effect or simply by chance. To demonstrate that the difference is really a treatment effect, we must establish that the differences between treatments are bigger than would be expected by chance alone. To accomplish this goal, we determine how big the differences are when there is no treatment effect involved; that is, we measure how much difference (variance) occurs by chance. To measure chance differences, we compute the variance within treatments.

ii)  Within-Treatment Variance

Within each treatment condition, we have a set of individuals who are treated exactly the same, and the researcher does nothing that would cause these individual participants to have different scores. For example, the data show that five individuals were treated at a 70° room temperature. Although these five students were all treated exactly the same, their scores are different. Why are the scores different? A plain answer is that the differences are due to chance. Together, the between-treatment and within-treatment components make up the overall analysis of variance and identify the sources of variability measured by each of the two basic components.
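To make this logic concrete, here is a minimal sketch with invented scores for three temperature conditions (n = 5 each; the table from the text is not reproduced, so these numbers are illustrative only). ANOVA forms the F-ratio, F = between-treatment variance / within-treatment variance, and a large F suggests a genuine treatment effect rather than chance.

```python
from scipy.stats import f_oneway

# Invented problem-solving scores under three room temperatures.
temp_50 = [0, 1, 3, 1, 0]
temp_70 = [4, 3, 6, 3, 4]
temp_90 = [1, 2, 2, 0, 0]

# f_oneway computes F = variance between treatments / variance within treatments.
F, p = f_oneway(temp_50, temp_70, temp_90)
print(F, p)  # a small p-value suggests the temperature conditions really differ
```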











Wednesday, November 22, 2023

Process and Errors in Hypothesis Testing | Educational Statistics | 8614 |

QUESTION

Explain the process and errors in hypothesis testing. 

Course: Educational Statistics

Course code 8614

Level: B.Ed Solved Assignment 

ANSWER

Four-Step Process for Hypothesis Testing

The process of hypothesis testing goes through the following four steps.

i)  Stating the Hypothesis

The process of hypothesis testing begins by stating a hypothesis about the unknown population. Usually, a researcher states two opposing hypotheses, both stated in terms of unknown population parameters.

The first and most important of the two hypotheses is called the null hypothesis. A null hypothesis states that the treatment has no effect. In general, the null hypothesis states that there is no change, no effect, no difference – nothing happened. The null hypothesis is denoted by the symbol H0 (H stands for hypothesis and 0 denotes a zero effect).

The null hypothesis (H0) states that in the general population, there is no change, no difference, or no relationship. In an experimental study, the null hypothesis (H0) predicts that the independent variable (treatment) will have no effect on the dependent variable for the population.

The second hypothesis is simply the opposite of the null hypothesis and it is called the scientific or alternative hypothesis. It is denoted by H1. This hypothesis states that the treatment has an effect on the dependent variable.

The alternative hypothesis (H1) states that there is a change, a difference, or a relationship for the general population. In an experiment, H1 predicts that the independent variable (treatment) will have an effect on the dependent variable.

ii)  Setting Criteria for the Decision

In common practice, a researcher uses the data from the sample to evaluate the credibility of the null hypothesis. The data will either support or refute the null hypothesis. To formalize the decision process, a researcher uses the null hypothesis to predict exactly what kind of sample should be obtained if the treatment has no effect. In particular, a researcher examines all the possible sample means that could be obtained if the null hypothesis is true.

iii)  Collecting Data and Computing Sample Statistics

The next step in hypothesis testing is to obtain the sample data. The raw data are then summarized with appropriate statistics such as the mean and standard deviation, so that the researcher can compare the sample mean with the null hypothesis.

iv)  Make a Decision

In the final step, the researcher decides, in light of the analysis of the data, whether to accept or reject the null hypothesis. If the analysis of the data supports the null hypothesis, he accepts it, and vice versa.
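The four steps can be traced in a minimal sketch (the sample scores, the hypothesized mean of 100, and the use of a one-sample t-test are all assumptions made here for illustration):

```python
from scipy.stats import ttest_1samp

# Step 1: State the hypotheses.
#   H0: the population mean is 100 (the treatment has no effect).
#   H1: the population mean is not 100 (the treatment has an effect).

# Step 2: Set the criteria for the decision.
alpha = 0.05

# Step 3: Collect data and compute the sample statistic.
sample = [104, 99, 110, 102, 108, 97, 105, 103]   # invented scores
t_stat, p_value = ttest_1samp(sample, popmean=100)

# Step 4: Make a decision.
if p_value < alpha:
    print("Reject H0: the treatment appears to have an effect.")
else:
    print("Fail to reject H0: no evidence of a treatment effect.")
```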

 

Uncertainty and Error in Hypothesis Testing

Hypothesis testing is an inferential process. It uses limited information obtained from a sample to reach general conclusions about the population. Because a sample is a small subset of the population, it provides only limited or incomplete information about the whole population. Yet a hypothesis test uses the information obtained from the sample. In this situation, there is always the probability of reaching an incorrect conclusion.

Generally, two kinds of errors can be made.

i)  Type I Errors

A type I error occurs when a researcher rejects a null hypothesis that is actually true. It means that the researcher concludes that the treatment does have an effect when in fact the treatment has no effect.

Type I error is not a stupid mistake in the sense that the researcher is overlooking something that should be perfectly obvious. He is looking at the data obtained from the sample that appear to show a clear treatment effect. The researcher then makes a careful decision based on available information. He never knows whether a hypothesis is true or false.

The consequences of a Type I error can be very serious because the researcher has rejected the null hypothesis and believes that the treatment has a real effect. It is likely that the researcher will report or publish the research results, and other researchers may try to build theories or develop experiments based on the false results.

ii)  Type II Errors

A Type II error occurs when a researcher fails to reject a null hypothesis that is really false. This means that a treatment effect really exists, but the hypothesis test fails to detect it. This type of error typically occurs when the effect of the treatment is relatively small; that is, the treatment does influence the sample, but the magnitude of the effect is very small.

The consequences of Type II errors are not very serious. In case of Type II error, the research data do not show the results that the researcher had hoped to obtain. The researcher can accept this outcome and conclude that the treatment either has no effect or has a small effect that is not worth pursuing. Or the researcher can repeat the experiment with some improvement and try to demonstrate that the treatment does work. It is impossible to determine a single, exact probability value for a type II error.
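The relationship between the researcher's decision and the true state of affairs can be summarized in the standard decision matrix:

Decision            | H0 is actually true | H0 is actually false
Reject H0           | Type I error (α)    | Correct decision
Fail to reject H0   | Correct decision    | Type II error (β)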

Summarizing, we can say that a hypothesis test always leads to one of two decisions:

i)  The sample data provides sufficient evidence to reject the null hypothesis and the researcher concludes that the treatment has an effect.

ii)  The sample data do not provide enough evidence to reject the null hypothesis. The researcher fails to reject the null hypothesis and concludes that the treatment does not appear to have an effect.

Tuesday, November 21, 2023

Procedure for Determining Median | Merits of Median | Demerits of Median | Educational Statistics | 8614 |

 

QUESTION

How do we calculate the median? Also, mention its merits and demerits.
Course: Educational Statistics

Course code 8614

Level: B.Ed Solved Assignment 

ANSWER

Median

The median is the middle value of rank-ordered data. It divides the distribution into two halves (i.e., 50% of scores or observations fall on either side of the median value). This means that this value separates the higher half of the data set from the lower half. The goal of the median is to determine the precise midpoint of the distribution. The median is appropriate for describing ordinal data.

Procedure for Determining Median

When the number of scores is odd, simply arrange the scores in order (from lower to higher or from higher to lower). The median will be the middle score in the list. Consider the set of scores 2, 5, 7, 10, 12. The score “7” lies in the middle, so it is the median. When there is an even number of scores in the distribution, again arrange the scores in order. The median will be the average of the middle two scores in the list. Consider the set of scores 4, 6, 9, 14, 16, 20. The average of the middle two scores, (9 + 14)/2 = 23/2 = 11.5, is the median of the distribution.

The median is less affected by outliers and skewed data and is usually the preferred measure of central tendency when the distribution is not symmetrical. The median cannot be determined for categorical or nominal data.
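A minimal sketch using Python's standard library mirrors the two examples above (odd and even numbers of scores):

```python
from statistics import median

print(median([2, 5, 7, 10, 12]))      # odd count  -> middle score: 7
print(median([4, 6, 9, 14, 16, 20]))  # even count -> (9 + 14) / 2 = 11.5
```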

Merits of Median

i)  It is rigidly defined.

ii)  It is easy to understand and calculate.

iii)  It is not affected by extreme values.

iv)  Even if the extreme values are not known, the median can be calculated.

v)  It can be located just by inspection in many cases.

vi)  It can be located graphically.

vii)  It is not much affected by sampling fluctuations.

viii)  It can be calculated from data based on an ordinal scale.

ix)  It is suitable for skewed distribution.

x)  It is easily located in individual and discrete classes.

Demerits of Median

i)  It is not based on all values of the given data.

ii)  For larger data sets, arranging the data in increasing order is a somewhat difficult process.

iii)  It is not capable of further mathematical treatment.

iv)  It is not sensitive to changes in some of the data values.

v)  It cannot be used for further mathematical processing.



Monday, November 20, 2023

Measures of Dispersion | Educational Statistics | 8614 |

QUESTION

Explain different measures of dispersion used in educational research.

  • Course: Educational Statistics
  • Course code 8614
  • Level: B.Ed Solved Assignment

ANSWER

Introduction to Measures of Dispersion

Measures of central tendency focus on what is average or in the middle of a distribution of scores. Often the information provided by these measures does not give us a clear picture of the data, and we need something more. Knowing the mean, median, and mode of a distribution does not always allow us to differentiate between two or more distributions; we need additional information. This additional information is provided by a series of measures commonly known as measures of dispersion.

There is dispersion when there is dissimilarity among the data values. The greater the dissimilarity, the greater the degree of dispersion will be.

Measures of dispersion are needed for four basic purposes.

i)  To determine the reliability of an average.

ii)  To serve as a basis for the control of the variability.

iii)  To compare two or more series about their variability.

iv)  To facilitate the use of other statistical measures.

 

Measures of dispersion enable us to compare two or more series with respect to their variability. They can also be regarded as a means of determining uniformity or consistency: a high degree of variation means little consistency or uniformity, whereas a low degree of variation means greater uniformity or consistency in the data set. Commonly used measures of dispersion are the range, quartile deviation, mean deviation, variance, and standard deviation.

Range

The range is the simplest measure of spread and is the difference between the highest and lowest scores in a data set. In other words, we can say that the range is the distance between the largest score and the smallest score in the distribution. We can calculate the range as:

Range = Highest value of the data – The lowest value of the data


For example, if the lowest and highest marks scored in a test are 22 and 95 respectively, then

Range = 95 – 22 = 73

The range is the easiest measure of dispersion to calculate and is useful when you want a quick sense of the spread of a whole dataset. However, it is not considered a good measure of dispersion because it does not use any other information about the spread. Outliers, whether extremely low or extremely high values, can considerably affect the range.

Quartiles

The values that divide the given set of data into four equal parts are called quartiles and are denoted by Q1, Q2, and Q3. Q1 is called the lower quartile and Q3 is called the upper quartile. 25% of scores are less than Q1 and 75% of scores are less than Q3. Q2 is the median. For ungrouped data arranged in order, the quartile positions are:

Q1 = the value at the (N + 1)/4-th position
Q3 = the value at the 3(N + 1)/4-th position

where N is the number of scores.
Quartile Deviation (QD)

Quartile deviation, or the semi-interquartile range, is one-half the difference between the first and the third quartile, i.e.

QD = (Q3 – Q1) / 2

Where Q1 = the first quartile (lower quartile)

Q3 = third quartile (upper quartile)

Calculating quartile deviation from ungrouped data:

To calculate quartile deviation from ungrouped data, the following steps are used.

i)  Arrange the test scores from highest to lowest.

ii)  Assign a serial number to each score. The first serial number is assigned to the lowest score.

iii)  Determine the first quartile (Q1) by using the formula (N + 1)/4. Use the obtained value to locate the serial number of the score that falls under Q1.

iv)  Determine the third quartile (Q3) by using the formula 3(N + 1)/4. Locate the serial number corresponding to the obtained answer; opposite this number is the test score corresponding to Q3.

v)  Subtract Q1 from Q3, and divide the difference by 2 (a worked sketch follows).
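Here is a minimal sketch with invented scores (note that numpy.percentile interpolates, so its quartiles can differ slightly from the serial-number method described above):

```python
import numpy as np

scores = [22, 35, 40, 43, 47, 51, 56, 60, 68, 72, 80, 95]  # invented scores
q1, q3 = np.percentile(scores, [25, 75])
quartile_deviation = (q3 - q1) / 2
print(q1, q3, quartile_deviation)
```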

Mean Deviation or Average Deviation

The mean or average deviation is defined as the arithmetic mean of the deviations of the scores from the mean or the median, with the deviations taken as positive. Mathematically,

MD = Σ|X – Mean| / N

where N is the number of scores.

Standard Deviation

Standard deviation is the most commonly used and the most important measure of variation. It determines whether the scores are generally near or far from the mean, i.e., whether the scores are clustered together or scattered. In simple words, the standard deviation tells how tightly the scores are clustered around the mean in a data set. When the scores are close to the mean, the standard deviation is small; a large standard deviation tells that the scores are spread apart. The standard deviation is simply the square root of the variance, i.e.

SD = √Variance,  where  Variance = Σ(X – Mean)² / N
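The measures discussed above can be computed for a single invented data set with Python's standard library (a sketch, not part of the original assignment):

```python
from statistics import mean, pvariance, pstdev

scores = [22, 35, 47, 51, 60, 72, 95]   # invented test scores

data_range = max(scores) - min(scores)  # Range = 95 - 22 = 73
m = mean(scores)
mean_deviation = sum(abs(x - m) for x in scores) / len(scores)
variance = pvariance(scores)            # population variance
std_dev = pstdev(scores)                # square root of the variance
print(data_range, mean_deviation, variance, std_dev)
```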



Thursday, November 16, 2023

Non-Probability Sampling | Educational Statistics |

 

QUESTION

Explain non-probability sampling techniques used in educational research. 
Course: Educational Statistics

Course code 8614

Level: B.Ed Solved Assignment 

ANSWER

Non-Probability Sampling

This technique depends on subjective judgment. It is a process in which probabilities cannot be assigned to individuals objectively. This means that samples are gathered in a way that does not give all individuals in the population an equal chance of being selected. Choosing these methods could result in biased data or a limited ability to make general inferences from the findings. But there are also many situations in which this kind of sampling technique is the best choice for a particular research question or stage of research.

There are four kinds of non-probability sampling techniques.

i)  Convenience Sampling

In this technique, a researcher relies on available subjects, such as stopping people in markets or on street corners as they pass by. This method is extremely risky and does not allow the researcher any control over the representativeness of the sample. It is useful when the researcher wants to know the opinion of the masses on a current issue, or the characteristics of people passing by on the street at a certain point in time, or when time and resources are so limited that the research would not otherwise be possible. Whatever the reason for selecting a convenience sample, it is not possible to generalize its results to a wider population.

ii)  Purposive or Judgmental Sampling

In this technique, a sample is selected on the basis of knowledge of the population and the purpose of the study. For example, when an educational psychologist wants to study the emotional and psychological effects of corporal punishment, he will create a sample that includes only those students who have received corporal punishment. In this case, the researcher uses a purposive sample because those selected fit a specific purpose or description that is necessary for conducting the research.

iii)  Snowball Sampling

This type of sampling is appropriate when the members of the population are difficult to locate, such as homeless individuals, undocumented immigrants, etc. A snowball sample is one in which the researcher collects data on the few members of the target population he or she can locate, and then asks those individuals to provide the information needed to locate other members of the population whom they know. For example, if a researcher wants to interview undocumented immigrants from Afghanistan, he might interview a few undocumented individuals he knows or can locate and would then rely on those subjects to help locate more undocumented individuals. This process continues until the researcher has all the interviews he needs or all contacts have been exhausted. This technique is useful when studying a sensitive topic that people might not talk about openly, or when talking about the issue under investigation could jeopardize their safety.

iv)  Quota Sample

A quota sample is one in which units are selected into the sample on the basis of pre-specified characteristics, so that the total sample has the same distribution of characteristics assumed to exist in the population. For example, if a researcher wants a national quota sample, he might need to know what proportion of the population is male and what proportion is female, as well as what proportion of each gender falls into different age and educational categories. The researcher would then collect a sample with the same proportions as the national population.
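A minimal sketch of quota allocation (the population proportions are invented): known proportions are converted into target counts for a sample of 200, which interviewers then fill non-randomly.

```python
# Invented population proportions for gender-by-age groups.
population_proportions = {
    ("male", "under 30"): 0.22, ("male", "30 and over"): 0.27,
    ("female", "under 30"): 0.24, ("female", "30 and over"): 0.27,
}
sample_size = 200
quotas = {group: round(p * sample_size)
          for group, p in population_proportions.items()}
print(quotas)  # target counts per group, matching the population distribution
```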

Saturday, November 11, 2023

Descriptive and Inferential Statistics | Educational Statistics |

 QUESTION

How do descriptive and inferential statistics help a teacher? Explain. 

Course: Educational Statistics
Course code 8614
Level: B.Ed Solved Assignment 

ANSWER

Descriptive and Inferential Statistics 

Researchers use a variety of statistical procedures to organize and interpret data. These procedures can be classified into two categories – descriptive statistics and inferential statistics. The starting point for dealing with a collection of data is to organize, display, and summarize it effectively. This is the major objective of descriptive statistics.

Descriptive statistics, as the name implies, describe the data. Descriptive statistics consist of methods for organizing and summarizing information. These are statistical procedures used to organize, summarize, and simplify data. In these techniques, raw scores are taken and reduced, using statistical methods, to a more manageable form. These techniques allow the researcher to describe a large amount of information, or many scores, with a few indices such as the mean, median, and standard deviation. When these indices are calculated for a sample, they are called statistics; when they are calculated for an entire population, they are called parameters (Fraenkel, Wallen, & Hyun, 2012). Descriptive statistics also organize scores in the form of a table or a graph, which is especially useful when the researcher needs to handle interrelationships among more than two variables.

Only summarizing and organizing data is not the whole purpose of a researcher. He often wishes to make inferences about a population based on data he has obtained from a sample. For this purpose, he uses inferential statistics. Inferential Statistics are techniques that allow a researcher to study samples and then make generalizations about the populations from which they are selected.

The population of a research study is typically so large that it is difficult for a researcher to observe each individual, so a sample is selected. By analyzing the results obtained from the sample, the researcher hopes to draw general conclusions about the population. One problem with using a sample is that it provides only limited information about the population. To address this problem, researchers rely on the notion that the sample should be representative of the population. That is, the general characteristics of the sample should be consistent with the characteristics of the population.
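The division of labor between the two kinds of statistics can be illustrated in a short sketch (the scores are invented; assumes SciPy is installed): the mean and standard deviation describe the sample itself, while the confidence interval is an inference about the population.

```python
import numpy as np
from scipy import stats

scores = np.array([72, 85, 78, 90, 66, 81, 75, 88, 70, 79])  # invented sample

# Descriptive statistics: summarize the sample.
print(scores.mean(), scores.std(ddof=1))

# Inferential statistics: a 95% confidence interval for the population mean.
ci = stats.t.interval(0.95, df=len(scores) - 1,
                      loc=scores.mean(), scale=stats.sem(scores))
print(ci)
```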



Saturday, February 15, 2020

What are levels of measurement? | description of level and differentiate each level from other levels | Introduction to Educational Statistics | BEd Solved Assignment Course Code 8614

What are the levels of measurement? Explain each level so that the reader can understand the description of the level and differentiate each level from other levels.

Write down 10 examples for each level and further explain one example from each level.


  • Course: Introduction to Educational Statistics (8614)
  • Level: B.Ed (1.5 Years)


Answer:


Data Levels of Measurement


A variable has one of four different levels of measurement: Nominal, Ordinal, Interval, or Ratio.   (Interval and Ratio levels of measurement are sometimes called Continuous or Scale).

The researcher needs to understand the different levels of measurement because these levels, together with how the research question is phrased, dictate what statistical analysis is appropriate.

In ascending order of precision, the four different levels of measurement are:

  Nominal–Latin for name only (Republican, Democrat, Green, Libertarian)

  Ordinal–Think ordered levels or ranks (small–8oz, medium–12oz, large–32oz)

  Interval–Equal intervals among levels (1 dollar to 2 dollars is the same interval as 88 dollars to 89 dollars)

  Ratio–Let the “o” in ratio remind you of a zero in the scale (Day 0, day 1, day 2, day 3, …)

The first level of measurement is the nominal level. Here the numbers in the variable are used only to classify the data; words, letters, and alphanumeric symbols can also be used. Suppose there are data about people belonging to three different gender categories: a person belonging to the female gender could be classified as F, a person belonging to the male gender as M, and a transgender person as T. This type of classification is the nominal level of measurement.

The second level of measurement is the ordinal level of measurement.   This level of measurement depicts some ordered relationship among the variable’s observations.  Suppose a student scores the highest grade of 100 in the class.   In this case, he would be assigned the first rank.   Then, another classmate scored the second highest grade of 92; she would be assigned the second rank.   A third student scores 81 and he would be assigned the third rank, and so on.     The ordinal level of measurement indicates an ordering of the measurements.

The third level of measurement is the interval level. The interval level of measurement not only classifies and orders the measurements but also specifies that the distances between the intervals on the scale are equivalent all along the scale, from low to high. For example, on an interval measure of anxiety, the distance between scores of 10 and 11 is the same as the distance between scores of 40 and 41. A popular example of this level of measurement is temperature in centigrade, where the distance between 94°C and 96°C is the same as the distance between 100°C and 102°C.

The fourth level of measurement is the ratio level. In this level of measurement, the observations, in addition to having equal intervals, can take a value of zero. The true zero on the scale distinguishes this type of measurement from the other types, although its other properties are the same as those of the interval level. In the ratio level of measurement, the divisions between the points on the scale have equivalent distances between them.

The researcher should note that among these levels of measurement, the nominal level is simply used to classify data, whereas the levels of measurement described by the interval level and the ratio level are much more exact.
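A compact reference sketch (the example variables and statistics listed here are added for illustration, not taken from the text) maps each level to typical variables and to the statistics it supports:

```python
levels = {
    "nominal":  {"examples": ["gender", "religion", "blood group"],
                 "statistics": ["frequency counts", "mode", "chi-square"]},
    "ordinal":  {"examples": ["class rank", "letter grades", "Likert ratings"],
                 "statistics": ["median", "percentiles", "rank correlation"]},
    "interval": {"examples": ["temperature in Celsius", "IQ scores"],
                 "statistics": ["mean", "standard deviation", "t-test"]},
    "ratio":    {"examples": ["age", "height", "items recalled"],
                 "statistics": ["all interval statistics", "ratios of values"]},
}
for level, info in levels.items():
    print(level, "->", ", ".join(info["statistics"]))
```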

What level of measurement is used for psychological variables?


Rating scales are used frequently in psychological research. For example, experimental subjects may be asked to rate their level of pain, how much they like a consumer product, their attitudes about capital punishment, and their confidence in an answer to a test question.

Typically these ratings are made on a 5-point or a 7-point scale. These scales are ordinal since there is no assurance that a given difference represents the same thing across the range of the scale. For example, there is no way to be sure that a treatment that reduces pain from a rated pain level of 3 to a rated pain level of 2 represents the same level of relief as a treatment that reduces pain from a rated pain level of 7 to a rated pain level of 6.

In memory experiments, the dependent variable is often the number of items correctly recalled. What scale of measurement is this? You could reasonably argue that it is a ratio scale. First, there is a true zero point: some subjects may get no items correct at all. Moreover, a difference of one represents a difference of one item recalled across the entire scale. It is certainly valid to say that someone who recalled 12 items recalled twice as many items as someone who recalled only 6 items.

But the number of items recalled is a more complicated case than it appears at first. Consider the following example in which subjects are asked to remember as many items as possible from a list of 10. Assume that (a) there are 5 easy items and 5 difficult items, (b) half of the subjects can recall all the easy items and different numbers of difficult items, while (c) the other half of the subjects are unable to recall any of the difficult items but they do remember different numbers of easy items. Some sample data are shown below.

Subject | Easy Items | Difficult Items | Score
A       | 0 0 1 1 0  | 0 0 0 0 0       | 2
B       | 1 0 1 1 0  | 0 0 0 0 0       | 3
C       | 1 1 1 1 1  | 1 1 0 0 0       | 7
D       | 1 1 1 1 1  | 0 1 1 0 1       | 8

Let's compare (1) the difference between Subject A's score of 2 and Subject B's score of 3 with (2) the difference between Subject C's score of 7 and Subject D's score of 8. The former difference is a difference of one easy item; the latter difference is a difference of one difficult item. Do these two differences necessarily signify the same difference in memory? We are inclined to respond "No" to this question since only a little more memory may be needed to retain the additional easy item whereas a lot more memory may be needed to retain the additional hard item. The general point is that it is often inappropriate to consider psychological measurement scales as either interval or ratio.

Consequences of the level of measurement


Why are we so interested in the type of scale that measures a dependent variable? The crux of the matter is the relationship between the variable's level of measurement and the statistics that can be meaningfully computed with that variable. For example, consider a hypothetical study in which 5 children are asked to choose their favorite colors from blue, red, yellow, green, and purple. The researcher codes the results as follows:
Color  | Code
Blue   | 1
Red    | 2
Green  | 3
Yellow | 4
Purple | 5
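The point this coding example builds toward can be made explicit with a minimal sketch (the five children's picks are invented): arithmetic on nominal codes is meaningless, and only frequency-based statistics such as the mode apply.

```python
from statistics import mean, mode

choices = [1, 2, 2, 4, 5]  # the five children's picks, using the codes above

print(mean(choices))  # 2.8 -- computable, but meaningless: "2.8" is not a color
print(mode(choices))  # 2 (Red) -- the mode is the meaningful summary here
```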


