1.2 Introduction to Quality Control
Concept of quality control : |
|
Quality control describes the directed use of testing to measure the achievement of a specified standard.
|
|
Quality control is a superset of testing, although it is often used synonymously with testing.
|
|
The concept of quality means that the object, the service, the performance – whatever it is – should be “fit for purpose”.
|
|
Quality does not mean being at the top of the range, but it does mean being efficient (so things happen on time and on cost), reliable (whatever the weather, every day of the week) and giving good value for money.
|
|
“The goal of quality control should be to identify process steps that best predict outcomes” - A. Blanton Godfrey
|
|
Link Among Quality, Productivity, and Cost :
|
Take an example.

A ring frame machine produces 1000 cops per day. The production cost per cop is Rs. 15. It is seen that 75% of the cops conform to specifications and 60% of the non-conforming cops can be reworked at an additional expense of Rs. 5 per cop; the remaining 40% of the non-conforming cops are scrapped.

The cost per good cop is then
|
|
|
After implementation of a process control program, it is seen that 80% of the cops conform to specifications and 60% of the non-conforming cops can be reworked at an additional expense of Rs. 5 per cop; the rest is scrapped.

The cost per good cop is then
|
|
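The arithmetic of this example can be verified with a short script. The following sketch (the function name and structure are illustrative, not from the original) computes the cost per good cop before and after the process control program:

```python
def cost_per_good_cop(n_cops, unit_cost, conform_frac, rework_frac, rework_cost):
    """Cost per good cop: total spend divided by the number of usable cops."""
    conforming = n_cops * conform_frac
    nonconforming = n_cops - conforming
    reworked = nonconforming * rework_frac           # salvaged at extra cost
    total_cost = n_cops * unit_cost + reworked * rework_cost
    good_cops = conforming + reworked                # scrapped cops yield nothing
    return total_cost / good_cops

before = cost_per_good_cop(1000, 15, 0.75, 0.60, 5)  # 75% conform: Rs. 17.50
after  = cost_per_good_cop(1000, 15, 0.80, 0.60, 5)  # 80% conform: Rs. ~16.96
print(round(before, 2), round(after, 2))             # → 17.5 16.96
```

A 5-point gain in conformance lowers the cost per good cop from Rs. 17.50 to about Rs. 16.96, which is the link among quality, productivity and cost that the example illustrates.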
Quality Control Methodology :
|
|
Statistical process control
|
|
Statistical product control |
|
Six sigma
|
|
Total quality management (TQM)
|
|
Total quality assurance (TQA) |
|
Note: TQM and TQA are beyond the scope of this course.
|
Statistical Process Control :
|
|
Statistical process control involves measurement and analysis of process variation by using statistical techniques, namely control charts.

It is most often used for manufacturing processes; the intent of statistical process control is to monitor product quality and maintain processes to fixed targets.
|
|
Statistical Product Control :
|
|
Statistical product control involves making decisions on accepting or rejecting a lot (or batch) of product that has already been produced.

It is most often used to evaluate products that are received from outside sources and where it is not possible to implement statistical process control.
|
|
Six Sigma :
|
|
Technical products with many complex components typically have many opportunities for failure or defects to occur. Motorola developed the six-sigma program as a response to the demand for such products.
|
|
The focus of six sigma lies in reducing variability in key product quality characteristics to the level at which failures or defects are extremely unlikely.
|
|
2.1 Random Variable
Random Variable: |
The science and application of statistics deal with quantities that can vary. Such quantities are called variables. Generally, there are two types of variables, depending on the origin and nature of the values that they assume.

A variable whose values are obtained by comparing the variable with a standard measuring scale, such that they can take any number including whole numbers and fractional numbers, is known as a continuous variable.

A variable whose values are obtained simply by counting, such that they are always whole numbers or integers, is known as a discontinuous or discrete variable.
|
Characteristics of Random Variable : |
A random variable is generally characterized by its statistical as well as probability characteristics.
|
The basic statistical characteristics include mean (measure of central tendency of the data) and standard deviation (measure of variation in the data). In order to get more information on the data, the frequency distribution of the data needs to be evaluated.
|
The probability characteristics include probability distribution and its parameters.
|
2.2 Continuous Random Variable
The Continuous Random Variable x : |
Let x be a continuous random variable, x ∈ (xmin, xmax). Let the number of measurements be n. Then x takes the values x1, x2, x3, …, xn. Alternately, we write x takes the values xj, where j = 1, 2, 3, …, n. The measured values of x are different, because the variable x is a random variable. Actually, the number of measurements is limited mainly because of the time and the capacity of the measuring instrument. Here, we consider that the number of measurements is very large and it can be increased without any limitation of time.
|
Statistical Characteristics of x :
|
|
Distribution of x : |
Let us divide the data domain (xmin,xmax) into m number of classes of constant class interval δx as follows
|
|
Let us mark the classes by serial number i=1, 2, …, m. Then, we get
|
|
Histogram of x [1]: |
|
Observation : |
As the number of classes increases, the width of each class decreases. The contours of the histogram retain roughly the same shape, but the steps become smoother until they “diminish” and become “infinitely small”. This is valid if and only if the chosen “higher” number of classes is still very small as compared to the number of measurements, that is, m << n.
|
Statistical Characteristics : |
For a finite (limited) number m of classes, the statistical characteristics of the random variable x are described below. It is known that in a given class, the measured value xj does not differ by more than δx/2 from the class value xi. For simplicity, we consider that all values in a given class are of the same value xi. Then,
|
|
|
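The class-based statistics referred to above (the slide equations are not reproduced here) can be sketched in code. The data, the number of classes m, and all names below are illustrative assumptions: the mean and variance are computed from the class mid-values xi weighted by the class frequencies ni.

```python
import math

# Hypothetical measurements of a continuous variable x
data = [2.1, 2.3, 2.7, 2.9, 3.0, 3.2, 3.4, 3.8, 4.1, 4.4]
m = 4                                    # number of classes (assumed)
x_min, x_max = min(data), max(data)
dx = (x_max - x_min) / m                 # constant class width

# Class mid-values x_i and class frequencies n_i
mids = [x_min + (i + 0.5) * dx for i in range(m)]
freqs = [0] * m
for xj in data:
    i = min(int((xj - x_min) / dx), m - 1)   # last class includes x_max
    freqs[i] += 1

n = sum(freqs)
mean = sum(ni * xi for ni, xi in zip(freqs, mids)) / n
var = sum(ni * (xi - mean) ** 2 for ni, xi in zip(freqs, mids)) / n
sd = math.sqrt(var)
print(freqs, round(mean, 3), round(sd, 3))
```

As the text notes, these class-based values differ from the exact sample statistics by at most an error of order δx/2 per observation, and the error shrinks as the class width decreases.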
Discussion : |
Here, we used the class value xi for all calculations. This value may differ by δx/2 from the real measured value xj. As a result, the statistical characteristics obtained by using the class value are erroneous, and this error decreases as the class width (δx) decreases.
|
Let us now decrease the class width by (a) increasing the number of classes, say, twice and also (b) increasing the number of measurements, say, twice, and repeat this procedure to infinity.

As a result, intuitively, the class width becomes smaller and smaller until it becomes “infinitesimal”. Such a class with infinitely small width is defined as an “elementary class”; its width is denoted by the differential symbol dx, instead of the symbol δx used to denote a higher or finite value. Then,
|
(1)
The contours of the histogram should have roughly similar shape, but
the steps become smoother until they “diminish” and become “infinitely
small”. The contours of the histogram change to a continuous function
called probability density function f(x).
|
|
(3) The area under each elementary class of x is f(x)dx. This product expresses the relative frequency of x in an elementary class of lower limit x and upper limit x+dx.

(4) The area under the probability density curve still remains one. In other words, the integration of all probabilities (“cumulative probability”) from xmin to xmax equals one. It is possible to find out the cumulative probability of x from the following expression:
|
Note: Here we use an integral expression instead of a summation expression. |
Remark: For simplicity, we suggest that the domain of values of x is a finite (closed) interval (xmin, xmax). It can be proved that it is valid even when xmin = -∞ and xmax = ∞.
|
Statistical Characteristics : |
|
|
Probability : |
According to the classical definition of probability, it is the ratio of the number of successful outcomes to the number of all outcomes. If we have n measurements and only ni measurements belong to the i-th class (i = 1, 2, …, m), then the probability that a randomly chosen value belongs to the i-th class is
|
|
We see that probability and relative frequency possess the same value; that is, probability is relative frequency and vice-versa. Relative frequency is used when we would like to characterize a value which is already measured; it is used “ex post”. In contrast, probability is used to explore the future based on past investigation; hence probability is used “ex ante”.
|
The earlier concepts of “relative frequency” and “probability” for a class of certain width are also applicable to an “elementary class”. Thus, we understand the meaning of f(x)dx not only as the relative frequency of x in the elementary class of lower limit x and upper limit x+dx, but also as the “probability of occurrence” (of future measured values) of x in the elementary class.
|
|
2.5 Discrete Random Variable
The Discrete Random Variable x : |
Let x be a discrete random variable, such that it can only take values that are whole numbers or integers. Let the number of observations be n. Then x takes the values x1, x2, x3, …, xn. Alternately, we write x takes the values xj, where j = 1, 2, 3, …, n. The observed values of x are different, because the variable x is a random variable. Actually, the number of observations is limited mainly because of the time and the cost of the sample. Here, we consider that the number of observations is very large and it can be increased without any limitation of time.
|
Statistical Characteristics of x : |
|
Distribution of x : |
Let us divide the data domain (xmin, xmax) into m number of classes, where each class corresponds to one single value. Let us mark the classes by serial number i = 1, 2, …, m. Then, we get
|
|
Histogram of x : |
|
Statistical Characteristics : |
For a finite (limited) number m of classes, the statistical characteristics of the random variable x are described below.
|
Mean: |
Mean of square values:
|
Variance:
|
Standard deviation:
|
3.1 Introduction to Quality
|
Population & Sample |
By population we mean the aggregate or totality of objects or individuals regarding which inferences are to be made.
|
The number of objects or individuals present in the population is known as size of the population.
|
By sample we mean a collection consisting of a part of the objects or individuals of a population which is selected as a basis for making inferences about certain population facts.

The number of objects or individuals present in the sample is known as the size of the sample.

The technique of obtaining a sample is called sampling.

Parameter and Statistic |
A parameter is a population fact which depends upon the values of the individuals comprising the population. For example, mean, variance, etc. associated with a population are known as population parameters. They are constants for a given population.

A statistic is a sample fact which depends upon the values of the individuals comprising the sample. For example, mean, variance, etc. associated with a sample are known as sample statistics. Of course, many samples of a given size can be formed from a given population; accordingly, the statistics will vary from sample to sample. So they are not constants, but are variables.

The difference between the value of a population parameter and the value of the corresponding statistic for a particular sample is known as sampling error.
|
Estimation of Population Parameters |
We can calculate the value of a statistic for a sample, but it is practically impossible to calculate the value of a parameter for a population. Therefore, we often estimate the population parameters based on the sample statistics.
|
|
The two methods of estimation are
|
|
point estimation |
|
interval estimation |
|
Sampling Distribution |
Sampling distribution of a sample statistic is the relative frequency distribution of a large number of determinations of the value of this statistic, each determination being based on a separate sample of the same size and selected independently, but by the same sampling technique, from the same population.
|
Standard Error
|
The standard error of any statistic is the standard deviation of its sampling distribution.
|
Bias
|
If the mean of the sampling distribution of a statistic is equal to the corresponding population parameter, then the statistic is said to be unbiased.

If, on the other hand, the mean of the sampling distribution of a statistic is not equal to the corresponding population parameter, then the statistic is said to be biased.
|
Bias may arise from two sources:
|
|
Technique of sample selection (troublesome, no way to assess its magnitude) |
|
Character of the statistic (less troublesome, possible to find out its magnitude and direction and then make allowance accordingly). |
|
|
3.2 Sampling Technique
:
|
Simple Random Sample |
Assume a sample of a given size is selected from a given population in such a way that all possible samples of this size which could be formed from this population have an equal chance of selection; then such a sample is called a simple random sample.
|
Simple Random Sampling Scheme |
Step 1: |
Assign an identification number to each individual of the population. |
Step 2: |
Take out an individual randomly, using “Random Number Table” |
Step 3: |
Repeat Step 2 until you obtain the desired number of individuals in a sample. |
Note: |
An individual whose random number has already been drawn is not taken again to form this sample. |
|
Random Number Table |
This is a huge collection of random digits such that the ten digits (0–9) not only occur with equal frequency but also are arranged in a random order.
|
|
|
An Assumption |
It is practically impossible to numerically identify each individual of a population, either because of the large size of the population or because of the inaccessibility or current non-existence of some of the individuals. In such situations, some of the available individuals may be used as a sample. When some are used, these should be selected at random from those available. Although such samples do not comply with the definition of a simple random sample, they are often treated as such. That is, they are simply assumed to be random samples of the population involved.
|
Note: This assumption has been widely used in sampling of textile materials.
3.3 Point Estimation of Population Parameters
|
Estimation of Population Mean |
Suppose from a population, we draw a number of samples each containing n random variables x1, x2, …, xn. Then, each sample has a mean x̄ which is a random variable. The mean (expected) value of this variable is

E(x̄) = μ

where μ is the population mean. Thus, the sample mean x̄ is an unbiased estimator of the population mean μ.

The variance of the variable x̄ is

Var(x̄) = σ2/n

where σ2 is the variance of the population. Note that the standard deviation of the variable x̄ is σ/√n, which is known as the standard error of x̄. Clearly, larger samples give more precise estimates of the population mean than do smaller samples.
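The unbiasedness of the sample mean and its standard error σ/√n can be illustrated by simulation. This is a sketch with assumed population parameters (a normal population with μ = 50, σ = 4), not part of the original text:

```python
import random, statistics, math

random.seed(1)
mu, sigma, n = 50.0, 4.0, 25          # assumed population parameters

# Draw many samples of size n; collect each sample's mean
sample_means = [
    statistics.mean(random.gauss(mu, sigma) for _ in range(n))
    for _ in range(20000)
]

est_mean = statistics.mean(sample_means)   # ≈ mu: the sample mean is unbiased
est_se = statistics.stdev(sample_means)    # ≈ sigma / sqrt(n) = 0.8
print(round(est_mean, 2), sigma / math.sqrt(n), round(est_se, 3))
```

The spread of the simulated sample means closely matches σ/√n, confirming that larger n gives a tighter sampling distribution.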
|
Estimation of Population Variance
|
Suppose from a large population of mean μ and variance σ2, we draw a number of samples each containing n random variables x1, x2, …, xn. Let x̄ be the mean of these samples and S2 be the variance of these samples. Clearly, x̄ and S2 are variables. The expected value of the sample variance is

E(S2) = [(n-1)/n]σ2

Since E(S2) ≠ σ2, the estimator S2 is said to be a biased estimator of the population variance.
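That E(S2) falls short of σ2 when S2 is computed with divisor n can be checked by simulation. A sketch under the assumption of a normal population with σ2 = 9 and samples of size n = 5, where E(S2) = (n-1)σ2/n = 7.2:

```python
import random, statistics

random.seed(2)
mu, sigma2, n = 0.0, 9.0, 5           # assumed population: variance sigma^2 = 9

def s2_divisor_n(xs):
    """Sample variance with divisor n (the biased form used in the text)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

draws = [
    s2_divisor_n([random.gauss(mu, 3.0) for _ in range(n)])
    for _ in range(40000)
]
mean_s2 = statistics.mean(draws)
print(round(mean_s2, 2), (n - 1) / n * sigma2)   # both near 7.2, not 9
```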
|
|
|
Estimation of Population Standard Deviation |
|
Suppose from a large population of mean μ and variance σ2, we draw a number of samples each containing n random variables x1, x2, …, xn. Let x̄ be the mean of these samples and S2 be the variance of these samples. Clearly, x̄ and S2 are variables. The expected value of the sample standard deviation is
|
|
|
Estimation of Difference Between Two Population Means |
|
The variance of the difference of the sample means is |
|
|
Estimation of Population Proportion |
Assume a population consists of “good” items and “bad” items. Let the proportion of good items in this population be p. Hence, the proportion of bad items in this population is 1-p. Let us draw n items from this population such that it resembles a series of n independent Bernoulli trials with constant probability p of selection of good items in each trial. Then the probability of selection of x good items in n trials, as given by the binomial distribution, is nCx px(1-p)n-x, where x = 0, 1, …, n. Then, the mean (expected) number of good items in n trials is np and the standard deviation of the number of good items in n trials is √[np(1-p)].
|
Suppose we draw n items from this population to form a sample. Let x be the number of good items in this sample. Hence, the proportion of good items in this sample is p' = x/n. The mean (expected) proportion of good items in this sample is
|
|
Hence, the sample proportion is an unbiased estimator of population proportion.
|
The variance of sample proportion is given by
|
|
3.4 Interval Estimation of Population Parameters
|
Probability Distribution of Sample Means |
If the population from which samples are taken is normally distributed with mean μ and variance σ2, then the mean of samples of size n is also normally distributed with mean μ and variance σ2/n.

If the population from which samples are taken is not normally distributed, but has mean μ and variance σ2, then the mean of samples of size n is normally distributed with mean μ and variance σ2/n when n → ∞ (large sample).
|
|
|
Estimation of Population Mean |
Assume the population distribution is normal, regardless of sample size, or take large samples (n → ∞), regardless of population distribution. The sample mean then follows a normal distribution with mean μ and standard deviation σ/√n.
|
|
|
Popular Confidence Intervals for Population Mean |
Often, the 95 percent confidence intervals of the population mean μ are estimated; they are

x̄ ± 1.96 σ/√n

The 99 percent confidence intervals of the population mean μ are

x̄ ± 2.58 σ/√n
Illustration |
Consider that the theoretical population of yarn strength follows a normal distribution with mean 14.56 cN.tex-1 and standard deviation 1.30 cN.tex-1. Then, for random samples of 450 yarns selected from this population, the probability distribution of sample means follows a normal distribution with mean 14.56 cN.tex-1 and standard deviation 0.0613 cN.tex-1 (1.30/√450 = 1.30/21.21 = 0.0613 cN.tex-1).
|
This distribution is shown in the next slide.
|
In the long run, 68.26 percent of the mean strengths of random samples of 450 yarns selected from this population will involve sampling errors of less than 0.0613 cN.tex-1. Or, the probability of a sample mean being in error by 0.1201 cN.tex-1 (1.96×0.0613) or more is 0.05.
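The numbers in this illustration can be reproduced as follows (a sketch using the z-multipliers 1.96 and 2.58 quoted above; variable names are illustrative):

```python
import math

mu_hat, sigma, n = 14.56, 1.30, 450        # values from the illustration
se = sigma / math.sqrt(n)                  # standard error of the mean ≈ 0.0613

ci95 = (mu_hat - 1.96 * se, mu_hat + 1.96 * se)   # 95% confidence interval
ci99 = (mu_hat - 2.58 * se, mu_hat + 2.58 * se)   # 99% confidence interval
print(round(se, 4), [round(v, 3) for v in ci95], [round(v, 3) for v in ci99])
```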
|
|
Probability Distribution of Sample Means |
Assume that the population from which samples are taken is normally distributed, but mean μ and variance σ2 are unknown. Then, we consider a statistic T1, as defined below, which follows Student's t-distribution with n-1 degrees of freedom.

One can see that as n ≥ 30 the t-distribution practically approaches the normal distribution.
|
Small sample (practically): n < 30
|
Large sample (practically): n ≥ 30
|
Note:
For large sample, one can then find out the confidence interval of
population mean based on normal distribution as discussed earlier.
|
|
Estimation of Population Mean |
|
Illustration |
Consider a population of cotton fibers with mean length 25 mm and standard deviation of length 0.68 mm. Then, for random samples of 10 fibers selected from this population, the probability distribution of sample means follows a t-distribution with mean length 25 mm and standard deviation of length 0.23 mm, and degrees of freedom 9.
|
|
|
The probability of the mean length of 10 fibers selected from the population being in error by 0.52 mm (2.262×0.23) or more is 0.05.
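The quantities in this illustration can be reproduced as follows. This is a sketch; the critical value t(0.025, 9 d.f.) = 2.262 is taken from tables, and the text's 0.52 mm arises from rounding the standard error to 0.23 mm before multiplying:

```python
import math

mean_len, s, n = 25.0, 0.68, 10        # values from the illustration
se = s / math.sqrt(n - 1)              # ≈ 0.23 mm, as quoted in the text
t_crit = 2.262                         # t(0.025, 9 d.f.), from tables

bound = t_crit * se                    # ≈ 0.51 mm (text: 0.52, rounding se first)
print(round(se, 2), round(bound, 2))
```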
|
Probability Distribution of Difference Between Two Sample Means |
Let x1, x2, …, xn1 and y1, y2, …, yn2 be two independent sample observations from two normal populations with means μx, μy and variances σx2, σy2, respectively.

Or, let x1, x2, …, xn1 and y1, y2, …, yn2 be two independent large sample observations from two populations with means μx, μy and variances σx2, σy2, respectively.
|
Then the variable u
|
|
is a standard normal variable with mean zero and variance one.
|
Estimation of Difference Between Two Populations |
|
Probability Distribution of Difference Between Two Sample Means |
Or, let x1, x2, …, xn1 and y1, y2, …, yn2 be two independent small sample observations from two populations with means μx, μy and variances σx2, σy2, respectively. Then the variable T2
|
|
Estimation of Difference Between Two Population Means |
|
Probability Distribution of Sample Variances |
Suppose we draw a number of samples, each containing n random variables x1, x2, …, xn, from a population that is normally distributed with mean μ and variance σ2. Then the variable
|
|
|
Estimation of Population Variance |
|
Probability Distribution of Sample Variances |
|
Estimation of Population Variance |
|
Probability Distribution of Sample Proportions
|
Take large samples (n → ∞); then we know that the binomial distribution approaches the normal distribution. Then the variable V is a standard normal variable with mean zero and variance one.
|
|
where p'=x/n, x is the number of successes in the observed sample, n being the sample size.
|
Note: Earlier we have shown that the standard deviation of p' is √[p(1-p)/n]. But, when p is not known, p' can be taken as an unbiased estimator of p, and then the standard deviation of p' can be written as √[p'(1-p')/n].
|
Estimation of Population Proportion |
|
Illustration |
Consider a population consisting of “good” garments and “bad” garments. A random sample of 100 garments selected from this population showed 20 garments were bad; hence the proportion of good garments in this sample was p' = 0.80. Then, for random samples of 100 garments taken from this population, the probability distribution of p' follows a normal distribution with mean 0.8 and standard deviation 0.04 (= √[(0.8×0.2)/100]).
|
|
In the long run, 68.26 percent of the proportions of random samples of 100 garments selected from this population will involve sampling errors of less than 0.04. Or, the probability of a sample proportion being in error by 0.0784 (1.96×0.04) or more is 0.05.
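The proportions in this illustration can be checked with a few lines (a sketch of the same arithmetic):

```python
import math

n = 100
bad = 20
p_hat = (n - bad) / n                       # sample proportion of good garments
se = math.sqrt(p_hat * (1 - p_hat) / n)     # standard error of p' = 0.04

err95 = 1.96 * se                           # 95% error bound ≈ 0.0784
print(p_hat, round(se, 2), round(err95, 4))
```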
3.5 Testing of Hypothesis
|
Need for Testing |
Testing of a statistical hypothesis is a process for drawing some inference about the value of a population parameter from the information obtained in a sample selected from the population.
|
Types of Test |
1. |
One-tailed test |
2. |
Two-tailed test |
|
|
Illustration |
Sometimes we may be interested only in the extreme values to one side of the statistic, i.e., the so-called one “tail” of the distribution, as for example, when we are testing the hypothesis that one process is better than the other. Such tests are called one-tailed tests or one-sided tests. In such cases, the critical region considers one side of the distribution, with area equal to the level of significance.

Sometimes we may be interested in the extreme values on both sides of the statistic, i.e., the so-called two “tails” of the distribution, as for example, when we are testing the hypothesis that one process is not the same as the other. Such tests are called two-tailed tests or two-sided tests. In such cases, the critical region considers both sides of the distribution, with the total area of both sides equal to the level of significance.
|
Testing Procedure |
Step 1 : |
State the statistical hypothesis. |
Step 2 : |
Select the level of significance to be used. |
Step 3 : |
Specify the critical region to be used. |
Step 4 : |
Find out the value of the test statistic. |
Step 5 : |
Take decision. |
|
Statement of Hypothesis |
Suppose we are given a sample from which a certain statistic such as the mean is calculated. We assume that this sample is drawn from a population for which the corresponding parameter can tentatively take a specified value. We call this the null hypothesis. This is usually denoted by H. This null hypothesis will be tested for possible rejection under the assumption that the null hypothesis is true.

The alternative hypothesis is complementary to the null hypothesis. This is usually denoted by HA.

For example, if H: μ = μ0 is rejected, then HA: μ ≠ μ0, μ < μ0, or μ > μ0.
|
Selection of Level of Significance |
The level of significance, usually denoted by α, is stated in terms of some small probability value such as 0.10 (one in ten), 0.05 (one in twenty), 0.01 (one in a hundred) or 0.001 (one in a thousand), which is equal to the probability that the test statistic falls in the critical region, thus indicating falsity of H.
|
Specification of Critical Region
|
A critical region is a portion of the scale of possible values of the statistic so chosen that if the particular obtained value of the statistic falls within it, rejection of the hypothesis is indicated.

The phrase “test statistic” is simply used here to refer to the statistic employed in effecting the test of hypothesis.
|
The Decision
|
In this step, we refer the value of the test statistic as obtained in Step 4 to the critical region adopted. If the value falls in this region, reject the hypothesis. Otherwise, retain or accept the hypothesis as a tenable (not disproved) possibility.
|
Illustration: A Problem |
A fiber purchaser placed an order to a fiber producer for a large quantity of basalt fibers of 1.4 GPa breaking strength. Upon delivery, the fiber purchaser found that the basalt fibers, “on the whole”, were weaker and asked the fiber producer for replacement of the basalt fibers. The fiber producer, however, replied that the fibers produced met the specification of the fiber purchaser; hence, no replacement would be done. The matter went to court and a technical advisor was appointed to find out the truth. The advisor conducted a statistical test.
|
Illustration: The Test |
Step 1 :
|
Null hypothesis H: μ [GPa] = 1.4
Alternative hypothesis HA: μ [GPa] < 1.4
|
where μ is the population mean breaking strength of basalt fibers as ordered by the fiber purchaser.
|
Step 2 :
|
The level of significance was chosen as α=0.01.
|
Step 3:
|
The advisor wanted to know the population standard deviation σ of strength. So he took a random sample of 65 fibers (n = 65) and observed that the sample standard deviation s of breaking strength was 0.80 GPa. Then, he estimated the population standard deviation of strength as follows:
|
Illustration: The Test Continued |
Then, the critical region for mean breaking strength is found as: |

x̄ [GPa] < 1.4 - 2.33 × 0.1 = 1.1670
Step 4:
|
The advisor observed that the sample mean breaking strength was 1.12 GPa.
|
Step 5:
|
The advisor referred the observed value x̄ [GPa] = 1.12 to the critical region he established and noted that it fell in this region. Hence, he rejected the null hypothesis and thus accepted the alternative hypothesis μ [GPa] < 1.4.
|
|
Errors Associated with Testing of Hypothesis |
Let us analyze the following situations:

Possibilities | Accept H | Reject H
True H | Desired correct action | Undesired erroneous action
False H | Undesired erroneous action | Desired correct action
|
Type I Error |
Rejecting H when it is true. |
Type II Error |
Accepting H when it is false. |
|
In situations where a Type I error is possible, the level of significance α represents the probability of such an error. The higher the level of significance, the higher the probability of a Type I error.
|
Type I Error
|
α=0 means complete elimination of occurrence of Type I error.
|
Of course, it implies that no critical region exists; hence H is retained always. In this case, in fact, there is no need to analyze or even collect any data at all. Obviously, while such a procedure would completely eliminate the possibility of making a Type I error, it does not provide a guarantee against error, for every time that the H stated is false, a Type II error would necessarily occur. Similarly, by letting α=1 it would be possible to eliminate entirely the occurrence of Type II error at the cost of committing a Type I error for every true H tested.
|
Thus, the choice of a level of significance represents a compromise aimed at controlling the two types of error that may occur in testing statistical hypotheses.
|
Type II Error
|
We see that for a given choice of α, there is always a probability for a Type II error. Let us denote this probability by β. This depends upon:
|
(1)
|
the value of α chosen |
(2)
|
the location of critical region |
(3)
|
the variability of the statistic |
(4)
|
the amount by which the actual population parameter differs from the hypothesized value of it, stated in H. |
|
Because in any real situation the actual value of a population parameter can never be known, the degree of control exercised by a given statistical test on Type II error can never be determined.
|
Illustration: The beta value
|
Let us assume that the actual population mean breaking strength of the basalt fibers supplied by the fiber producer was 1.0 GPa. In this case, a Type II error will occur when x̄ [GPa] > 1.1670. The probability β of this can be found as follows:
|
|
|
Hence, the probability of Type II error is 0.0475. That is, if in this situation this test were to be repeated indefinitely, 4.75 percent of the decisions would be of Type II error.
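The β value above can be reproduced with the standard normal CDF. A sketch, with math.erf used in place of normal tables:

```python
import math

def phi(z):
    """Standard normal cumulative distribution function via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

critical, se = 1.1670, 0.1      # critical value and standard error from the test
mu_actual = 1.0                 # assumed true mean strength, GPa

z = (critical - mu_actual) / se        # standardized distance = 1.67
beta = 1.0 - phi(z)                    # P(retain H | H false) ≈ 0.0475
print(round(z, 2), round(beta, 4))
```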
|
Illustration: Effect of alpha on beta |
|
α | β
0.001 | 0.1814
0.005 | 0.0778
0.010 | 0.0475
0.050 | 0.0094
0.100 | 0.0033

As α increases, β decreases.
|
Illustration:Effect of Location of Critical Region on Beta Value |
|
Illustration: Effect of Sample Variability on Critical Region on Beta Value |
|
Illustration: Effect of Difference Between Actual & Hypothesized Values of Population Parameter |
|
Power of A Statistical Test |
Suppose that the actual value of a population parameter differs by some particular amount from the value hypothesized for it in H, such that the rejection of H is the desired correct outcome. The probability that this outcome will be reached is the probability that the test statistic falls in the critical region. Let us refer to this probability as the power (P) of the statistical test. Since β represents the probability that the test statistic does not fall in the critical region, the probability P that the test statistic falls in the critical region is P = 1-β. Hence, the power of a test is the probability that it will detect falsity in the hypothesis.
|
Power Curve
|
The power curve of a test of a statistical hypothesis, H, is the plot of P-values which correspond to all values that are possible alternatives to H.
|
In other words, power curve may be used to read the probability of rejecting H for any given possible alternative value of μ .
|
Power Curve: Illustration
|
Let us draw the power curve for the previous example. There exists an infinite collection of possible alternative values of μ (μ [GPa] < 1.4) to the hypothesized value 1.4, and accordingly an infinite collection of P-values. Some of the possible alternative values and the corresponding P-values are shown here.
|
μ [GPa] | u [-] | β [-] | P [-] = 1-β
0.7 | 4.67 | 0 | 1
0.8 | 3.67 | 0.0001 | 0.9999
0.9 | 2.67 | 0.0038 | 0.9962
1.0 | 1.67 | 0.0475 | 0.9525
1.1 | 0.67 | 0.2514 | 0.7486
1.2 | -0.33 | 0.6293 | 0.3707
1.3 | -1.33 | 0.9082 | 0.0918
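The β and P columns of this table can be regenerated from the critical region of the worked example. A sketch, with the standard error taken as 0.1 as in the text:

```python
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

critical, se = 1.1670, 0.1     # critical value and standard error of the test

def power(mu):
    """P(reject H) = P(xbar < critical) when the true mean is mu."""
    return phi((critical - mu) / se)

for mu in (0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3):
    print(mu, round(1 - power(mu), 4), round(power(mu), 4))   # beta, P
```

The printed β and P values agree with the tabulated ones to four decimal places.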
|
Power Curve: Illustration |
|
Inadequacy of Statistical Hypothesis Test |
We have seen that the decision of a statistical hypothesis test is based on whether the null hypothesis is rejected or is not rejected at a specified level of significance (α-value). Often, this decision is inadequate because it gives the decision maker no idea about whether the computed value of the test statistic is just barely in the rejection region or whether it is very far into this region. Also, some decision makers might be uncomfortable with the risks implied by a specified level of significance, say α=0.05. To avoid these difficulties, the P-value approach has been widely used.
|
P-value Approach [2]
|
The P-value is the probability that the test statistic takes on a value that is at least as extreme as the observed (computed) value of the test statistic when the null hypothesis H is true. In other words, the P-value is the smallest level of significance that would lead to rejection of the null hypothesis with the given data.
|
|
|
|
Testing Procedure |
Step 1 : |
State the statistical hypothesis. |
Step 2 : |
Find out the value of the test statistic. |
Step 3 : |
Find out the P-value. |
Step 4 : |
Select the level of significance to be used. |
Step 5 : |
Take decision. |
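The five steps above can be sketched for a two-sided z-test with known σ. The numbers here (μ0 = 50, σ = 2, n = 16, x̄ = 51.3) are hypothetical, chosen only to illustrate the procedure.

```python
from math import erf, sqrt

def Phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Step 1: hypotheses H: mu = mu0 versus HA: mu != mu0 (illustrative values)
mu0, sigma, n, xbar = 50.0, 2.0, 16, 51.3

# Step 2: compute the test statistic
z0 = (xbar - mu0) / (sigma / sqrt(n))

# Step 3: compute the two-sided P-value
p_value = 2.0 * (1.0 - Phi(abs(z0)))

# Steps 4-5: choose a significance level and take the decision
alpha = 0.05
decision = "reject H" if p_value < alpha else "do not reject H"
print(f"z0 = {z0:.2f}, P-value = {p_value:.4f}, decision: {decision}")
```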
|
Illustration: The Test |
|
4. Shewhart Control Charts |
4.1 Introduction :
|
Why Control Charts? [1]
|
|
Basis of Control Charts
|
The basis of control charts is to check whether the variation in the magnitude of a given characteristic of a manufactured product arises from random variation or from assignable variation.
|
Random variation: Natural variation or allowable variation, small magnitude
|
Assignable variation: Non-random variation or preventable variation, relatively high magnitude.
|
If the variation arises from random variation alone, the process is said to be under control. But if the variation arises from assignable variation, the process is said to be out of control.
|
Types of Control Charts
|
4.2 Basics of Shewhart Control Charts :
Major Parts of Shewhart Control Chart |
|
Central Line (CL): This indicates the desired standard or the level of the process.
|
Upper Control Limit (UCL): This indicates the upper limit of tolerance.
|
Lower Control Limit (LCL): This indicates the lower limit of tolerance.
|
If m is the underlying statistic with E(m) = μm and Var(m) = σm², then |
CL = μm |
UCL = μm + 3σm |
LCL = μm - 3σm |
Why 3σ? |
Let us assume that the probability distribution of the sample statistic m is (or tends to be) normal with mean μm and standard deviation σm. Then, |
|
This means the probability that a random value of m falls in-between the 3σ limits is 0.9973, which is very high. On the other hand, the probability that a random value of m falls outside the 3σ limits is 0.0027, which is very low. When the values of m fall in-between the 3σ limits, the variation is attributed to chance causes and the process is considered to be statistically controlled. But when one or more values of m fall outside the 3σ limits, the variation is attributed to assignable causes and the process is said to be out of statistical control. |
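The 0.9973 and 0.0027 figures follow directly from the normal distribution, and can be verified with a few lines:

```python
from math import erf, sqrt

def Phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Probability that a normally distributed statistic m falls inside the
# 3-sigma limits, and the complementary out-of-control probability.
p_inside = Phi(3.0) - Phi(-3.0)
p_outside = 1.0 - p_inside
print(f"inside 3-sigma : {p_inside:.4f}")   # 0.9973
print(f"outside 3-sigma: {p_outside:.4f}")  # 0.0027
```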
Analysis of Control Chart: Process Out of Control |
The occurrence of one or more of the following incidents indicates that the process is out of control (presence of assignable variation). |
|
Analysis of Control Chart: Process Out of Control… |
|
Note: The 2σ limits are sometimes called warning limits; the 3σ limits are then called action limits. |
|
Analysis of Control Chart: Process Under Control |
When none of the following incidents occurs, the process is said to be under control (absence of assignable variation). |
(1) |
A point falls outside any of the control limits. |
(2) |
Eight consecutive points fall on one side of the central line.
|
(3) |
Two out of three consecutive points fall beyond 2σ limits. |
(4) |
Four out of five consecutive points fall beyond 1σ limits.
|
(5) |
Presence of upward or downward trend
|
(6) |
Presence of cyclic trend |
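Rules (1) to (4) above can be checked mechanically; the trend and cyclic patterns in (5) and (6) are usually judged visually, so they are omitted here. The sketch below follows the usual Western Electric-style formulation of the rules, with each plotted point expressed as its deviation from the central line in σ units.

```python
def out_of_control(z):
    """Scan points (given as deviations from the centre line in sigma
    units) against out-of-control rules (1)-(4); return any signals."""
    signals = []
    for i, v in enumerate(z):
        if abs(v) > 3:                                        # rule (1)
            signals.append((i, "beyond 3-sigma limit"))
        if i >= 7 and all(x > 0 for x in z[i-7:i+1]):         # rule (2)
            signals.append((i, "8 consecutive above CL"))
        if i >= 7 and all(x < 0 for x in z[i-7:i+1]):
            signals.append((i, "8 consecutive below CL"))
        if i >= 2 and sum(abs(x) > 2 for x in z[i-2:i+1]) >= 2:  # rule (3)
            signals.append((i, "2 of 3 beyond 2-sigma"))
        if i >= 4 and sum(abs(x) > 1 for x in z[i-4:i+1]) >= 4:  # rule (4)
            signals.append((i, "4 of 5 beyond 1-sigma"))
    return signals

# A single point beyond 3-sigma triggers rule (1):
print(out_of_control([0.2, -0.5, 3.5]))
```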
|
4.3 Shewhart Control Charts for Variables :
The Mean Chart (x̄-Chart) |
Let xij, j=1,2,.....,n be the measurements on the ith sample (i=1,2,…,k). The mean x̄i, range Ri, and standard deviation si for the ith sample are given by |
|
Then the mean of sample means, the mean of sample ranges, and the mean of sample standard deviations are given by |
|
Let us now decide the control limits for x̄i. |
|
|
The Range Chart (R-Chart) |
Let xij, j=1,2,.....,n be the measurements on ith sample (i=1,2,…,k). The range Ri for ith sample is given by |
|
Then the mean of sample ranges is given by |
|
Let us now decide the control limits for Ri. |
|
|
The Standard Deviation Chart (s-Chart) |
Let xij, j=1,2,.....,n be the measurements on the ith sample (i=1,2,…,k). The standard deviation si for the ith sample is given by |
|
Then the mean of sample standard deviations is given by |
|
Let us now decide the control limits for si |
|
|
Table |
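The x̄- and R-chart limits can be sketched from the standard tabulated chart constants. The constants A2, D3, and D4 below are the usual values for samples of size n = 5; the sample data in the usage example are hypothetical, not taken from the course illustration.

```python
# Control limits for x-bar and R charts from k samples of size n,
# using the standard chart constants (values shown here for n = 5).
A2, D3, D4 = 0.577, 0.0, 2.114   # tabulated constants for n = 5

def xbar_r_limits(samples):
    """Return (LCL, CL, UCL) for the x-bar chart and the R chart."""
    k = len(samples)
    xbars = [sum(s) / len(s) for s in samples]
    ranges = [max(s) - min(s) for s in samples]
    xbarbar = sum(xbars) / k     # grand mean -> CL of the x-bar chart
    rbar = sum(ranges) / k       # mean range -> CL of the R chart
    return {
        "xbar": (xbarbar - A2 * rbar, xbarbar, xbarbar + A2 * rbar),
        "R": (D3 * rbar, rbar, D4 * rbar),
    }

# Hypothetical samples of size 5:
limits = xbar_r_limits([[1, 2, 3, 4, 5], [2, 3, 4, 5, 6]])
print(limits)
```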
|
Illustration |
|
Illustration (x̄-chart) |
|
Illustration (R-chart) |
|
Illustration (s-chart) |
|
Illustration (Overall Conclusion) |
Although the process
variability is in control, the process cannot be regarded to be in
statistical control since the process average is out of control.
|
4.4 Shewhart Control Charts for Attributes :
Control Chart for Fraction Defective (p-Chart) |
The
fraction defective is defined as the ratio of the number of defectives
in a population to the total number of items in the population.
|
Suppose the production process is operating in a stable manner, such that the probability that any item produced will not conform to specifications is p, and that successive items produced are independent. Then each item produced is a realization of a Bernoulli random variable with parameter p. If a random sample of n items of product is selected, and if D is the number of items that are defective, then D has a binomial distribution with parameters n and p; that is, P{D=x} = nCx p^x (1-p)^(n-x), x = 0,1,...,n. The mean and variance of the random variable D are np and np(1-p), respectively.
|
The sample fraction defective is defined as the ratio of the number of defective items in the sample to the sample size n; that is, p' = D/n. The distribution of the random variable p' can be obtained from the binomial distribution. The mean and variance of p' are p and p(1-p)/n, respectively.
|
|
When the mean fraction of defectives p of the population is not known.
|
Let us select m samples, each of size n. If there are Di defective items in the ith sample, then the fraction defective in the ith sample is p'i = Di/n, i = 1,2,....,m. The average of these individual sample fraction defectives is
|
|
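The p-chart limits p̄ ± 3√(p̄(1-p̄)/n) can be sketched as follows. The defective counts in the usage example are hypothetical (the actual dataset from the illustration is not reproduced in this section); a negative lower limit is set to zero, as is conventional.

```python
from math import sqrt

def p_chart_limits(defectives, n):
    """3-sigma p-chart limits from defective counts in m samples of size n."""
    m = len(defectives)
    pbar = sum(defectives) / (m * n)       # average fraction defective
    s = sqrt(pbar * (1.0 - pbar) / n)      # standard deviation of p'
    lcl = max(0.0, pbar - 3.0 * s)         # a negative LCL is set to 0
    return lcl, pbar, pbar + 3.0 * s

# Hypothetical counts of defectives in four samples of size 180:
lcl, cl, ucl = p_chart_limits([8, 5, 11, 7], 180)
print(f"LCL = {lcl:.4f}, CL = {cl:.4f}, UCL = {ucl:.4f}")
```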
Control Chart for Number of Defectives (np-Chart) |
It is also possible to base a control chart on the number of defectives rather than the fraction defectives. |
|
Illustration [2]
|
The following refers to the number of defective knitwears in samples of size 180.
|
|
|
Control Chart for Defects (c-Chart) |
Consider
the occurrence of defects in an inspection of product(s). Suppose that
defects occur in this inspection according to Poisson distribution; that
is
|
|
where x is the number of defects and c is both the mean and the variance of the Poisson distribution.
|
|
When the mean number of defects c in the population is not known, let us select n samples. If there are ci defects in the ith sample, then the average of these defect counts over the n samples is
|
|
Note: If this calculation yields a negative value of LCL then set LCL=0. |
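Since the Poisson mean and variance are both c, the c-chart limits are c̄ ± 3√c̄, with a negative LCL set to zero as noted above. A minimal sketch (the defect counts in the usage example are hypothetical):

```python
from math import sqrt

def c_chart_limits(counts):
    """3-sigma c-chart limits from defect counts in the inspected samples."""
    cbar = sum(counts) / len(counts)          # mean number of defects
    lcl = max(0.0, cbar - 3.0 * sqrt(cbar))   # a negative LCL is set to 0
    return lcl, cbar, cbar + 3.0 * sqrt(cbar)

# Hypothetical numbers of holes (defects) found in four knitwear samples:
lcl, cl, ucl = c_chart_limits([2, 3, 4, 3])
print(f"LCL = {lcl:.3f}, CL = {cl:.3f}, UCL = {ucl:.3f}")
```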
Illustration [2] |
The following dataset refers to the number of holes (defects) in knitwears.
|
|
|
5.1 Introduction :
Process Capability Analysis |
When
the process is operating under control, we are often required to obtain
some information about the performance or capability of the process.
|
|
Process
capability refers to the uniformity of the process. The variability in
the process is a measure of uniformity of the output. There are two ways
to think about this variability.
|
(1) |
Natural or inherent variability at a specified time, |
(2) |
Variability over time. |
|
Let us investigate and assess both aspects of process capability.
|
Natural Tolerance Limits
|
The
six-sigma spread in the distribution of product quality characteristic
is customarily taken as a measure of process capability. Then the upper
and lower natural tolerance limits are
|
Upper natural tolerance limit = μ + 3σ
|
Lower natural tolerance limit = μ - 3σ
|
Under the assumption of a normal distribution, 99.73% of the process output falls inside the natural tolerance limits; that is, 0.27% (2700 parts per million) falls outside the natural tolerance limits.
|
|
5.2 Techniques for Process Capability Analysis
:
Techniques for Process Capability Analysis |
|
Histogram |
|
Probability Plot |
|
Control Charts |
|
Histogram |
It gives an immediate visual impression of process performance. It may also immediately show the reason for poor performance.
|
|
Example: Yarn Strength (cN.tex-1) Dataset
|
|
Frequency Distribution
|
|
Histogram |
|
Probability Plot |
A probability plot can determine the shape, center, and spread of the distribution. It often produces reasonable results for moderately small samples (for which the histogram will not).
|
Generally, a
probability plot is a graph of the ordered data (ascending order) versus
the sample cumulative frequency on special paper with a vertical scale
chosen so that the cumulative frequency distribution of the assumed type
(say normal distribution) is a straight line.
|
The procedure to obtain a probability plot is as follows.
|
(1) |
The sample data x1,x2,......,xn is arranged as x(1), x(2), ...., x(n), where x(1) is the smallest observation, x(2) is the second smallest observation, and so forth, with x(n) being the largest observation. |
(2) |
The ordered observations x(j) are then plotted against their observed cumulative frequency (j-0.5)/n on the appropriate probability paper. |
(3) |
If the hypothesized distribution adequately
describes the data, the plotted points will fall approximately along a
straight line. |
|
Example: Yarn Strength (cN.tex-1) Dataset |
|
|
5.3 Measures of Process Capability Analysis
:
Measure of Process Capability: Cp |
|
Illustration
|
|
Measure of Process Capability: Cpu and Cpl
|
The earlier expression of Cp assumes that the process has both upper and lower specification limits. However, many practical situations involve only one specification limit. In that case, the one-sided Cp is defined by
|
|
Illustration
|
|
Process Capability Ratio Versus Process Fallout [1]
|
Assumptions:
|
(1) |
The quality characteristic is normally distributed.
|
(2) |
The process is in statistical control. |
(3) |
The process mean is centered between USL and LSL.
|
|
|
Measure of Process Capability: Cpk
|
We observed that Cp measures the capability of a centered process. But all processes are not necessarily centered at the nominal dimension; processes may also run off-center, so the actual capability of a non-centered process will be less than that indicated by Cp. When the process is running off-center, the capability of the process is measured by the following ratio
|
|
Interpretations
|
(1) |
When Cpk=Cp then the process is centered at the midpoint of the specifications.
|
(2) |
When Cpk < Cp then the process is running off center.
|
(3) |
When Cpk=0, the process mean is exactly equal to one of the specification limits.
|
(4) |
When Cpk < 0, the process mean lies outside the specification limit. |
(5) |
When Cpk < -1, the entire process lies outside the specification limits. |
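The definitions Cp = (USL - LSL)/6σ and Cpk = min(Cpu, Cpl) can be sketched directly; the specification limits, mean, and σ in the example below are hypothetical, chosen to show an off-center process (Cpk < Cp).

```python
def cp(usl, lsl, sigma):
    """Process capability ratio for a centered process."""
    return (usl - lsl) / (6.0 * sigma)

def cpk(usl, lsl, mu, sigma):
    """Capability of a possibly off-center process: min(Cpu, Cpl)."""
    return min((usl - mu) / (3.0 * sigma), (mu - lsl) / (3.0 * sigma))

# Hypothetical example: USL = 56, LSL = 44, mu = 52, sigma = 1.5
print(f"Cp  = {cp(56, 44, 1.5):.3f}")        # 1.333
print(f"Cpk = {cpk(56, 44, 52, 1.5):.3f}")   # 0.889 < Cp: running off-center
```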
|
Illustration
|
|
Inadequacy of Cpk
|
|
Measure of Process Capability: Cpm
|
|
Measure of Process Capability: Cpmk
|
|
Illustration
|
Take the example of process A and process B. Here T=50 cN. Then,
|
|
Note to Non-normal Process Output
|
An important assumption underlying the earlier expressions and interpretations of the process capability ratio is that the process output follows a normal distribution. If the underlying distribution is non-normal, then
|
1)
Use suitable transformation to see if the data can be reasonably
regarded as taken from a population following normal distribution.
|
2) For non-normal data, find out the standard capability index
|
|
3) For non-normal data, use quantile based process capability ratio
|
|
5.4 Inferential Properties of Process Capability Ratios
:
Confidence Interval on Cp |
|
Confidence Interval on Cpk |
|
Example |
|
Test of Hypothesis about Cp |
Suppliers are often required to demonstrate process capability as part of a contractual agreement. It is then necessary that Cp exceeds a particular target value, say Cp0. The statements of the hypotheses are then formulated as follows.
|
H: Cp=Cpo (The process is not capable.)
|
HA: Cp>Cpo (The process is capable.)
|
|
The Cp(high) is
defined as a process capability that is accepted with probability 1-α
and Cp(low) is defined as a process capability that is likely to be
rejected with probability 1-β.
|
|
Example |
A
fabric producer has instructed a yarn supplier that, in order to
qualify for business with his company, the supplier must demonstrate
that his process capability exceeds Cp=1.33. Thus, the supplier is
interested in establishing a procedure to test the hypothesis
|
H: Cp=1.33
|
HA: Cp>1.33
|
The
supplier wants to be sure that if the process capability is below 1.33
there will be a high probability of detecting this (say, 0.90), whereas
if the process capability exceeds 1.66 there will be a high probability
of judging the process capable (again, say 0.90).
|
Then, Cp(low)=1.33, Cp(high)=1.66, and α=β=0.10.
|
Let us first find out the sample size n and the critical value C.
|
|
Then, from table, we get, n=70 and
|
|
To
demonstrate capability, the supplier must take a sample of n=70 and the
sample process capability ratio Cp must exceed C=1.46.
|
Note to Practical Application |
|
6. Non-Shewhart Control Charts |
6.1 Introduction :
|
Non-Shewhart Control Charts |
|
Cumulative sum control chart (Cusum control chart)
|
|
Moving average control chart (MA control chart)
|
|
Exponentially weighted moving average control chart (EWMA control chart)
|
|
6.2 Cusum Control Chart
:
Overview [1, 2] |
The
Shewhart control charts are relatively insensitive to the small shifts
in the process, say in the order of about 1.5 σ or less. Then, a very
effective alternative is an advanced control chart: Cumulative sum
(CUSUM) control chart.
|
Illustration: Yarn Strength Data
|
Consider the yarn strength (cN.tex-1) data as shown here. The first twenty of these observations were taken from a normal distribution with mean μ=10 cN.tex-1 and standard deviation σ=1cN.tex-1. The last ten observations were taken from a normal distribution with mean μ=11 cN.tex-1 and standard deviation σ=1cN.tex-1. The observations are plotted on a Basic Control Chart as shown in the next slide.
|
|
Illustration: Basic Control Chart
|
|
CUSUM: What is it?
|
The cumulative sum (CUSUM) of observations is defined as
|
|
When the process remains in control with its mean at the target value μ, the cumulative sum is a random walk with mean zero.
|
When the mean shifts upward to a value μ1 > μ, an upward or positive drift develops in the cumulative sum.
|
When the mean shifts downward to a value μ1 < μ, a downward or negative drift develops in the CUSUM.
|
Illustration: CUSUM
|
|
Tabular CUSUM
|
The tabular CUSUM works by accumulating deviations from μ (the target value) that are above the target with one statistic C+, and deviations that are below the target with another statistic C-. These statistics are called the upper CUSUM and the lower CUSUM, respectively.
|
|
If the shift δ in the process mean value is expressed as
|
|
where μ1 denotes the new process mean value, and μ and σ indicate the old process mean value and the old process standard deviation, respectively. Then K is one-half of the magnitude of the shift.
|
|
If either Ci+ or Ci- exceeds the decision interval H, the process is considered to be out of control. A reasonable value for H is five times the process standard deviation, H = 5σ.
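The tabular CUSUM recursion can be sketched as follows, with K = 0.5σ (one-half of a one-σ shift to be detected) and H = 5σ as suggested above. The usage example reuses the structure of the yarn strength illustration: 20 in-control points at the target followed by 10 points shifted upward.

```python
def tabular_cusum(x, mu0, sigma, k=0.5, h=5.0):
    """Tabular CUSUM: return indices where C+ or C- exceeds H = h*sigma.
    K = k*sigma is the reference (slack) value, half the shift to detect."""
    K, H = k * sigma, h * sigma
    c_plus = c_minus = 0.0
    signals = []
    for i, xi in enumerate(x):
        c_plus = max(0.0, xi - (mu0 + K) + c_plus)    # upper CUSUM
        c_minus = max(0.0, (mu0 - K) - xi + c_minus)  # lower CUSUM
        if c_plus > H or c_minus > H:
            signals.append(i)
    return signals

# 20 on-target observations, then an upward shift of 1.5 units:
data = [10.0] * 20 + [11.5] * 10
print(tabular_cusum(data, mu0=10.0, sigma=1.0))  # first signal at index 25
```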
|
Illustration: Tabular CUSUM (Missing Eqn )
|
|
Illustration: Tabular CUSUM (Missing Eqn)
|
|
Illustration: CUSUM Status Chart
|
|
Concluding Remarks
|
|
|
|
6.3 MA Control Chart
:
Moving Average: What is it?
|
|
Control Limits
|
If
μ denotes the target value of the mean used as the center line of the
control chart, then the three-sigma control limits for Mi are
|
|
The control procedure consists of calculating the new moving average Mi as each observation xi becomes available, plotting Mi on a control chart with the upper and lower limits given earlier, and concluding that the process is out of control if Mi exceeds the control limits. In general, the magnitude of the shift of interest and w are inversely related: smaller shifts are guarded against more effectively by longer-span moving averages, at the expense of quick response to large shifts.
|
Illustration
|
The
observations xi of strength of a cotton carded rotor yarn for the
periods 1≤i≤30 are shown in the table. Let us set-up a moving average
control chart of span 5 at time i. The targeted mean yarn strength is
4.5 cN and the standard deviation of yarn strength is 0.5 cN.
|
Data
|
|
|
Calculations
|
The statistic Mi plotted on the moving average control chart is computed for periods i ≥ 5. For time periods i < 5, the average of the observations for periods 1 to i is plotted instead. These values are also shown in the table. |
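The moving averages and their limits, including the partial averages for the start-up periods i < w, can be sketched as below, with μ = 4.5 cN and σ = 0.5 cN as in the illustration (the observation values themselves are not reproduced here).

```python
from math import sqrt

def ma_chart(x, mu, sigma, w=5):
    """Moving averages of span w with 3-sigma limits; for i < w the
    average and limits use only the i+1 observations available so far."""
    rows = []
    for i in range(len(x)):
        m = min(w, i + 1)                     # number of observations averaged
        Mi = sum(x[i - m + 1:i + 1]) / m      # moving average at period i
        half = 3.0 * sigma / sqrt(m)          # limits narrow as m grows
        rows.append((Mi, mu - half, mu + half))
    return rows

# Parameters from the illustration; the data here are placeholders:
rows = ma_chart([4.5] * 10, mu=4.5, sigma=0.5, w=5)
print(rows[0], rows[9])
```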
|
|
Control Chart
|
|
Conclusion
|
Note that there is no point that exceeds the control limits. Also note that for the initial periods i < 5 the control limits are wider than their steady-state values, because the moving averages there are based on fewer observations.
|
Comparison Between Cusum Chart and MA Chart
|
The
MA control chart is more effective than the Shewhart control chart in
detecting small process shifts. However, it is not as effective against
small shifts as cusum chart. Nevertheless, MA control chart is
considered to be simpler to implement than cusum chart in practice.
|
6.4 EWMA Control Chart
:
Exponentially Weighted Moving Average (EWMA):
What is it?[2]
|
Suppose the individual observations are x1,x2,x3,… The exponentially weighted moving average is defined as
|
|
where 0<λ≤1 is a constant and z0=μ, where μ is process mean.
|
Why is it called “Exponentially Weighted MA”?
|
|
The control limits are
|
|
where L denotes the width of the control limits.
|
The choice of the parameters L and λ will be discussed shortly.
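The EWMA recursion z_i = λx_i + (1-λ)z_{i-1} with its exact, time-varying limits can be sketched as follows. The parameters μ = 4.5 cN, σ = 0.5 cN, λ = 0.1, and L = 2.7 match the illustration later in this section; the data in the usage example are placeholders.

```python
from math import sqrt

def ewma_chart(x, mu, sigma, lam=0.1, L=2.7):
    """EWMA statistic z_i with exact (time-varying) 'L-sigma' limits."""
    z = mu                                     # z_0 = mu (the process mean)
    rows = []
    for i, xi in enumerate(x, start=1):
        z = lam * xi + (1.0 - lam) * z
        # variance factor of z_i: (lam/(2-lam)) * (1 - (1-lam)^(2i))
        var_factor = (lam / (2.0 - lam)) * (1.0 - (1.0 - lam) ** (2 * i))
        half = L * sigma * sqrt(var_factor)
        rows.append((z, mu - half, mu + half))
    return rows

rows = ewma_chart([4.5] * 5, mu=4.5, sigma=0.5)
print(rows[0], rows[4])   # limits widen toward their asymptotic value
```

Note how the limits start narrow and approach their steady-state width as i grows, which is why the exact limits matter for the first few samples.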
|
Choice of λ and L
|
The
choice of λ and L are related to average run length (ARL). ARL is the
average number of points that must be plotted before a point indicates
an out-of-control condition. So, ARL=1/p, where p stands for the
probability that any point exceeds the control limits.
|
As
we know, for three-sigma limit, p=0.0027, so ARL=1/0.0027=370. This
means, even if the process is in control, an out-of-control signal will
be generated every 370 samples, on the average.
|
In order to detect a small shift in the process mean, which is the goal behind setting up an EWMA control chart, the parameters λ and L must be selected to give the desired ARL performance.
|
The following table illustrates this.
|
|
As a rule of thumb, λ should be small to detect smaller shifts in the process mean. It is generally found that 0.05 ≤ λ ≤ 0.25 works well in practice. It is also found that L=3 (3-sigma control limits) works reasonably well, particularly with larger values of λ. But when λ is small, that is, λ ≤ 0.1, a choice of L between 2.6 and 2.8 is advantageous.
|
Illustration
|
Let
us take our earlier example of yarn strength in connection with MA
control chart. Here, the process mean is taken as μ=4.5 cN and process
standard deviation is taken as σ=0.5 cN. We choose λ=0.1 and L=2.7. We
would expect this choice would result in an in-control average run
length equal to 500 and an ARL for detecting a shift of one standard
deviation in the mean of ARL=10.3. The observations of yarn strength,
EWMA values, and the control limit values are shown in the following
table.
|
Table
|
|
|
Graph
|
|
Conclusion |
Note that there is no point that exceeds the control limits. We therefore conclude that the process is in control.
|
|
7. Acceptance Sampling Techniques |
7.1 Introduction :
|
Why Acceptance Sampling? [1] |
|
Acceptance Sampling: Attributes & Variables |
The input or output articles are available in lots or batches (population). It is practically impossible to check each and every article of a batch. So we randomly select a few articles (a sample) from a batch, inspect them, and then draw a conclusion about whether the batch is acceptable or not. This is called acceptance sampling.
|
Sometimes
the articles inspected are merely classified as defective or
non-defective. Then we deal with acceptance sampling of attributes.
|
Sometimes the property of the articles inspected is actually measured, then we deal with acceptance sampling of variables.
|
|
7.2 Acceptance Sampling of Attributes
:
Definition |
Let
us take a sample of size n randomly from a batch. If the number of
defective articles in the sample is not greater than a certain number c,
then the batch is accepted; otherwise, it is rejected. This is how we
define acceptance sampling plan.
|
Probability of Acceptance
|
Let
us assume that the proportion of defective articles in the batch is p.
Then, when a single article is randomly chosen from a batch, the
probability that it will be defective is p. Further assume that the
batch size is sufficiently larger than the sample size n so that this
probability is the same for each article in the sample. Thus, the
probability of finding exactly r number of defective articles in a
sample of size n is
|
|
Now the batch will be accepted if r ≤ c, i.e., if r = 0 or 1 or 2 or … or c. Then, according to the addition rule of probability, the probability of accepting the batch is
|
Operating Characteristic
|
|
This
tells that once n and c are known, the probability of accepting a batch
depends only on the proportion of defectives in the batch. Thus, Pa(p) is a function of p. This function is known as Operating Characteristic (OC) of the sampling plan.
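The OC function Pa(p) is just the binomial probability of finding at most c defectives in a sample of n, and can be sketched directly. The plan (n = 50, c = 2) in the usage example is hypothetical.

```python
from math import comb

def prob_accept(p, n, c):
    """OC function: probability of accepting a batch with fraction
    defective p under the sampling plan (sample size n, acceptance number c)."""
    return sum(comb(n, r) * p ** r * (1 - p) ** (n - r) for r in range(c + 1))

# Hypothetical plan n = 50, c = 2: Pa(p) falls as the batch quality worsens.
for p in [0.01, 0.02, 0.05, 0.10]:
    print(f"p = {p:.2f}  Pa = {prob_accept(p, 50, 2):.4f}")
```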
|
|
Acceptable Quality Level (AQL) p=p1
|
This
represents the poorest level of quality for the producer’s process that
the consumer would consider to be acceptable as process average.
Ideally, the producer should try to produce lots of quality better than p1. Assume there is a high probability, say 1-α, of accepting a batch of quality p1. Then, the probability of rejecting a batch of quality p1 is α, which is known as producer’s risk.
|
When p = p1, then Pa(p1) = 1 - α.
|
Lot Tolerance Proportion Defectives (LTPD) p=p2>p1
|
This represents the poorest level of quality that the consumer is willing to accept in an individual lot. Below this level, it is unacceptable to the consumer. Even so, there remains a small chance (probability) β that the consumer accepts such a bad batch; β is known as the consumer’s risk.
|
|
When p = p2, then Pa(p2) = β. LTPD is also known as the rejectable quality level (RQL).
|
Finding of n and c
|
|
Illustration [2]
|
Design a sampling plan for which AQL is 2%, LTPD is 5%, and the producer’s risk and consumer’s risk are both 5%.
|
|
|
Effect of Sample Size n on OC Curve
|
|
Effect of Acceptance Number c on OC Curve
|
|
7.3 Acceptance Sampling of Variables
:
Problem Statement
|
Consider
a producer supplies batches of articles having mean value μ of a
variable (length, weight, strength, etc.) and standard deviation σ of
the variable. The consumer has agreed that a batch will be acceptable if
|
|
where μ0 denotes the critical (nominal) mean value of the variable and T indicates the tolerance for the mean value of the variable.
|
Otherwise, the batch will be rejected.
|
Here, the producer’s risk α is the probability of rejecting a perfect batch, the one for which μ= μ0 .
|
The consumer’s risk β is the probability of accepting an imperfect batch, the one for which μ= μ0 ±T.
|
Sampling Scheme
|
Let us assume that the probability distribution of the mean of samples, each of size n, taken from a batch, is (or tends to be) normal with mean μ = μ0 and standard deviation σ/√n. Then, the batch will be accepted if
|
|
where t denotes the tolerance for the sample mean x̄. Otherwise, the batch will be rejected. |
|
Here,
the producer’s risk α is the probability of rejecting a perfect batch
and the consumer’s risk β is the probability of accepting an imperfect
batch.
|
The Producer’s Risk Condition
|
|
The Consumer’s Risk Condition
|
|
Finding n and t :
|
|
Illustration
|
A producer (spinner) supplies yarn of nominal linear density equal to 45 tex. The customer (knitter) accepts yarn if its mean linear density lies within a range of 45 ± 1.5 tex. As the knitter cannot test all the yarn supplied by the spinner, the knitter would like to devise an acceptance sampling scheme with 10% producer’s risk and 5% consumer’s risk. Assume the standard deviation of linear density within a delivery is 1.2 tex.
|
|
Assume
the mean linear density of yarn samples, each of size n, follows (or
tends to follow) normal distribution with mean 45 tex and standard
deviation 1.2 tex. Then, the standard normal variable takes the
following values
|
|
|
Thus,
the sampling scheme is as follows: Take a yarn sample of size nine and
accept the delivery if the sample mean lies in the range of 45±0.68 tex,
that is in-between 44.32 tex and 45.68 tex, otherwise, reject the
delivery.
|
8. Six Sigma and Its Application to Textiles |
8.1 What is Six Sigma? [1]
|
Six Sigma as a Statistical Measure
|
The lowercase Greek letter sigma (σ) stands for standard deviation, which is a measure of the variation present in a set of data, a group of items, or a process.
|
For
example, if you weigh fabric pieces of many different sizes, you’ll get
a higher standard deviation than if you weigh fabric pieces of all the
same size.
|
The sigma measure was developed to help
(1) Focus measurements on the paying customers of a business. Many of the measures that companies have traditionally used, such as labour hours, costs, and sales volume, evaluate things that are not related to what the customer really cares about.
(2) Provide a consistent way to measure and to compare different processes. Using the sigma scale, we could assess and compare the performance of, say, the cloth dyeing process with the cloth delivery process, two very different but critical activities.
|
In the language of Six Sigma, customer requirements and customer expectations are called “critical to quality (CTQ)”.
|
Six Sigma as a Goal
|
“Even
if you’re on the right track, you’ll get run over if you just sit
there.”
|
WILL ROGERS
|
When
a business violates important customer requirements, it is generating
defects, complaints, and cost. The greater the number of defects that
occur, the greater the cost of correcting them, as well as the risk of
losing the customers. Ideally, your company wants to avoid any defects
and the resulting cost in money and customer satisfaction.
|
We use the sigma measure to see how well or poorly a process performs and to give everyone a common way to express that
measure.
|
Six Sigma as a System of Management
|
A
significant difference between Six Sigma and seemingly similar programs
of past years is the degree to which management plays a key role in
regularly monitoring program results and accomplishments.
Managers at all levels are held accountable for a variety of measures:
|
• Customer satisfaction
|
• Key process performance
|
• Scorecard metrics on how the business is running
|
• Profit-and-loss statements
|
• Employee attitude
|
These measures provide feedback on the performance of each business unit. At regular meetings, managers review key measures within their units and select new Six Sigma projects that target the measures that have fallen off.
|
In short, Six Sigma is a system that combines both strong leadership and grassroots energy and involvement. In addition, the benefits of Six Sigma are not just financial. People at all levels of a Six Sigma company find that better understanding of customers, clearer processes, meaningful measures, and powerful improvement tools make their work more effective and more rewarding.
|
|
|
8.2 Six Sigma Quality [2]
Limitation of Three Sigma Quality
|
Let us consider that a product consists of 100 mutually independent parts and that the probability of producing each part within its 3-sigma control limits is 0.9973. Then the probability that the whole product (all 100 parts) is non-defective is
|
0.9973 × 0.9973 × 0.9973 × … (100 times) = (0.9973)^100 = 0.7631.
|
If it is assumed that all 100 parts must be non-defective for the product to function satisfactorily, then 23.69% of the products produced under three-sigma quality will be defective. This is not an acceptable situation, because many products have even more components assembled in them.
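The arithmetic above can be checked directly:

```python
# Probability that a product of 100 independent parts, each conforming
# with probability 0.9973, has no defective part at all.
p_good = 0.9973 ** 100
print(f"P(all 100 parts conform) = {p_good:.4f}")        # ~0.7631
print(f"Defective products       = {(1 - p_good) * 100:.2f}%")
```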
|
Six-sigma Quality
|
Under six-sigma quality, the probability that any specific part is non-defective is 0.999999998 (a defect rate of 0.002 parts per million, ppm). The probability that the hypothetical 100-part product above is non-defective is then (0.999999998)^100 ≈ 0.9999998, a much better situation.
|
Statistics of Six-sigma Quality
|
|
When the six-sigma concept was developed, an assumption was made that even when a process reached the six-sigma quality level, the process mean would still be subject to disturbances that could cause it to shift by as much as 1.5 standard deviations off target. Under this scenario, a six-sigma process would produce about 3.4 ppm defective.
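The 3.4 ppm figure follows from the normal distribution: with a 1.5σ shift, the nearer specification limit sits 4.5σ from the mean and the farther one 7.5σ away.

```python
from math import erf, sqrt

def Phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Six-sigma process with the mean shifted 1.5 sigma off target:
# the limits lie at 4.5 sigma and 7.5 sigma from the shifted mean.
p = Phi(-4.5) + Phi(-7.5)
print(f"defective fraction = {p * 1e6:.1f} ppm")  # ~3.4 ppm
```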
|
|
|
8.3 Themes of Six Sigma
[1]
Theme One: Genuine Focus on Customer
|
As
mentioned, companies launching Six Sigma have often been appalled to
find how little they really understand about their customers.
|
In
six sigma, customer focus becomes the top priority. For example, the
measures of Six Sigma performance begin with the customer. Six Sigma
improvements are defined by their impact on customer satisfaction and
value.
|
Theme Two: Data- and Fact-Driven Management
|
Six
Sigma takes the concept of “management by fact” to a new, more powerful
level. Despite the attention paid in recent years to improved
information systems, knowledge management, and so on, many business
decisions are still being based on opinions and assumptions. Six Sigma
discipline begins by clarifying what measures are key to gauging
business performance and then gathers data and analyzes key variables.
Then problems can be much more effectively defined, analyzed, and
resolved—permanently. At a more down-to-earth level, Six Sigma helps
managers answer two essential questions to support data-driven decisions
and solutions.
|
1. What data/information do I really need?
|
2.
How do we use that data/information to maximum benefit?
|
Theme Three: Processes Are Where the Action Is
|
Whether
focused on designing products and services, measuring performance,
improving efficiency and customer satisfaction, or even running the
business, Six Sigma positions the process as the key vehicle of success.
One of the most remarkable breakthroughs in Six Sigma efforts to date
has been convincing leaders and managers—particularly in service-based
functions and industries—that mastering processes is a way to build
competitive advantage in delivering value to customers.
|
Theme Four : Proactive Management
|
Most
simply, being proactive means acting in advance of events rather than
reacting to them. In the real world, though, proactive management means
making habits out of what are, too often, neglected business practices:
defining ambitious goals and reviewing them frequently, setting clear
priorities, focusing on problem prevention rather than firefighting, and
questioning why we do things instead of blindly defending them. Far
from being boring or overly analytical, being truly proactive is a
starting point for creativity and effective change. Six Sigma, as we’ll
see, encompasses tools and practices that replace reactive habits with a
dynamic, responsive, proactive style of management.
|
Theme Five: Boundaryless Collaboration
|
“Boundarylessness”
is one of Jack Welch’s mantras for business success. Years before
launching Six Sigma, GE’s chairman was working to break down barriers
and to improve teamwork up, down, and across organizational lines. The
opportunities available through improved collaboration within companies
and with vendors and customers are huge. Billions of dollars are lost
every day because of disconnects and outright competition between groups
that should be working for a common cause: providing value to
customers.
|
Theme Six: Drive for Perfection and Tolerance for Failure
|
How
can you be driven to achieve perfection and yet also tolerate failure?
In essence, though, the two ideas are complementary. No company will get
even close to Six Sigma without launching new ideas and
approaches—which always involve some risk. If people who see possible
ways to be closer to perfect are too afraid of the consequences of
mistakes, they’ll never try. Fortunately, the techniques we’ll review
for improving performance include a significant dose of risk management
so the downside of setbacks or failures is limited. The bottom line,
though, is that a company that makes Six Sigma its goal will have to
keep pushing to be ever more perfect while being willing to accept—and
manage—occasional setbacks.
|
8.4 Implementation of Six Sigma
On-ramp 1: The Business Transformation
|
Employees
and managers can often sense the need for a company to break away from
old habits and to transform itself. For those organizations with the
need, vision, and drive to launch Six Sigma as a full-scale change
initiative, this first on-ramp, business transformation, is the right
approach.
Everywhere, management will be trying to drive results from the
changes and to control their impact. As an employee, you may find
yourself on a Six Sigma team challenged to improve a critical business
process or a key product.
Teams chartered along the business-transformation highway are often
asked to look at key process areas and to make recommendations for
change.
|
These teams may scrutinize:
|
|
Distribution of the products by the company
|
|
The effectiveness of the sales process |
|
New-product development |
|
Critical customer complaints |
|
Product defects and habitual problems |
|
Information systems critical to business decision making |
|
Large-scale cost reductions
|
|
If
your company chooses the business-transformation on-ramp, you’ll know
it! This approach will have an impact on your work, how you measure your
work, how you interact with customers and peers, and how you and your
job performance are evaluated.
|
On-ramp 2: Strategic Improvement
|
The
middle on-ramp offers the most options. A strategic improvement effort
can be limited to one or two critical business needs, with teams and
training aimed at addressing major opportunities or weaknesses. Or, it
may be a Six Sigma effort concentrated in limited business units or
functional areas of the organization.
|
In
fact, to those directly involved, the strategic-improvement approach
can seem as all-encompassing as the all-out corporate wide effort, but
it simply is not as extensive or ambitious as the most aggressive
efforts. On the other hand, a number of companies that have started with
the more limited strategic focus have later expanded Six Sigma into a
full-scale corporate change initiative, and yours may evolve that way,
too.
|
On-ramp 3: Problem Solving
|
The
problem-solving approach is best for companies that want to tap into
the benefits of Six Sigma methods without creating major change ripples
within the organization. If your business takes this approach, there’s a
strong probability that only a few people will be significantly engaged
in the effort—unless, of course, it gets ramped up later. The benefit
of this approach is in focusing on meaningful issues and addressing
their root causes, using data and effective analysis rather than plain
old gut feel. As an example of this on-ramp, a major real estate company
is running a few training classes and putting people to work on some
key problems. Although the company will have a handful of Black Belts
trained and some projects completed in a few months, that’s about all
you can predict for now. This company, like most others taking a
problem-solving route, is really just kicking the tires on the Six Sigma
vehicle.
|
8.5 Process of Problem Solving Using Six Sigma
Phase 1: Identifying and Selecting Project(s)
|
In
this phase, management reviews a list of potential Six Sigma projects
and selects the most promising to be tackled by a team. Setting good
priorities is difficult but very important to making the team’s work pay
off.
|
We
counsel leaders to pick projects based on the “two Ms”: meaningful and
manageable. A project has to have real benefit to the business and
customers and has to be small enough so the team can get it done. At the
end of this phase, your leadership group should have identified
high-priority problems and given them some preliminary boundaries.
|
Phase 2: Forming The Team
|
Hand-in-hand
with problem recognition comes team and team leader (Black Belt or
Green Belt) selection. Naturally, the two efforts are related.
Management will try to select team members who have good working
knowledge of the situation but who are not so deeply rooted in it that
they may be part of the problem. Smart leaders realize that DMAIC team
participation should not be handed to idle slackers. If you are chosen
for a team, it means that you are viewed as someone with the smarts and
the energy to be a real contributor!
|
Phase 3: Developing The Charter
|
The
Charter is a key document that provides a written guide to the problem
or project. The Charter includes the reason for pursuing the project,
the goal, a basic project plan, scope and other considerations, and a
review of roles and responsibilities. Typically, parts of the Charter
are drafted by the Champion and added to and refined by the team. In
fact, the Charter usually changes over the course of the DMAIC project.
|
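As a rough sketch only, the elements of a Charter listed above can be captured in a simple structured record. The field names and sample values below are illustrative assumptions, not a prescribed Six Sigma format:

```python
from dataclasses import dataclass, field

@dataclass
class ProjectCharter:
    """Illustrative DMAIC project charter; field names are assumptions."""
    business_case: str   # the reason for pursuing the project
    goal_statement: str  # the measurable goal
    scope: str           # boundaries and other considerations
    plan: list = field(default_factory=list)   # basic project plan
    roles: dict = field(default_factory=dict)  # roles and responsibilities

# Hypothetical example, loosely based on the dyeing case study later on
charter = ProjectCharter(
    business_case="Shade variation in piece-dyed fabric raises rework cost",
    goal_statement="Reduce average shade-matching time",
    scope="Fabric dyeing process only",
    plan=["Define", "Measure", "Analyze", "Improve", "Control"],
    roles={"Champion": "Dyeing head", "Black Belt": "Process engineer"},
)
print(charter.goal_statement)
```

Because the Charter typically changes over the course of the project, keeping it as an explicit, editable record (rather than a one-off memo) makes the Champion's and team's refinements easy to track.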
Phase 4: Training The Team
|
Training
is a high priority in Six Sigma. In fact, some people say that
“training” is a misnomer because a lot of “classroom” time is spent
doing real work on the Black Belt’s or team’s project. The focus of the
training is on the DMAIC process and tools. Typically, this training
lasts one to four weeks. The time is spread out, though. After the first
week, the team leader and/or team members go back to their regular work
but budget a key portion of their time to working on the project. After
a two- to five-week “intersession” comes the second training session,
then another work period and another week of training.
|
Phase 5: DMAIC and Implementing Solutions
|
Nearly
all DMAIC teams are responsible for implementing their own solutions,
not just handing them off to another group. Teams must develop project
plans, training, pilots, and procedures for their solutions and are
responsible for both putting them in place and ensuring that they
work—by measuring and monitoring results—for a meaningful period of
time.
|
Phase 6: Handing Off The Solution
|
Eventually,
of course, the DMAIC team will disband, and members return to their
“regular” jobs or move on to the next project. Because they frequently
work in the areas affected by their solutions, team members often go
forward to help manage the new process or solution and ensure its
success. The hand-off is sometimes marked by a formal ceremony in which
the official owner of the solution, often called the “Process Owner,”
accepts responsibility to sustain the gains achieved by the team.
(Dancing and fun may go on until the wee hours of the morning. . . .)
Just as important, the DMAIC team members take a new set of skills and
experience to apply to issues that arise every day.
|
|
8.6 Case Studies
Case Study I: Reduction of Shade Variation of Linen Fabrics Using Six Sigma [3] |
The
objective of this work was to reduce the shade matching time in the
fabric dyeing process by optimizing the effect of the controllable
parameters. The problem was tackled using the DMAIC cycle of disciplined
Six Sigma methodology. Initially, the process baseline sigma level was
found to be 0.81, and a target sigma level was set at 1.76.
|
Actions
taken on the critical activities led to a reduction in the average
excess time of 0.0125 h/m. The overall process yield improved to
82%, with an improved sigma level of 2.34. The estimated annual saving is
to the tune of Rupees eighteen lakhs (over $40,000).
|
Project Background
|
In
the weaving department, for some qualities, fabrics are woven from the
dyed yarn (called yarn dyed quality) and for other qualities, fabrics
are prepared from grey/bleached yarn and coloring is carried out in the
dyeing department (called piece dyed quality). These fabrics are
single-colored fabrics. Shade variation is found in the case of piece
dyed quality fabrics, meaning the dyed fabric fails to meet the
customer’s expectation regarding that particular shade.
|
Owing
to variation of the shade as compared to the sample, color addition or
color stripping is carried out, which unnecessarily increases the
process cycle time and labor cost, along with wastage of dyes,
chemicals, power, and sometimes fabric.
|
The purposes of selecting this project were:
|
I |
to improve productivity by reducing the shade matching time.
|
II |
to reduce the cost of poor dyeing quality.
|
III |
to reduce the process cycle time.
|
IV |
to deliver the product to the customer on time.
|
V |
to boost the morale of shop-floor personnel regarding shade matching.
|
|
Objective of The Project
|
The
core objective of the project is to reduce the shade matching time in
the fabric dyeing process by optimizing the effect of the controllable
parameters involved in the dyeing process through a disciplined Six
Sigma methodology.
|
Six Sigma Methodology (DMAIC Cycle)
|
Define Phase of the Study
|
The steps in this phase are: |
I |
Developing a project charter, where primarily the project team and the project boundary are identified.
|
II |
Preparing a process diagram, which is shown in Table 1 in the form of supplier-input-process-output-customer (SIPOC).
|
III |
Defining Six Sigma terms such as unit (each lot of fabric under the
process of dyeing) and defect (a particular lot/unit generates a defect
if its shade does not meet the customer’s requirement at the first
attempt after the dyeing process).
|
IV |
Estimating a project completion time of four months, including system standardization.
|
|
|
Measure Phase of The Study
|
The steps in this phase are:
|
1) |
to identify the relevant parameters for measurement.
|
2) |
to develop proper data collection plans both for historical data and planned data.
|
3) |
to estimate the baseline process performance through the sigma level based on process yield, and to fix the target sigma level.
|
4) |
to estimate the status quo of the dyeing process through the identified parameters.
|
5) |
to segregate the significant factors for the next course of study through the analysis and improvement phases.
|
|
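Step 3 above, estimating a sigma level from a process yield, can be sketched with the inverse of the standard normal CDF plus the conventional 1.5-sigma long-term shift. Conventions for the shift vary between references, so the result differs slightly from the 2.34 the study reports for its 82% yield:

```python
from statistics import NormalDist

def sigma_level(process_yield, shift=1.5):
    """Short-term sigma level implied by a first-pass yield.

    Assumes the conventional 1.5-sigma shift between short- and
    long-term performance; published tables may use a slightly
    different convention, so figures may not match exactly.
    """
    return NormalDist().inv_cdf(process_yield) + shift

# An 82% first-pass yield gives roughly a 2.4 sigma level under
# this convention (the case study reports 2.34).
print(round(sigma_level(0.82), 2))  # -> 2.42
```

A 50% yield corresponds to exactly 1.5 sigma under this convention, which is a quick sanity check on the formula.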
Performance of Fabric Quality
|
|
Comparison of Defects Against Shades
|
|
Olive,
D.K. grey, teak, magenta, and navy were the most frequently processed
shades and also generated defective lots most frequently. These five
shades contributed 68% of the total defective lots.
|
Machinewise Performance of Fabric Lots
|
|
Case Study II: Reduction of Yarn Packing Defects Using Six Sigma [4] |
This
project aimed to reduce rejections during the packing of finished yarn
cones by measuring the current performance level and then initiating
proper remedial action.
|
Pareto Chart for No. of Defectives for various Defects
|
Almost 65% of rejections were due to weight variation of cones (i.e., either overweight or underweight).
|
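The Pareto breakdown behind a chart like this takes only a few lines to reproduce. The defect categories and counts below are hypothetical, chosen only so that overweight and underweight cones together account for about 65% of rejections as stated; the actual counts are not given in the text:

```python
# Hypothetical defect counts per category (illustrative only)
defects = {
    "overweight cones": 180,
    "underweight cones": 145,
    "damaged cones": 60,
    "wrong labelling": 55,
    "soiled cones": 40,
    "miscellaneous": 20,
}

# Sort by count descending and accumulate, Pareto-style
total = sum(defects.values())
cumulative = 0
for category, count in sorted(defects.items(), key=lambda kv: -kv[1]):
    cumulative += count
    print(f"{category:20s} {count:4d} {100 * cumulative / total:5.1f}%")
```

The cumulative column is what identifies the "vital few" categories to attack first; here the two weight-variation categories alone reach the 65% mark.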
|
Defectives due to Underweight
|
The major counts in terms of packing rejection due to underweight were Ne 2/42sP, Ne 4/12sP, Ne 2/20sP, Ne 3/20sP, Ne 1/30sV, Ne 3/12sP.
|
|
Defectives due to Overweight
|
The major counts in terms of packing rejection due to overweight were Ne 2/42sP, Ne 1/30sV, Ne 4/12sP, Ne 2/57sLY, Ne 3/20sP.
|