
Lean Six Sigma - Measurement Quality

Lean Six Sigma is a quality assurance methodology that aims to systematically reduce waste and improve the performance of a collaborative team, drawing on lean manufacturing. Contained within this concept is MSA (Measurement System Analysis), a method of validating test and measurement systems while minimizing interference from environmental and human factors.


In the day-to-day operation of production lines in companies and industries, thousands of data points are generated, and in order to use this information for decision-making, we need to guarantee the quality of all the data and all the processes involved.



"The quality of the information generated in a production line is one of the main factors to be taken into account when making decisions."

Analysis of Measurement Systems

As stated earlier, MSA is the set of techniques used to validate the measurements generated within a production line and in other industrial and business environments. In this article, we will focus on test systems within a production context and on the most widely used MSA method today, the Gauge R&R.


Gauge R&R


Gauge R&R, short for Gauge Repeatability and Reproducibility, is one of the MSA methods. It uses statistical analyses to prove the accuracy and precision of measurement systems, production systems, and in-line operators. The final objective is to carry out scripted tests proving the repeatability of the system, that is, the ability of a measurement system to obtain the same values at different times, as well as its reproducibility, that is, the ability of the measurement system to behave identically regardless of the operator performing the test or the device being tested. [1]

To ensure effectiveness, both in terms of repeatability and reproducibility of the system as a whole, the Gauge R&R methodology is divided into three processes, or types, according to the objective to be analyzed.


The final result of a Gauge R&R test is always a set of coefficients whose values make it possible to estimate the degree of precision, or the imprecision to be addressed, as well as which approaches should be taken to correct and effectively improve the production process.


Gauge R&R Type 1


The Type 1 Gauge R&R is responsible for analyzing the capability of the tester's measurement system on the production line, that is, the repeatability of a measurement and the size of its standard deviation.


In general terms, it analyzes exclusively the effects of the gauge itself, without taking other environmental factors into account. This method should be applied as soon as the measurement system is delivered and finalized, in order to directly assess the impact it will have on the rest of your production line and which improvements should be made to its measurements, whether directly in hardware or through software strategies such as bias and linearity corrections. [2]


For Type 1 of the Gauge R&R process, two main coefficients are evaluated: Cg and CgK. In this study, a minimum number of measurements is made on the same device under test, with the same operator, to extract statistical quantities such as the mean of the values obtained and the standard deviation of the measurements. It is also important to know at this point the expected (or nominal) value of the measurement to be made, and which tolerance values are allowed.


Coefficients Cg and CgK

The coefficient Cg is obtained from the following formula:


Cg = (K * T) / (L * s), where:


K = percentage of the tolerance value to be considered. (default is 0.2)

T = value of the tolerance magnitude of the signal to be measured.

L = number of standard deviations to consider, representing the amount of error allowed in the coefficient. (Default is 6, i.e., six sigma)

s = standard deviation calculated for the measurements taken.


The default minimum value for this coefficient is 1.33, meaning that the measured data are mostly distributed within the K range, with a 1:1.33 ratio of dispersion to the chosen K tolerance. This metric is entirely related to the distribution of the data, the standard deviation, and the precision of the measurements, but not necessarily to the bias of the data, that is, how close the mean is to the nominal expected value of the measurement. For that, CgK is used, which has the following formula:


CgK = (0.5 * K * T - |Xm - Xe| ) / (0.5 * L * s), where:


Xm = mean of the measured values.

Xe = nominal expected value of the measurement.

K = percentage of the tolerance value to be considered. (default is 0.2)

T = value of the tolerance magnitude of the signal to be measured.

L = number of standard deviations to consider, representing the amount of error allowed in the coefficient. (Default is 6, i.e., six sigma)

s = standard deviation calculated for the measurements taken.


The default minimum value for CgK is also 1.33. Since this coefficient analyzes the bias, that is, the distance from the measured mean to the expected value, the absolute value is used so that only one side of the dispersion is verified, which also explains the multiplications by 0.5. A value of 1.33 means that the mean of the data is close to the nominal value, within the K tolerance and the L*s error, by a factor of 1.33. As the coefficient describes the bias of the measurements as a whole, it is entirely related to the accuracy of the data.
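As a minimal sketch, both coefficients can be computed directly from the formulas above. The readings, nominal value, and tolerance below are illustrative only, not from any real study:

```python
import statistics

def cg(measurements, tolerance, k=0.2, l=6):
    """Cg = (K * T) / (L * s): precision relative to the K*T tolerance band."""
    s = statistics.stdev(measurements)  # sample standard deviation
    return (k * tolerance) / (l * s)

def cgk(measurements, nominal, tolerance, k=0.2, l=6):
    """CgK = (0.5*K*T - |Xm - Xe|) / (0.5*L*s): like Cg, but penalized by bias."""
    xm = statistics.mean(measurements)
    s = statistics.stdev(measurements)
    return (0.5 * k * tolerance - abs(xm - nominal)) / (0.5 * l * s)

# Ten repeated readings of a hypothetical nominal 10.0 signal, tolerance T = 0.5.
readings = [10.02, 9.99, 10.01, 10.00, 9.98, 10.01, 10.00, 9.99, 10.02, 10.00]
print(f"Cg  = {cg(readings, 0.5):.2f}")        # ~1.27
print(f"CgK = {cgk(readings, 10.0, 0.5):.2f}")  # ~1.22
```

Note that CgK is always at most Cg, since the bias term can only shrink the numerator; here both fall just below 1.33, so this hypothetical gauge would need improvement.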


Gauge R&R Type 2 and Type 3


The Type 2 and Type 3 Gauge R&R methodologies are used to verify the reproducibility of the system, that is, whether the data remain consistent when the device under test or the system operator changes. The main idea of these methods is to analyze the variances obtained for each test configuration.


For Gauge R&R Type 2, the number of devices under test is varied, along with the number of operators, across all measurements. With the data extracted from the measurements, it is possible to calculate the variance between operators, between tested devices, and also any interaction between the two. The most appropriate analysis method for this type is the two-way ANOVA (Analysis of Variance) [3], which extracts the variance for each study variable (operators, devices, and the interaction between operators and devices). From these values, the p-value is calculated to determine whether there is interaction between the variables, indicating which variables influencing the variation and dispersion of the data must be changed or corrected.
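As a rough sketch of the crossed study, the sums of squares of a balanced two-way ANOVA can be computed by hand. All operator names, part names, and readings below are hypothetical; a real analysis would go on to compute mean squares, F statistics, and p-values with a statistics package:

```python
from itertools import chain

# data[operator][part] = list of repeated readings (balanced design).
data = {
    "op1": {"p1": [10.01, 10.02], "p2": [9.80, 9.81], "p3": [10.20, 10.19]},
    "op2": {"p1": [10.03, 10.02], "p2": [9.82, 9.83], "p3": [10.21, 10.22]},
}

ops = list(data)
parts = list(data[ops[0]])
r = len(data[ops[0]][parts[0]])   # repeats per operator/part cell
n = len(ops) * len(parts) * r     # total observations

def mean(xs):
    return sum(xs) / len(xs)

grand = sum(chain.from_iterable(data[o][p] for o in ops for p in parts)) / n

# Sums of squares for each source of variation.
ss_op = len(parts) * r * sum(
    (mean([x for p in parts for x in data[o][p]]) - grand) ** 2 for o in ops)
ss_part = len(ops) * r * sum(
    (mean([x for o in ops for x in data[o][p]]) - grand) ** 2 for p in parts)
ss_total = sum((x - grand) ** 2
               for o in ops for p in parts for x in data[o][p])
ss_repeat = sum((x - mean(data[o][p])) ** 2
                for o in ops for p in parts for x in data[o][p])
ss_inter = ss_total - ss_op - ss_part - ss_repeat  # interaction residual

print(f"SS operator={ss_op:.5f}  SS part={ss_part:.5f}  "
      f"SS interaction={ss_inter:.5f}  SS repeatability={ss_repeat:.5f}")
```

In this toy data set the part-to-part sum of squares dominates, which is the desirable outcome: most of the observed variation comes from the parts themselves, not from the operators or the gauge.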


Gauge R&R Type 3, on the other hand, analyzes the measurements of the same operator acting on different devices under test, in the same measurement system. This method consists of taking the average of the absolute differences between the values measured for the devices and multiplying it by the correction factor K, which for two measurements of the same part has a value of 0.8862, thus obtaining the EV (Equipment Variation) coefficient, also known as GRR (Gauge R&R). [4]


EV = 0.8862 * Sum( |Xi1 - Xi2| ) / N, where i is the device number, from 1 to N, with N being the number of devices.


The value analyzed in this study is %EV, a normalization of the EV value according to the following formula:


%EV = (L * EV)/T, where:


T = value of the tolerance magnitude of the signal to be measured.

L = number of standard deviations to consider, representing the amount of error allowed in the coefficient. (Default is 6, i.e., six sigma)


The value of %EV (or %GRR) must be below 0.2 for the variation between devices to be considered acceptable according to the stipulated standard, but values between 0.2 and 0.3 can be tolerated when the cost of improving the procedure is prohibitive.
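A minimal sketch of the Type 3 calculation, using hypothetical pairs of readings (two runs per device) and an assumed tolerance:

```python
K1 = 0.8862      # correction factor for two measurements per part
L = 6            # number of standard deviations (six sigma)
tolerance = 0.5  # T, hypothetical tolerance of the measured signal

# One (run 1, run 2) pair of readings per device under test.
pairs = [(10.01, 10.03), (9.81, 9.80), (10.20, 10.18), (9.99, 10.00)]

# EV = K1 * Sum(|Xi1 - Xi2|) / N
ev = K1 * sum(abs(a - b) for a, b in pairs) / len(pairs)
pct_ev = (L * ev) / tolerance
print(f"%EV = {pct_ev:.3f}")  # below 0.2, so acceptable
```

With these illustrative numbers %EV comes out around 0.16, below the 0.2 threshold, so the between-device variation would be considered acceptable.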


At Blue Eyes Systems we offer software to calculate MSA. Want to know more? Contact our sales team.




References:


[1] The Engineering Archive, "Measurement Systems in Manufacturing". https://theengineeringarchive.com/sigma/page-measurment-systems.html

[2] Minitab, "A type 1 gage study assesses the capability of a measurement process". https://support.minitab.com/en-us/minitab/19/help-and-how-to/quality-and-process-improvement/measurement-system-analysis/supporting-topics/other-gage-studies-and-measures/type-1-gage-study/

[3] Minitab, "Example of Crossed Gage R&R Study". https://support.minitab.com/en-us/minitab/19/help-and-how-to/quality-and-process-improvement/measurement-system-analysis/how-to/gage-study/crossed-gage-r-r-study/before-you-start/example/

[4] Colin Chen, "MSA – Type III Gage R&R". https://hsc251.com/2020/11/15-msatypeiii/


