Sažetak | The Guide to the Expression of Uncertainty in Measurement established the general rules for expressing the complete measurement result. Motivated by the highlighted need to express the complete measurement result for conformity assessment, comparison of measurement results, establishment of traceability, application of metrological principles in machine learning, etc., different methods for evaluating measurement uncertainty are applied in this thesis and the influence of the standard uncertainty evaluations on the overall measurement uncertainty is investigated. The uncertainty calculations are carried out using the uncertainty framework of the Guide to the Expression of Uncertainty in Measurement (the GUM method), the Monte Carlo method, the adaptive Monte Carlo procedure, and the Bayesian method. In the uncertainty calculations, regardless of the nature of the input quantities or of the measurement model, the distributions of the input quantities are specified entirely objectively, without imposing any restrictions on the measurement result. The relation between the input quantities and the output quantity is established through a measurement model or an observation model. The influence of different approaches to evaluating the standard uncertainties on the overall measurement uncertainty is investigated on different calibration models. The prior distributions for the input quantities (GUM method, MCS method, Bayesian method) are formed from the available information, while the prior distributions for the output quantity (Bayesian method) are constructed as weakly informative or noninformative distributions. If a reliable informative prior distribution exists, there is no valid reason to discard it; using such a distribution yields a transparent model that is easier to critique, check, and update. For the output quantity, a 95 % symmetric and/or a 95 % shortest coverage interval is specified and used to express the complete measurement result. Based on the conducted research, criteria for selecting the uncertainty evaluation method are established. Furthermore, the influence of the standard uncertainty on conformity assessment decision making is investigated using the Bayesian method: the specific risk is calculated from the posterior distribution and the global risk from the joint distribution. In order to implement metrological principles in machine learning, supervised machine learning methods are tested with traceable data. Five-fold cross-validation is used when applying the supervised methods, confusion matrices are calculated for this validation, and the accuracy criterion is used to compare the obtained results. The programming languages Python and MATLAB are used for the uncertainty calculations, the conformity assessment, and the application of metrological principles in machine learning. |
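As a minimal illustration of the coverage intervals mentioned above, the Python sketch below shows how a 95 % probabilistically symmetric and a 95 % shortest coverage interval can be obtained from Monte Carlo samples of the output quantity. The additive model y = x1 + x2, the distribution parameters, and the number of trials are illustrative assumptions, not values taken from the thesis.

```python
# Minimal sketch (not the author's code): 95 % symmetric vs. 95 % shortest
# coverage interval from Monte Carlo samples of the output quantity.
import numpy as np

rng = np.random.default_rng(1)
M = 200_000                                  # number of Monte Carlo trials (assumed)
x1 = rng.normal(10.0, 0.02, M)               # Type A style knowledge: normal (assumed)
x2 = rng.uniform(-0.05, 0.05, M)             # Type B style knowledge: rectangular (assumed)
y = x1 + x2                                  # illustrative measurement model y = x1 + x2

p = 0.95
# 95 % probabilistically symmetric coverage interval: 2.5 % and 97.5 % quantiles
sym = np.quantile(y, [(1 - p) / 2, (1 + p) / 2])

# 95 % shortest coverage interval: scan all intervals containing 95 % of the
# sorted samples and keep the narrowest one
ys = np.sort(y)
k = int(np.ceil(p * M))
widths = ys[k - 1:] - ys[:M - k + 1]
i = int(np.argmin(widths))
shortest = (ys[i], ys[i + k - 1])

print(f"estimate y = {y.mean():.4f}, u(y) = {y.std(ddof=1):.4f}")
print(f"95 % symmetric interval: [{sym[0]:.4f}, {sym[1]:.4f}]")
print(f"95 % shortest interval:  [{shortest[0]:.4f}, {shortest[1]:.4f}]")
```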
Sažetak (engleski) | The general rules for expressing the complete measurement result are based on the Guide to the Expression of Uncertainty in Measurement. In line with the highlighted need to express the complete measurement result for conformity assessment, comparison of results, establishment of traceability, application of metrological principles in machine learning, etc., the influence of different approaches to quantifying the standard uncertainties on the overall measurement uncertainty is researched.

INTRODUCTION In this chapter, the necessity of researching the influence of different standard uncertainty quantification approaches on the overall measurement uncertainty for conformity assessment, comparison of results, establishment of traceability, and application of machine learning in metrology is presented. The doctoral thesis is motivated by participation in research projects and by a review of the literature in this field. In addition, the goal, hypothesis, methods, and research plan are presented, including the expected scientific contribution.

METHODS In the methods chapter, the essential metrological terms, the uncertainty quantification methods, and the machine learning methods are introduced. The uncertainty quantifications are performed by applying the Guide to the Expression of Uncertainty in Measurement (the GUM method), the Monte Carlo method, the adaptive Monte Carlo method, and the Bayesian method. The GUM method for quantifying uncertainty is based on the law of propagation of uncertainty; the output quantity is represented by a particular distribution from which the coverage interval is calculated. Furthermore, Type A and Type B information, needed for quantifying the standard and expanded measurement uncertainty, represents the knowledge about the input quantities. The Monte Carlo method for quantifying uncertainty is based on randomly drawing samples from the informative prior distributions; from the calculated values of the output quantity its distribution is obtained, and the parameters that represent this distribution and the coverage interval are calculated. The Bayesian method for quantifying uncertainty is based on combining the prior knowledge about the output quantity with the data collected during the calibration process; the marginal distribution is calculated from the joint posterior distribution, and the parameters that represent this marginal distribution and the coverage interval are calculated. In addition, the relation between metrological principles in machine learning and machine learning principles in metrology is presented in this doctoral thesis. The key challenge in applying machine learning methods in metrology is preserving the basic metrological features and the trust in the obtained results. In short, the supervised machine learning methods are tested with traceable data.
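As a minimal, non-authoritative sketch of two of the methods described above, the Python snippet below applies the law of propagation of uncertainty to a simple additive model and then performs a conjugate Bayesian update in which an informative normal prior for the output quantity is combined with calibration data of known standard deviation. The model, the numerical estimates, and the prior parameters are illustrative assumptions, not the thesis's calibration models.

```python
# Minimal sketch (illustrative assumptions throughout), showing the GUM law of
# propagation of uncertainty and a conjugate Bayesian update for a calibration.
import numpy as np

# (i) GUM method: law of propagation of uncertainty for the linear model y = x1 + x2
x1, u1 = 10.000, 0.020        # estimate and standard uncertainty (Type A, assumed)
x2, u2 = 0.000, 0.029         # estimate and standard uncertainty (Type B, ~0.05/sqrt(3))
c1, c2 = 1.0, 1.0             # sensitivity coefficients dy/dx1, dy/dx2
y = c1 * x1 + c2 * x2
u_c = np.sqrt((c1 * u1) ** 2 + (c2 * u2) ** 2)   # combined standard uncertainty
k = 2.0                                          # coverage factor for ~95 % coverage
print(f"GUM: y = {y:.3f}, u_c = {u_c:.3f}, U = k*u_c = {k * u_c:.3f}")

# (ii) Bayesian method: normal prior for the output quantity combined with
# calibration readings of known standard deviation sigma (conjugate update)
mu0, tau0 = 10.00, 0.05       # informative prior mean and standard deviation (assumed)
sigma = 0.03                  # known repeatability standard deviation (assumed)
data = np.array([10.02, 10.04, 10.01, 10.03])    # illustrative calibration readings
n, ybar = data.size, data.mean()
post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)
post_mean = post_var * (mu0 / tau0**2 + n * ybar / sigma**2)
lo, hi = post_mean - 1.96 * np.sqrt(post_var), post_mean + 1.96 * np.sqrt(post_var)
print(f"Bayes: posterior mean = {post_mean:.4f}, sd = {np.sqrt(post_var):.4f}, "
      f"95 % interval = [{lo:.4f}, {hi:.4f}]")
```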
RESULTS In this doctoral thesis, in quantifying the measurement uncertainties, regardless of the nature of the input quantities or of the measurement model, the probability density functions for the input quantities are specified entirely objectively, without any further restrictions on the measurement result. By applying different measurement models, the influence of different approaches to quantifying the standard uncertainties on the overall measurement uncertainty is researched. The prior distributions for the input quantities (GUM method, MCS method, Bayesian method) are based on the available information, and the prior distributions for the output quantities (Bayesian method) are formed as less informative or noninformative distributions. If a reliable informative prior distribution is available, there is no valid reason to discard it; using such distributions yields a more transparent model, which makes it easier to critique, check, and update. The relation between the input quantities and the output quantity is established by using measurement or observation models. The output quantity is specified by a 95 % symmetric and/or a 95 % shortest coverage interval; these intervals are used for presenting the complete measurement result. Based on the research, the criteria for selecting the uncertainty quantification method are established. In addition, the standard uncertainty impact on the conformity assessment decisions is researched by applying the Bayesian method: the specific risk is calculated from the posterior distribution, and the global risk is calculated from the joint distribution. According to the conducted research, the standard uncertainty impact on the overall measurement uncertainty in the conformity assessment decision-making process is presented. Moreover, the supervised machine learning methods are tested by using traceable data in order to gain trust in the obtained results. The data for testing the machine learning methods are publicly available. The first supervised method used for data classification is the Support Vector Machine, and the second is Softmax Regression. The data for the supervised machine learning methods are divided into two groups: 80 % constitutes the training data and 20 % the testing data. Confusion matrices from five-fold cross-validation and the accuracy criterion are used for presenting the results and comparing the methods. The programming languages Python and MATLAB are used for calculating and presenting the final results.

CONCLUSION In this chapter, the influence of different approaches to quantifying the standard uncertainties on the overall measurement uncertainty is summarized. The criteria for selecting the uncertainty quantification methods are elaborated, and the guidelines for future research are provided. |
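As a minimal sketch of the supervised classification workflow summarized above (80 %/20 % train/test split, Support Vector Machine and Softmax Regression, five-fold cross-validation, confusion matrices, and the accuracy criterion), the Python snippet below uses scikit-learn. The abstract does not name the publicly available dataset, so scikit-learn's built-in iris data is used here purely as an illustrative stand-in, not as the data analysed in the thesis.

```python
# Minimal sketch (not the author's pipeline) of the supervised classification
# workflow: 80/20 split, SVM and softmax regression, 5-fold CV, confusion matrix.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)                      # illustrative stand-in dataset
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)   # 80 % training, 20 % testing

models = {
    "Support Vector Machine": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "Softmax Regression": make_pipeline(StandardScaler(),
                                        LogisticRegression(max_iter=1000)),
}

for name, model in models.items():
    cv_acc = cross_val_score(model, X_train, y_train, cv=5)   # five-fold CV accuracy
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    print(name)
    print("  5-fold CV accuracy:", cv_acc.round(3), "mean =", cv_acc.mean().round(3))
    print("  test accuracy:", round(accuracy_score(y_test, y_pred), 3))
    print("  confusion matrix:\n", confusion_matrix(y_test, y_pred))
```

The accuracy values from the five folds and from the held-out test set give the comparison criterion mentioned above, while the confusion matrix shows how the misclassifications are distributed across the classes.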