Initial Aspects of Forensic Failure Investigation
Published in Colin R. Gagg, Forensic Engineering, 2020
In many cases it is common practice to obtain an identical product or component (an exemplar unit) for direct comparison with its broken counterpart. Visual comparison is a simple method for checking whether parts are identical and, if not, for identifying the reason for any divergence. Many manufactured articles are now identifiable from logos, date stamps and manufacturing codes either printed or embossed on the product, or in a concealed position within the product[1] (is it a counterfeit?). If the material of construction is unknown, it must be analysed to determine its constituent parts. However, although direct comparison with an unaffected product or component has a clear advantage, it is not always possible (air accidents, for example). As a minimum, visual comparison should aim to assess the following product features:
- Overall dimensions
- Distortion in dimensions
- Fit with matching parts
- Surface quality
- Signs of environmental attack
- Traces of wear
- Identity marks
- Possible counterfeit product/component
Disintegration, fusion and edge net fidelity
Published in John A. Franklin, Takis Katsabanis, Measurement of Blast Fragmentation, 2018
Simple visual comparison allows the operator to recognize whether breakdown and/or fusion errors are likely to be important or negligible. However, some way of quantifying the extent of breakdown would be useful, to allow comparison of alternative edge detection methods or differences in lighting, and to allow rejection of results from images below a specified quality threshold.
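The quantification suggested above can be sketched in code. The source does not define a specific metric, so the following is a hypothetical example: we assume an edge-detection step has produced a list of fragment areas (in pixels), define a simple "breakdown index" as the fraction of delineated area held by implausibly small fragments, and reject any image whose index exceeds a specified quality threshold. The function names, the `min_area` cut-off and the threshold value are all illustrative assumptions, not part of the original method.

```python
# Hypothetical sketch of a quality-threshold filter for edge-detection output.
# Input: fragment areas (in pixels) delineated from one image.

def breakdown_index(areas, min_area=50):
    """Fraction of delineated area held by fragments below min_area pixels.

    A high value suggests breakdown (over-segmentation into spurious fines).
    min_area is an assumed cut-off, not a value from the source.
    """
    total = sum(areas)
    if total == 0:
        return 1.0  # nothing delineated: treat as fully degraded
    fines = sum(a for a in areas if a < min_area)
    return fines / total

def accept_image(areas, threshold=0.2, min_area=50):
    """Accept the image only if its breakdown index is within the threshold."""
    return breakdown_index(areas, min_area) <= threshold

# Example: two candidate edge-detection results for the same image
well_delineated = [400, 350, 500, 60, 700]        # few, large fragments
over_segmented = [40, 30, 20, 35, 400, 25, 10]    # many spurious fines

print(accept_image(well_delineated))  # True  (index 0.00)
print(accept_image(over_segmented))   # False (index ~0.29)
```

The same index could equally serve to compare alternative edge-detection methods or lighting set-ups on a common image, as the passage suggests: the method yielding the lower breakdown index for the same scene is the less error-prone one.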
Classification and selection of sheet forming processes with machine learning
Published in International Journal of Computer Integrated Manufacturing, 2018
Elia Hamouche, Evripides G. Loukaides
NNs can be frustrating as their predictions often lack traditional rigour. The weights the NN self-assigns during training aim to approximate a specific function; unfortunately, the link between these weights and the function is not direct, clear or easily presented. ‘Confusion matrices’ are sometimes used to offer some qualitative insight into the NN’s inner workings. These allow a direct visual comparison of performance. Here, such a matrix was produced to further understand the limitations of the geometry representation method. This confusion matrix was produced for each geometry representation method using CNN Configuration 3. These results are presented in Figure 9. The diagonal of each confusion matrix represents the percentage of correct classifications of each process. The off-diagonal cells represent the percentage of cases in which the correct process was misclassified as another process.
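The row-percentage confusion matrix described above can be sketched as follows. The paper's CNN, dataset and process labels are not reproduced here, so this is a minimal illustration assuming integer-coded true and predicted labels for a four-process classification task; the function name and example labels are hypothetical.

```python
import numpy as np

def confusion_matrix_pct(y_true, y_pred, n_classes):
    """Row-normalised confusion matrix in percent.

    Rows are true classes; entry [i, j] is the percentage of class-i samples
    predicted as class j. The diagonal therefore gives the correct-
    classification rate per process, as described in the text.
    """
    counts = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        counts[t, p] += 1
    row_totals = counts.sum(axis=1, keepdims=True)
    row_totals[row_totals == 0] = 1  # avoid division by zero for unseen classes
    return 100.0 * counts / row_totals

# Illustrative labels for four forming processes (not the paper's data)
y_true = [0, 0, 0, 1, 1, 2, 2, 2, 3, 3]
y_pred = [0, 0, 1, 1, 1, 2, 0, 2, 3, 2]
cm = confusion_matrix_pct(y_true, y_pred, n_classes=4)
print(np.round(cm, 1))
```

Each row sums to 100%, so misclassification patterns are directly comparable across processes regardless of how many samples each class contains.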