EP4200603A1 - Method for characterizing a part through non-destructive inspection
Info
- Publication number
- EP4200603A1 (application EP21756011.9A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- neural network
- database
- defect
- measurements
- block
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 39
- 230000001066 destructive effect Effects 0.000 title claims description 6
- 238000007689 inspection Methods 0.000 title description 3
- 238000005259 measurement Methods 0.000 claims abstract description 129
- 230000007547 defect Effects 0.000 claims abstract description 98
- 238000013528 artificial neural network Methods 0.000 claims abstract description 83
- 238000000605 extraction Methods 0.000 claims abstract description 47
- 238000013527 convolutional neural network Methods 0.000 claims abstract description 46
- 239000011159 matrix material Substances 0.000 claims abstract description 14
- 230000013016 learning Effects 0.000 claims description 86
- 238000012545 processing Methods 0.000 claims description 20
- 238000012549 training Methods 0.000 claims description 15
- 238000012512 characterization method Methods 0.000 claims description 13
- 239000000463 material Substances 0.000 claims description 10
- 230000005855 radiation Effects 0.000 claims description 9
- 230000005540 biological transmission Effects 0.000 claims description 6
- 230000000694 effects Effects 0.000 claims description 6
- 230000005284 excitation Effects 0.000 claims description 6
- 230000001902 propagating effect Effects 0.000 claims description 6
- 230000032798 delamination Effects 0.000 claims description 5
- 230000008569 process Effects 0.000 claims description 5
- 239000002131 composite material Substances 0.000 claims description 4
- 238000001514 detection method Methods 0.000 claims description 3
- 230000007613 environmental effect Effects 0.000 claims description 3
- 230000015572 biosynthetic process Effects 0.000 claims description 2
- 230000007797 corrosion Effects 0.000 claims description 2
- 238000005260 corrosion Methods 0.000 claims description 2
- 238000004088 simulation Methods 0.000 description 9
- 238000012360 testing method Methods 0.000 description 9
- 238000012986 modification Methods 0.000 description 8
- 230000004048 modification Effects 0.000 description 8
- 238000009659 non-destructive testing Methods 0.000 description 5
- 230000007847 structural defect Effects 0.000 description 5
- 230000008901 benefit Effects 0.000 description 4
- 239000000835 fiber Substances 0.000 description 4
- 230000004913 activation Effects 0.000 description 3
- 239000000956 alloy Substances 0.000 description 3
- 229910045601 alloy Inorganic materials 0.000 description 3
- 238000013473 artificial intelligence Methods 0.000 description 3
- 238000005452 bending Methods 0.000 description 3
- 230000006870 function Effects 0.000 description 3
- 230000004044 response Effects 0.000 description 3
- 230000005670 electromagnetic radiation Effects 0.000 description 2
- 230000001747 exhibiting effect Effects 0.000 description 2
- 238000005286 illumination Methods 0.000 description 2
- 239000000203 mixture Substances 0.000 description 2
- 238000012544 monitoring process Methods 0.000 description 2
- 238000011176 pooling Methods 0.000 description 2
- 230000006978 adaptation Effects 0.000 description 1
- 238000004458 analytical method Methods 0.000 description 1
- 238000013459 approach Methods 0.000 description 1
- 230000000052 comparative effect Effects 0.000 description 1
- 230000006835 compression Effects 0.000 description 1
- 238000007906 compression Methods 0.000 description 1
- 238000013075 data extraction Methods 0.000 description 1
- 238000011065 in-situ storage Methods 0.000 description 1
- 230000004807 localization Effects 0.000 description 1
- 238000012423 maintenance Methods 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 238000013507 mapping Methods 0.000 description 1
- 239000002184 metal Substances 0.000 description 1
- 210000002569 neuron Anatomy 0.000 description 1
- 238000002601 radiography Methods 0.000 description 1
- 230000009467 reduction Effects 0.000 description 1
- 230000000717 retained effect Effects 0.000 description 1
- 230000035939 shock Effects 0.000 description 1
- 238000001931 thermography Methods 0.000 description 1
- 238000002604 ultrasonography Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N29/00—Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
- G01N29/44—Processing the detected response signal, e.g. electronic circuits specially adapted therefor
- G01N29/4481—Neural networks
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/8851—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N23/00—Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00
- G01N23/02—Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material
- G01N23/06—Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material and measuring the absorption
- G01N23/18—Investigating the presence of flaws defects or foreign matter
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N25/00—Investigating or analyzing materials by the use of thermal means
- G01N25/72—Investigating presence of flaws
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N27/00—Investigating or analysing materials by the use of electric, electrochemical, or magnetic means
- G01N27/72—Investigating or analysing materials by the use of electric, electrochemical, or magnetic means by investigating magnetic variables
- G01N27/82—Investigating or analysing materials by the use of electric, electrochemical, or magnetic means by investigating magnetic variables for investigating the presence of flaws
- G01N27/90—Investigating or analysing materials by the use of electric, electrochemical, or magnetic means by investigating magnetic variables for investigating the presence of flaws using eddy currents
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N2291/00—Indexing codes associated with group G01N29/00
- G01N2291/26—Scanned objects
Definitions
- the technical field of the invention is the interpretation of measurements by non-destructive testing carried out on a mechanical part or part of a structure.
- NDT stands for Non-Destructive Testing.
- the objective is to carry out a check and/or to detect and monitor the appearance of structural defects. This involves monitoring the integrity of an inspected part, so as to prevent the occurrence of accidents, or to extend the period of use of the part under good safety conditions.
- NDT is commonly implemented on sensitive equipment, so as to optimize replacement or maintenance.
- the applications are numerous in the inspection of industrial equipment, for example in oil exploitation, in the nuclear industry, or in transport, for example in aeronautics.
- the sensors used in NDT are non-destructive sensors, causing no damage to the parts inspected.
- the inspected parts can be structural elements of industrial equipment or aircraft, or civil engineering works, for example bridges or dams.
- the methods used are varied. They can for example use X-rays, ultrasonic waves or eddy-current detection.
- the sensors are connected to computer means, so as to be able to interpret the measurements made.
- the presence of a defect, in a part, leads to a signature of the defect, measurable by a sensor.
- the computer means perform an inversion. This involves, from the measurements, obtaining quantitative data relating to the defect, for example its position, or its shape, or its dimensions.
- the inversion can be performed by taking into account direct analytical models, for example polynomial models, making it possible to establish a relationship between the characteristics of a defect and measurements resulting from a sensor.
- the inversion of the model allows an estimation of said characteristics from measurements taken.
- the characteristics of the defects can be estimated by implementing supervised artificial intelligence algorithms, for example neural networks.
- a difficulty linked to the use of neural networks is the need for a learning phase that is as complete as possible, so as to optimize the estimation performance. Such a learning phase takes time and requires a large amount of data.
- the inventors propose a method addressing this issue. The objective is to facilitate the learning of a neural network intended to perform an inversion, while maintaining good performance in estimating the characteristics of the defect.
- a first object of the invention is a method for characterizing a part, the part being likely to comprise a defect, the method comprising the following steps: a) carrying out non-destructive measurements using a sensor, the sensor being placed on the part or facing the part; b) formation of at least one measurement matrix using the measurements carried out during step a); c) use of the matrix as input data for a convolutional neural network, the convolutional neural network comprising:
- an extraction block configured to extract characteristics from each input data
- a classification block configured to carry out a classification of the characteristics extracted by the extraction block, the classification block leading to an output layer comprising at least one node; d) depending on the value of each node of the output layer, detection of the presence of a defect in the part, and possible characterization of the detected defect; the method comprising, prior to steps c) and d):
- a first database comprising measurements carried out or simulated, on a first model part, according to a first configuration, the first configuration being parameterized by parameters, the first database being formed by considering at least one variable parameter and at least one fixed parameter;
- a first neural network comprising an extraction block and a processing block, the processing block being configured to process characteristics extracted by the extraction block, the first neural network being first trained, using the first database; the method being characterized in that it also comprises:
- a constitution of a second database comprising measurements carried out or simulated on a second model part, representative of the characterized part, according to a second configuration, the second configuration being parameterized by modifying at least one fixed parameter of the first configuration;
- a second learning of a second convolutional neural network, the second convolutional neural network comprising the extraction block of the first neural network, and a classification block, the latter being configured during the second learning;
- such that, during step c), the neural network used is the second convolutional neural network resulting from the second learning.
- by measurement is meant a measurement of a physical quantity liable to vary in the presence of a defect in the part. It can be an acoustic, electric, electrostatic, magnetic, electromagnetic (for example an intensity of radiation), or mechanical quantity.
- the second configuration can notably take into account:
- the first database may comprise measurements performed or simulated on a model part comprising the defect.
- the processing block of the first neural network is a classification block, configured, during the first learning, to carry out a classification of the characteristics extracted by the extraction block, the first neural network being a convolutional neural network.
- the classification block of the second neural network can be initialized by using the classification block of the first neural network.
- the first neural network is of the auto-encoder type.
- Said processing block of the first neural network can be configured, during the first learning, to reconstruct data, coming from the first database, and forming input data of the first neural network.
- the defect may be of the type: delamination, and/or crack, and/or perforation, and/or crack propagating from a perforation, and/or presence of a porous zone, and/or presence of an inclusion, and/or presence of corrosion.
- the part can be made of a composite material, comprising components assembled together.
- the defect can then be an assembly defect between the components.
- the measurements can be representative of a spatial distribution:
- the defect may be a variation of the spatial distribution with respect to a reference spatial distribution.
- the reference spatial distribution may have been previously modeled or established experimentally.
- the measurements can be of the type:
- the method can be such that:
- the first database is made up from measurements carried out experimentally;
- the first database and the second database are formed from simulated measurements
- the first database and the second database are formed from experimental measurements.
- the first configuration and the second configuration can be parameterized by at least one of the parameters chosen from:
- the measurement conditions may include at least:
- the first model part and the second model part may be identical.
- the second model part has a different shape from the first model part
- the second model part is made of a different material from the first model part.
- the second database can comprise fewer data than the first database.
- Figures 1A and 1B schematize the implementation of eddy current measurements on a conductive part.
- Figure 2A shows the structure of a convolutional neural network.
- FIG. 2B shows the main steps of a method according to the invention.
- FIG. 3A shows a defect considered during a first example.
- FIG. 3B is an example of an image resulting from simulated measurements on the defect shown in FIG. 3A.
- FIG. 3C presents classification performances of a neural network, considering measurements whose signal-to-noise ratio is respectively 5 dB (left), 20 dB (center), and 40 dB (right).
- Figure 3D shows classification performance of a neural network according to the invention, considering measurements whose signal-to-noise ratio is respectively 5 dB (left), 20 dB (center), and 40 dB (right).
- FIGS. 4A and 4B show a defect considered during a second example.
- Figure 4C presents comparative classification performance
- FIG. 5 schematizes a variant of the invention.
- FIGS. 1A and 1B schematize an example of application of the invention.
- a sensor 1 is arranged facing a part to be inspected 10, so as to characterize the part. It may in particular be a thermal or mechanical characterization.
- by thermal or mechanical characterization is meant a characterization of the thermal or mechanical properties of the part: temperature, deformation, structure.
- the part to be characterized can be a monolithic part, or a more complex part resulting from an assembly of several elementary parts, for example an airfoil of an airplane wing or a skin of a fuselage.
- the characterization may consist in detecting the possible presence of a structural defect 11.
- the sensor 1 is configured to take measurements of eddy currents generated in the part to be inspected 10.
- the latter is an electrically conductive part.
- the principle of non-destructive measurements by eddy currents is known. Under the effect of excitation by a magnetic field 12, eddy currents 13 are induced in the part
- sensor 1 is a coil, powered by an amplitude-modulated current. It generates a magnetic field 12, the field lines of which are shown in Figure 1A. Eddy currents 13 are induced, forming current loops in the part 10. The eddy currents 13 generate a reaction magnetic field 14, the latter acting on the impedance of the coil 1. Thus, the measurement of the impedance of coil 1 is representative of the eddy currents 13 formed in part 10.
- in the presence of a defect, the eddy currents 13 are modified, which results in a variation in the impedance of the coil.
- the measurement of the impedance of the coil constitutes a signature of the fault.
- sensor 1 acts both as a source of excitation of part 10 and as a sensor of the reaction of the part in response to the excitation.
- the sensor is moved along the part, parallel to it.
- the part extends along an X axis and a Y axis.
- the sensor can be moved parallel to each axis, which is materialized by the double arrows.
- a matrix of sensors can be implemented.
- a series of measurements is thus available, forming a spatial distribution, preferably two-dimensional, of a measured quantity, in this case the impedance of sensor 1. It is usual to distinguish the real and imaginary parts of the impedance.
- the measured impedance is generally compared with an impedance in the absence of a defect, so as to obtain a map of the impedance variation ΔZ.
- a measurement matrix M can be formed, representing the real part or the imaginary part of the impedance measured at each measurement point.
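- as an illustration, the numpy sketch below shows how such measurement matrices could be assembled from a scan; the scan size matches the 41 × 46 grid used later in the first example, but the random values standing in for the impedance readings, and all variable names, are illustrative rather than taken from the patent:

```python
# Illustrative numpy sketch: building the real/imaginary measurement matrices M from
# impedance readings acquired on a regular X-Y scan (values here are random placeholders).
import numpy as np

n_x, n_y = 41, 46                                   # number of scan positions along X and Y
rng = np.random.default_rng(0)

z_measured = rng.normal(size=(n_x, n_y)) + 1j * rng.normal(size=(n_x, n_y))   # impedance with the part under test
z_reference = rng.normal(size=(n_x, n_y)) + 1j * rng.normal(size=(n_x, n_y))  # impedance without a defect

delta_z = z_measured - z_reference                  # impedance variation at each measurement point

m_real = delta_z.real                               # measurement matrix: real part
m_imag = delta_z.imag                               # measurement matrix: imaginary part

# Stacked as two channels, the matrices form the image fed to the convolutional network.
input_image = np.stack([m_real, m_imag], axis=0)    # shape (2, 41, 46)
```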
- the sensor 1 is connected to a processing unit 2, comprising a memory 3 comprising instructions to enable the implementation of a measurement processing algorithm, the main steps of which are described below.
- the processing unit is usually a computer, connected to a screen 4.
- the measurement matrix M corresponds to a spatial distribution of the response of the part to excitation, each measurement being a signature of the part.
- An inversion must be carried out, so as to be able to determine whether a defect is present and, if necessary, to characterize it.
- the inversion is performed by the processing algorithm implemented by the processing unit 2.
- by structural defect is understood a mechanical defect affecting the part. It may in particular be a crack, or a delamination, or a perforation, forming for example a through hole, or a crack propagating from a hole, or the presence of an abnormally porous zone.
- the structural defect can also be a presence of an inclusion of an undesirable material, or of a corroded zone.
- the structural defect may affect the surface of part 10 located opposite the sensor. It can also be located deep within the part. The type of sensor used is selected according to the defect to be characterized.
- the part to be checked 10 can be made of a composite material. It then comprises components assembled to one another. It can be an assembly of plates or fibers.
- the defect may be an assembly defect: it may be a local delamination of plates, or a decohesion of fibers or fiber strands, or non-uniformity in the orientation of fibers, or a defect resulting from an impact or shock.
- the part to be inspected 10 has mechanical properties that are spatially distributed along the part.
- the defect may be a variation in the spatial distribution of the mechanical properties with respect to a reference spatial distribution. It may for example be a part of the part, in which the mechanical properties do not correspond to reference mechanical properties, or are outside a tolerance range.
- the mechanical property can be the Young's modulus, or the density, or a speed of propagation of an acoustic wave.
- the reference spatial distribution can come from a specification, and correspond to an objective to be achieved. It can result from a model or from experimental measurements.
- the preceding paragraph also applies to electrical or magnetic properties, or else to a stress to which the part to be inspected is exposed. It may for example be a temperature stress or a mechanical stress, for example a pressure stress, to which the part is subjected during its operation.
- the characterization of the part can consist in establishing a temperature of the part, or a level of mechanical stress (force, pressure, deformation) to which the part is subjected.
- the characterization can also consist in establishing a spatial distribution of a temperature of the part or, more generally, of a stress to which the part is subjected.
- the defect is then a difference between the spatial distribution and a reference spatial distribution.
- a defect can be the appearance of a hot spot, at which the temperature of the part is abnormally high compared to a reference spatial distribution of the temperature.
- the characterization of the defect aims to determine the type of defect, among the types mentioned above. It may also include a location of the fault, as well as an estimate of all or part of its dimensions.
- the processing of the measurements can be carried out by implementing a supervised artificial intelligence algorithm, for example based on a neural network.
- the learning of the algorithm can be carried out by constituting a database formed from measurements carried out or simulated on a part comprising a defect whose characteristics are known: type of defect, dimensions, location, possibly porosity or other physical quantity to characterize the defect.
- the database can be established by simulation, using dedicated simulation software.
- An example of dedicated software is the CIVA software (supplier: Extende), which notably makes it possible to simulate different non-destructive testing methods: ultrasound propagation, eddy current effects and X-ray radiography.
- Such software makes it possible to simulate measurements from a model of the part.
- the use of such software can make it possible to constitute a database allowing the learning of the artificial intelligence algorithm used.
- FIG. 2A schematizes the architecture of such a convolutional neural network.
- the convolutional neural network includes a feature extraction block Ai, connected to a processing block Bi.
- the processing block is configured to process the features extracted from the extraction block Ai.
- the processing block Bi is a classification block. It allows a classification based on the features extracted by the extraction block Ai.
- the convolutional neural network is fed by input data Ain, which correspond to one or more images.
- the input data form an image, obtained by a concatenation of two images representing respectively the real part and the imaginary part of the impedance variation ΔZ measured at different measurement points, regularly distributed facing the part, according to a matrix arrangement.
- the feature extraction block Ai comprises J layers C1 ... Cj ... CJ downstream of the input data, J being an integer greater than or equal to 1.
- each layer Cj is obtained by applying a convolution filter to the images of the previous layer Cj-1.
- the index j is the rank of each layer.
- the layer C0 corresponds to the input data Ain.
- the parameters of the convolution filters applied to each layer are determined during training.
- the last layer CJ can include a number of terms exceeding several tens, or even several hundreds or thousands. These terms correspond to features extracted from each image forming the input data.
- the method may include dimension reduction operations, for example operations usually designated by the term “pooling”. This involves replacing the values of a group of pixels by a single value, for example the average, or the maximum value, or the minimum value of the group considered.
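- a minimal numpy illustration of such a pooling operation (the 4 × 4 feature map and the 2 × 2 grouping are arbitrary):

```python
# Illustrative 2x2 pooling: each 2x2 group of pixels is replaced by a single value,
# here the maximum or the mean of the group.
import numpy as np

feature_map = np.arange(16, dtype=float).reshape(4, 4)

blocks = feature_map.reshape(2, 2, 2, 2).swapaxes(1, 2)   # 2x2 grid of 2x2 blocks
max_pooled = blocks.max(axis=(2, 3))                      # max pooling
mean_pooled = blocks.mean(axis=(2, 3))                    # average pooling

print(max_pooled)    # [[ 5.  7.]
                     #  [13. 15.]]
print(mean_pooled)   # [[ 2.5  4.5]
                     #  [10.5 12.5]]
```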
- the last layer CJ is the subject of an operation usually designated by the term “Flatten” (flattening), so that the values of this layer form a vector.
- the classification block Bi is an interconnected neural network, usually designated by the term “fully connected” or multilayer perceptron. It has a Bin input layer, and a Bout output layer.
- the input layer Bin is formed by the characteristics of the vector resulting from the extraction block Ai. Between the input layer Bin and the output layer Bout, one or more hidden layers H can be provided. There is thus successively the input layer Bin, each hidden layer H and the output layer Bout.
- Each layer can be assigned a rank k.
- Each layer comprises nodes, the number of nodes of a layer being able to be different from the number of nodes of another layer.
- the value of a node y_{n,k} of a layer of rank k is such that: y_{n,k} = f_n( Σ_{m=1}^{M_{k-1}} ( w_{mn} · y_{m,k-1} + b_m ) )
- where y_{m,k-1} is the value of the node of order m of the previous layer, of rank k-1, m being an integer between 1 and M_{k-1}, M_{k-1} corresponding to the dimension of the previous layer of rank k-1; b_m is a bias associated with each node y_{m,k-1} of the previous layer; f_n is an activation function associated with the node of order n of the layer considered, n being an integer between 1 and M_k, M_k corresponding to the dimension of the layer of rank k; w_{mn} is a weighting term between the node of order m of the previous layer (rank k-1) and the node of order n of the layer considered (rank k).
- each activation function f_n is determined by a person skilled in the art. It may for example be an activation function of hyperbolic tangent or sigmoid type.
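- as an illustration, the following numpy lines evaluate one layer according to the formula given above; the layer dimensions, weights and biases are random placeholders:

```python
# Illustrative evaluation of one layer: y_{n,k} = f_n( sum_m ( w_{mn} * y_{m,k-1} + b_m ) ).
import numpy as np

rng = np.random.default_rng(1)
m_prev, m_curr = 6, 4                        # dimensions of the layers of rank k-1 and k

y_prev = rng.normal(size=m_prev)             # node values y_{m,k-1} of the previous layer
w = rng.normal(size=(m_prev, m_curr))        # weighting terms w_{mn}
b = rng.normal(size=m_prev)                  # biases b_m associated with the previous layer's nodes
f = np.tanh                                  # hyperbolic-tangent activation function

y_curr = f(np.array([np.sum(w[:, n] * y_prev + b) for n in range(m_curr)]))
print(y_curr.shape)                          # (4,)
```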
- the output layer Bout comprises values making it possible to characterize a defect identified by the images of the input layer Ain. It constitutes the result of the inversion carried out by the algorithm.
- the output layer may have only one node, taking the value 0 or 1 depending on whether the analysis reveals the presence of a defect or not.
- the output layer can comprise as many nodes as defect types considered, each node corresponding to a probability of presence of a type of defect among predetermined types (crack, hole, delamination, etc.).
- the output layer can comprise as many nodes as dimensions of a defect, which supposes taking into account a geometric model of defect.
- the output layer can include coordinates indicating the position, two-dimensional or three-dimensional, of a defect in the part.
- the output layer can contain information about the inspected part, for example a spatial distribution of mechanical (for example Young's modulus), electrical, magnetic, thermal (for example temperature) or geometric (for example at least one dimension of the part) properties.
- the dimension of the output layer corresponds to a number of points of the part in which the mechanical property is estimated on the basis of the input data.
- the applications can be combined, so as to obtain both location and dimensioning, or location, identification and dimensioning.
- the extraction block Ai can be established, for a measurement modality, using learning that is as exhaustive as possible, called first learning, taking into account a large database.
- the classification block can be adapted to different specific cases, in which the measurement modality is implemented.
- the invention makes it possible to parameterize different classification blocks Bi, B2, for different applications, while keeping the same extraction block Ai.
- the extraction block Ai and a first classification block Bi are parameterized.
- the first learning phase is implemented using a first DBi database, established according to a first configuration, by considering a first model part.
- the method includes the use of a second database DB2, on the basis of which a second learning is carried out.
- the second database is established according to a second configuration, different from the first configuration.
- the first configuration is parameterized by different parameters Pi, i being an integer identifying each parameter.
- These parameters may include in particular:
- P1: constitution of the database: experimental data or simulated data, or simulated data with different levels of precision or fidelity;
- P2: shape of the model part considered;
- P3: composition of the material of the model part;
- P4: measurement conditions, for example acquisition time, positioning of the sensor relative to the part, references of the sensor used or modeled, environmental parameters (temperature, humidity, possibly pressure), measurement noise, type of processing performed on the measurements to estimate a property of the part, whether it is an electrical, magnetic or mechanical property;
- the first database DBi comprises different images, representative of measurements carried out or simulated on the first model part, by varying certain parameters, called variable parameters, while the other parameters are fixed for all the images of the first database.
- for example, the first database DBi is formed by only varying the parameter P8, representing the dimensions of the defect considered, while the parameters P1 to P7 are constant.
- variable parameters correspond to the characteristics intended to be estimated by the neural network.
- the first database DBi may comprise a first number of images Ni which may exceed several hundreds, or even several thousands.
- the first neural network, formed by the combination of the blocks Ai and Bi is then supposed to present a satisfactory prediction performance.
- An important element of the invention is to be able to use the first learning to perform the second learning, according to a different configuration.
- by different configuration, it is understood that at least one of the fixed parameters of the first configuration is modified.
- the following examples show different possibilities for modifying a parameter. Modification of the first parameter P1: the second learning is carried out by considering experimental measurements, in situ, whereas the first learning is carried out by considering simulated measurements or measurements carried out under laboratory conditions (or any other combination of experimental and simulated measurements). According to one possibility, the levels of precision of the first database DBi and of the second database DB2 are different.
- the first learning and the second learning are carried out based on simulated measurements with a respectively low and high level of precision:
- the first database can be obtained by a first analytical model, fast but not very precise, while the second database can result from a semi-analytical, numerical or stochastic model, slower to implement but more precise.
- the first database can be obtained with a sensor generating less precise measurements than the sensor used to constitute the second database.
- the number of sensors used (or simulated) can be different during the constitution of each database.
- Modification of the second parameter P2: the second learning is carried out by considering a second model part, the shape of which is different from the first model part: the second model part can be rounded or curved while the first model part is flat.
- the first learning can for example be performed on a model part whose geometry is simple, easily modeled, or easy to manufacture.
- the second learning can then be carried out by considering a part whose geometry is more complex, and corresponds more to reality.
- Modification of the third parameter P3: the second learning is carried out by considering a second model part whose material has a different composition from the first model part: it can for example be a different alloy, or a realistic alloy, exhibiting a certain variability compared to a theoretical alloy considered during the first learning. The same reasoning applies to a composite material.
- Modification of the fourth parameter P4: the measurement conditions taken into account in the second learning are different from those of the first learning.
- the position of the sensor relative to the model part is different, or the temperature or humidity to which the sensor is exposed is different.
- the type of sensor may also be different.
- the second learning can be carried out by taking into account a realistic response of the sensor, including for example taking into account the measurement noise.
- the first database can be obtained by using a sensor (or a simulation) assigned a level of noise and/or uncertainty different from the sensor making it possible to obtain the second database.
- Modification of the fifth parameter P5: the second learning is carried out by considering a number and/or a type of defect different from that considered in the first learning.
- the first learning is carried out by fixing certain parameters. At least one of these parameters is modified during the second training, to constitute the second DB2 database.
- An important aspect of the invention is that during the second learning, the extraction block Ai, resulting from the first learning, is retained. It is considered that the first learning is sufficiently exhaustive for the performance of the extraction block, in terms of extraction of characteristics of the images supplied as input, to be considered sufficient.
- the extraction block can then be used during the second learning. In other words, the characteristics extracted by the block Ai constitute a good descriptor of the measurements forming the input layer.
- the second learning is thus limited to an update of the parameterization of the classification block, so as to obtain a second classification block B2 adapted to the configuration of the second learning.
- the second learning can then be implemented with a second DB2 database comprising less data than the first database.
- the second classification block B2 can be initialized by taking into account the parameters governing the first classification block Bi.
- the second classification block B2 may comprise the same number of hidden layers as the first classification block Bi. Its layers can have the same number of nodes as the layers of the first classification block.
- the dimension of the output layer depends on the characteristics of the defect to be estimated. Also, the dimension of the output layer of the second classification block B2 may be different from that of the first classification block Bi.
- the number of hidden layers and/or the dimension of the hidden layers of the second classification block is different from the number of hidden layers and/or the dimension of the hidden layers of the first classification block.
- the advantage of the invention is that with a sufficiently complete first learning, the second learning can be established by considering a number of data significantly lower than the number of data used to carry out the first learning. By significantly lower number of data, we mean at least 10 times or even 100 times less data.
- the second database DB2, formed to establish the second training, is smaller than the first database DBi.
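- by way of illustration, the PyTorch sketch below reproduces this second learning scheme: the extraction block resulting from the first learning is kept frozen and only a new classification block is optimized on a small second database. Layer sizes, variable names and the random data standing in for DB2 are assumptions, not taken from the patent:

```python
# Illustrative PyTorch sketch of the second learning: the extraction block is retained
# (frozen) and only the classification block is parameterized on the small database DB2.
import torch
import torch.nn as nn

extraction_block = nn.Sequential(             # stands for the block Ai obtained from the first learning
    nn.Conv2d(2, 16, 3), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
)
for p in extraction_block.parameters():
    p.requires_grad = False                   # the first learning is retained: Ai is not updated

classification_block_2 = nn.Sequential(       # stands for the block B2 configured during the second learning
    nn.LazyLinear(128), nn.ReLU(),
    nn.Linear(128, 8),                        # output layer: here 8 defect characteristics
)

cnn2 = nn.Sequential(extraction_block, classification_block_2)
_ = cnn2(torch.zeros(1, 2, 41, 46))           # dry run so the lazy layer infers its input size

optimizer = torch.optim.Adam(classification_block_2.parameters(), lr=1e-3)  # B2 only
loss_fn = nn.MSELoss()

db2_images = torch.randn(40, 2, 41, 46)       # placeholder for the (small) second database
db2_targets = torch.randn(40, 8)              # known defect characteristics for those images

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(cnn2(db2_images), db2_targets)
    loss.backward()
    optimizer.step()
```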
- the method allows an implementation of a first learning in laboratory conditions, on the basis of simulations or optimized experimental conditions.
- This first learning is followed by a second learning closer to the reality of the field: taking into account experimental measurements, and/or more realistic measurement conditions, or a more complex shape or constitution of the part.
- the invention makes it possible to limit the number of measurements necessary for the second learning, while making it possible to obtain a neural network exhibiting good prediction performance. This is an important advantage, since acquiring measurements under realistic conditions is usually more complex than obtaining measurements in the laboratory or simulated measurements.
- the second learning can allow the taking into account of non-modelable specificities, for example measurement noise, or variations relative to the composition or to the shape of the part.
- Another advantage of the invention is to be able to use a first learning, carried out on a part made of a certain material, to carry out a second “frugal” learning, on a similar part, of a different material.
- the first learning can be perceived as a general learning, being suitable for different particular applications, or for different types or shapes of parts, or for different types of defects. It is essentially intended to have an extraction block Ai making it possible to extract relevant characteristics from the input data.
- the second learning is more targeted learning, on a particular application, or on a particular type of part, or on a particular type of defect.
- the invention facilitates obtaining the second training, because it requires significantly less input data than the first training.
- the same first learning can be used to perform different second learnings, corresponding respectively to different configurations.
- the first training can be performed on a first database that is relatively easy to obtain, compared to the second database. This makes it possible to provide a first more exhaustive database, taking into account for example a great variability in the dimensions and/or in the shape of the defect.
- The main steps of the invention are schematized in FIG. 2B.
- Step 100: constitution of the first database DBi.
- the first DBi database is formed according to a first configuration.
- the first configuration is parameterized by first parameters, some of these first parameters being fixed.
- the first database is formed of images representative of measurements obtained (performed or simulated) according to the first configuration.
- Step 110: first learning.
- the first database is used to parameterize the blocks Ai and Bi, so as to optimize the prediction performance of a first convolutional neural network CNNi.
- Step 120: constitution of the second database DB2.
- the second database DB2 is formed according to a second configuration. As previously described, at least one parameter, considered fixed during the first configuration, is modified.
- the size of the second database is preferably at least 10 times smaller than the size of the first database.
- Step 130: second learning.
- the second database DB2 is used to train a second convolutional neural network CNN2, formed by the first extraction block Ai, resulting from the first training, and a second classification block B2, specific to the configuration adopted during the second learning.
- the configuration relating to the second learning can correspond to conditions considered to be close to the measurement conditions.
- the convolutional neural network CNN2 resulting from the second training is intended to be implemented to interpret measurements carried out on examined parts. This is the subject of the next step.
- Step 200: performing measurements.
- Measurements are taken, on an examined part, according to the measurement configuration considered during the second learning.
- Step 210: interpretation of the measurements.
- the convolutional neural network CNN2, resulting from the second learning, is used to estimate the characteristics of a defect possibly present in the part examined, from the measurements carried out during step 200. These characteristics can be established from the output layer Bout of the convolutional neural network CNN2. This network is therefore used to perform the step of inverting the measurements.
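- a minimal sketch of this interpretation step (an untrained stand-in network is built so the snippet runs standalone; in practice the weights resulting from the second learning would be used):

```python
# Illustrative inversion of new measurements with the network resulting from the second learning.
import torch
import torch.nn as nn

cnn2 = nn.Sequential(                         # stand-in for CNN2; real trained weights would be loaded here
    nn.Conv2d(2, 16, 3), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 8),
)
cnn2.eval()

new_image = torch.randn(1, 2, 41, 46)         # real/imaginary impedance-variation maps from step 200
with torch.no_grad():
    characteristics = cnn2(new_image)         # one value per estimated defect characteristic
print(characteristics.shape)                  # torch.Size([1, 8])
```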
- FIG. 3A represents a simple defect, of the T-crack type, having 8 position or dimension characteristics: characteristics X1, X2 and X4 are lengths or widths of two branches along a plane PXY; the characteristics X5 and X6 are depths of the two branches perpendicular to the plane PXY; the characteristic X3 is an angular characteristic; characteristics X7 and X8 are characteristics of position of the defect in the plane PXY.
- Measurements carried out according to an eddy current modality were simulated, by describing a scan consisting of 41 × 46 measurement points at a distance of 0.3 mm above the part.
- the part was a flat metal part.
- the regular plot represented in FIG. 3A illustrates the movement of the sensor along the part, parallel to the plane PXY.
- the image formed in FIG. 3B is an image of the real part of the impedance variation ΔZ.
- the variation in impedance corresponds to a difference, at each measurement point, between respectively simulated impedances in the presence and in the absence of a fault in the part.
- a first learning was carried out on the basis of simulations. During the first training, 2000 images were used, taking into account a very low noise level (signal-to-noise ratio of 40 dB). Each input image is a concatenation of an image of the real part and of an image of the imaginary part of the impedance variation ΔZ measured at each measurement point. During this learning, the dimensions of the defect were varied, the shape remaining the same.
- the first learning made it possible to parameterize a first convolutional neural network CNNi, comprising a first extraction block Ai and a first classification block Bi as previously described.
- the input layer comprises two images, corresponding respectively to the real part and to the imaginary part of the impedance variation ΔZ at the different measurement points.
- the extraction block Ai comprises four convolution layers C1 to C4 such that:
- C1 is obtained by applying 32 convolution kernels of dimensions 5 × 5 to the two input images.
- C2 is obtained by applying 32 convolution kernels of dimensions 3 × 3 to the layer C1.
- C3 is obtained by applying 64 convolution kernels of dimensions 3 × 3 to the layer C2.
- a Maxpooling operation (grouping according to a maximum criterion) by groups of 2 × 2 pixels is performed between layers C2 and C3 as well as between layer C3 and layer C4, the latter being converted into a vector of dimension 1024.
- the vector of dimension 1024 from the extraction block Ai constitutes the input layer Bin of a fully connected classification block Bi, comprising a single hidden layer H (512 nodes), connected to an output layer Bout.
- the latter is a vector of dimension 8, each term corresponding respectively to an estimate of the dimensions X1 to X8.
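- the PyTorch sketch below follows this layer description; the kernel counts and sizes, the 512-node hidden layer and the 8-dimensional output come from the text, whereas the padding and the details of layer C4 are not specified, so the flattened dimension is inferred automatically rather than fixed at 1024:

```python
# Sketch of the example-1 architecture as described above (unspecified details are assumed).
import torch
import torch.nn as nn

extraction_block_a1 = nn.Sequential(
    nn.Conv2d(2, 32, kernel_size=5), nn.ReLU(),   # C1: 32 kernels 5x5 applied to the 2-channel input
    nn.Conv2d(32, 32, kernel_size=3), nn.ReLU(),  # C2: 32 kernels 3x3
    nn.MaxPool2d(2),                              # 2x2 max-pooling between C2 and C3
    nn.Conv2d(32, 64, kernel_size=3), nn.ReLU(),  # C3: 64 kernels 3x3
    nn.MaxPool2d(2),                              # 2x2 max-pooling between C3 and C4
    nn.Flatten(),                                 # C4 flattened into a vector
)

classification_block_b1 = nn.Sequential(
    nn.LazyLinear(512), nn.ReLU(),                # hidden layer H with 512 nodes
    nn.Linear(512, 8),                            # output layer: estimates of X1 to X8
)

cnn1 = nn.Sequential(extraction_block_a1, classification_block_b1)

x = torch.randn(4, 2, 41, 46)                     # concatenated real/imaginary maps over the 41 x 46 scan
print(cnn1(x).shape)                              # torch.Size([4, 8])
```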
- the first CNNi convolutional neural network was tested to estimate the 8 dimensional parameters X1 to X8 shown in Figure 3A.
- simulated test images were used, representative of experimental measurements with three levels of signal to noise ratio (SNR), respectively 5dB (low signal to noise ratio), 20 dB (medium signal to noise ratio), and 40 dB (high signal-to-noise ratio).
- the signal-to-noise ratio was simulated by adding Gaussian white noise to the matrices (or images) forming the input layer of the neural network.
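- a minimal numpy sketch of this noise model, in which the target signal-to-noise ratio in dB sets the variance of the added Gaussian white noise (the image content is a random placeholder):

```python
# Illustrative addition of Gaussian white noise at a prescribed signal-to-noise ratio (in dB).
import numpy as np

def add_white_noise(image: np.ndarray, snr_db: float, rng: np.random.Generator) -> np.ndarray:
    signal_power = np.mean(image ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))   # SNR_dB = 10 log10(Ps / Pn)
    noise = rng.normal(scale=np.sqrt(noise_power), size=image.shape)
    return image + noise

rng = np.random.default_rng(0)
clean = rng.normal(size=(2, 41, 46))              # stand-in for a real/imaginary image pair
noisy_images = {snr: add_white_noise(clean, snr, rng) for snr in (5.0, 20.0, 40.0)}
```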
- Figure 3C shows the X1 dimension prediction performance using test images whose signal-to-noise ratio is respectively 5 dB (left), 20 dB (center) and 40 dB (right).
- the abscissa axis corresponds to the true values
- the ordinate axis corresponds to the values estimated by the neural network CNNi. It is observed that the prediction performances are not satisfactory when the signal-to-noise ratio does not correspond to that which was considered during learning. On the other hand, the estimation performance is satisfactory when the signal-to-noise ratio corresponds to that considered during learning.
- indicators relating to the prediction performance have been indicated: MAE (mean absolute error), MSE (mean squared error) and R2 (coefficient of determination).
- the inventors then trained a second neural network CNN2, using, in a second database, 20 simulated images taking into account a signal-to-noise ratio of 40 dB and 20 simulated images taking into account a signal-to-noise ratio of 5 dB, i.e. a total of 40 images.
- the second neural network CNN2 was parameterized by keeping the extraction block Ai of the first neural network CNNi. Only the classification block B2 of the second neural network was parameterized, keeping the same number of layers and the same number of nodes per layer as in the first neural network CNNi.
- FIG. 3D shows the estimation performance of the second neural network, relative to the estimation of the first dimension X1.
- Figure 3D is presented identically to Figure 3C: test images whose signal-to-noise ratio is 5 dB (left), 20 dB (center) and 40 dB (right). The estimation performances are correct whatever the signal-to-noise ratio considered.
- This first example demonstrates the relevance of the invention: it allows rapid adaptation of a neural network when passing from a first configuration to a second configuration obtained by modifying a parameter kept fixed during the first configuration, in this case the signal-to-noise ratio.
- the second neural network was parameterized using a database of 40 images, i.e. 50 times fewer than the database used during training of the first neural network.
- Example 2: during a second example, the inventors went from a first learning configuration, taking into account a defect of a first predetermined shape, to a second learning configuration, based on a second shape, different from the first shape.
- the second fault is shown in Figures 4A and 4B.
- the second complex defect is of the crack type forming three Ts presenting 23 position or dimension characteristics: the characteristics X1, X2, X3, X4, X5, X6 are lengths of branches along a plane PXY; the characteristics X7, X8, X9 and X10 are angles in the PXY plane;
- characteristics X11, X12, X13, X14, X15, X16, X17, X18, X19 and X20 are position characteristics; the characteristics X21, X22, X23, not represented in FIG. 4B, are the thicknesses of each T perpendicular to the plane PXY.
- the inventors have parameterized a reference neural network CNNref, by constituting a reference database DBref.
- the reference database DBref comprised 2000 images, each image being obtained by a concatenation of images of the real part and the imaginary part of the impedance variation, obtained by simulation of measurements, according to 89 × 69 measurement points regularly distributed according to a matrix mesh.
- the trajectory of the sensor has been shown in FIG. 4A.
- the reference neural network CNNref was a convolutional neural network, of a structure analogous to the neural networks CNNi and CNN2 described in connection with the first example. The only differences are: the dimension of the input layer, the latter comprising two images of dimension 89 × 69, corresponding respectively to the real part and to the imaginary part of the impedance variation ΔZ; the dimension of the output layer, comprising 23 nodes, each node corresponding to an estimate of a dimension X1 to X23.
- Curve (a) in Figure 4C shows the estimation performance (coefficient of determination) of the 23 dimensions using the reference neural network.
- the determination coefficients relating to each dimension were calculated on the basis of an application of the reference neural network to 400 different defects.
- the estimation performance is very good, which is not surprising since the reference neural network is a network specifically designed for this form of fault.
- the inventors compared the performance of the reference neural network with: on the one hand, an auxiliary neural network derived from a reduced database DBaux, established in the same way as the reference database DBref, but comprising only 50 images, corresponding to 50 sets of different dimensions; on the other hand, a neural network formed according to the invention.
- the reduced database DBaux was formed based on 50 different sets of features X1...X23.
- the auxiliary neural network was parameterized based on this reduced database.
- the structure of the auxiliary neural network was identical to that of the reference neural network CNNref.
- the classification performances are plotted in FIG. 4C (curve b). Unsurprisingly, the classification performance is poor, which is due to the “under-training” of the auxiliary neural network.
- the determination coefficients relating to each dimension were calculated on the basis of an application of the auxiliary neural network and of the neural network according to the invention to 400 different defects.
- a first neural network CNNi was configured, using a first database DBi comprising 2000 images resulting from simulations as described in connection with the first example, on a "simple" defect, as shown in FIG. 3A.
- the dimension of each image was 89 × 69.
- the input layer was formed of two images, respectively representing the real and imaginary parts of the impedance variation ΔZ measured at 89 × 69 measurement points. As in example 1, the measurement points are regularly distributed according to a matrix mesh.
- the dimensions X1 to X8 of the simple defect were varied.
- the inventors then used the auxiliary database DBaux, specific to the complex defect, as a second database DB2 to parameterize a second neural network CNN2, the latter using the extraction block Ai of the first neural network CNNi.
- the parameterization of the second neural network is then reduced to a parameterization of the classification block B2 of the second neural network CNN2.
- the latter corresponds to a neural network according to the invention.
- the classification block B2 of the convolutional neural network CNN2 was parameterized by modifying parameters considered to be fixed during the constitution of the extraction block Ai of the first convolutional neural network CNNi. In this case, it is the shape of the defect.
- the extraction block Ai of the first convolutional neural network CNNi is parameterized by taking into account a simple form of defect (the T-shaped defect represented in FIG. 3A), while the classification block B2 of the second convolutional neural network CNN2 is parameterized taking into account a different shape (the complex defect comprising three Ts represented in FIG. 4B).
- the second neural network CNN2 is a neural network according to the invention.
- the inventors implemented the second neural network on test images.
- the estimation performance of the neural network CNN2 is represented in FIG. 4C, curve (c). It can be observed that, despite a learning performed with the same database as the auxiliary network, which is a frugal database, the classification performances are superior to those of the auxiliary neural network. This confirms the advantage of the invention.
- the input data of the neural networks were formed of matrices resulting from simulations of measurements carried out according to the eddy current modality.
- the invention can be applied to other methods usually practiced in the field of non-destructive testing, provided that the input data is presented in matrix form, comparable to an image. More specifically, the other possible methods are: ultrasonic testing, in which an acoustic wave propagates through an examined part. This concerns ultrasound-type measurements, in which the measurements are generally representative of the reflection, by a defect, of an incident ultrasonic wave. This also concerns the propagation of guided ultrasonic waves.
- the physical quantities addressed are the properties of propagation of ultrasonic waves in the material, the presence of a defect resulting in a variation of the properties of propagation with respect to a part in the absence of a defect. It is possible to obtain representative images of the propagation of an ultrasonic wave along or through a part.
- Inspections by X-rays or gamma rays, according to which the examined part is subjected to irradiation by ionizing electromagnetic radiation.
- the presence of a defect results in a modification of the transmission properties of the irradiation radiation.
- the measurements make it possible to obtain images representative of the transmission of the irradiation radiation through the part examined.
- Thermography inspections, according to which the examined part is subjected to illumination by electromagnetic radiation in the infrared.
- the presence of a defect results in a modification of the reflection properties of the illuminating radiation.
- the measurements make it possible to obtain images representative of the reflection of the illumination radiation by the part examined.
- transducers are arranged on the part, each transducer being configured to emit or detect a bending wave propagating through the plate.
- the wave propagation parameters depend on the elastic properties of the part, and in particular on the density and the Young's modulus. These depend on the temperature of the part.
- the propagation parameters of the bending wave make it possible to estimate the temperature of the part.
- when different piezoelectric transducers are arranged at discrete points around a region of the part, several emitter/detector pairs can be defined. It is then possible to estimate a spatial distribution of the temperature in the region of the part delimited by the transducers, using reconstruction algorithms known to those skilled in the art.
- the defect may be an anomaly in the spatial distribution of the temperature with respect to a reference spatial distribution.
- a first extraction block Ai is implemented coupled with a reconstruction block B′i.
- the reconstruction block B′i is a block for processing the data extracted by the first extraction block Ai.
- This variant implements a first neural network CNN'i, of the auto-encoder type. As represented in FIG. 5, the first neural network comprises the extraction block Ai and the reconstruction block B′i.
- a neural network of the auto-encoder type is a structure comprising an extraction block Ai, called encoder, making it possible to extract relevant information from an input datum Ain, defined in a starting space.
- the input datum is thus projected into a space, called latent space.
- the information extracted by the extraction block is called code.
- the auto-encoder comprises a reconstruction block B′i, allowing reconstruction of the code, so as to obtain an output datum Aout, defined in a space generally identical to the starting space.
- the learning is carried out in such a way as to minimize an error between the input datum Ain and the output datum Aout.
- the code, extracted by the extraction block is considered to be representative of the main characteristics of the input data.
- the extraction block Ai allows compression of the information contained in the input datum Ain.
- the first neural network can in particular be of the convolutional auto-encoder type: each layer of the extraction block Ai results from the application of a convolution kernel to a preceding layer.
- the convolution layers Ci ... Cj have been represented, the layer Cj being the last layer of the extraction block Ai, comprising the code.
- the layers Di ... Dj of the processing block B′i have also been shown, the layer Dj corresponding to the output datum Aout.
- the reconstruction block B′i does not aim to determine the characteristics of a defect.
- the reconstruction block allows a reconstruction of the output datum Aout, on the basis of the code (layer Cj), the reconstruction being as faithful as possible to the input datum Ain.
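By way of illustration, the structure described above can be sketched as a convolutional auto-encoder in which the encoder plays the role of the extraction block Ai and the decoder that of the reconstruction block B′i. The sketch below assumes PyTorch; the layer sizes, the 64x64 input and the single training step are illustrative assumptions, not those of the patent.

```python
import torch
import torch.nn as nn

class ConvAutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Extraction block Ai (encoder): successive convolutions down to the code.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, stride=2, padding=1),  # 32x32 -> 16x16 (the code)
            nn.ReLU(),
        )
        # Reconstruction block B'i (decoder): transposed convolutions back to the input size.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(16, 8, kernel_size=2, stride=2),    # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(8, 1, kernel_size=2, stride=2),     # 32x32 -> 64x64
        )

    def forward(self, x):
        code = self.encoder(x)   # projection into the latent space
        return self.decoder(code), code

# One illustrative training step: minimise the error between input Ain and output Aout.
model = ConvAutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
a_in = torch.randn(4, 1, 64, 64)             # a batch of measurement "images" (synthetic)
a_out, code = model(a_in)
loss = nn.functional.mse_loss(a_out, a_in)
loss.backward()
optimizer.step()
print(a_out.shape, code.shape)               # (4, 1, 64, 64) and (4, 16, 16, 16)
```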
- the classification block Bi and the reconstruction block B′i serve the same purpose: to allow parameterization of the first extraction block Ai, which can then be used during the second learning to parameterize the second classification block B2.
- the method follows steps 100 to 210 previously described in connection with FIG. 2B.
- the first learning (step 110) consists in parameterizing the extraction block Ai. It can be performed on the basis of at least one first database.
- the use of an auto-encoder makes it possible to combine different first databases. Some databases are representative of healthy parts, without defects, while others are representative of parts with a defect. For example, it is possible to combine: databases gathering measurements taken on a healthy part at different temperatures, in order to learn the effect of a temperature variation on the measurements; and databases gathering measurements taken on a part with a defect at a constant temperature, in order to learn the effect of the presence of a defect on the measurements.
- steps 120 to 210 are performed as previously described. This involves carrying out a second learning, on the basis of the second database, so as to parameterize a classification block B2, by using the extraction block Ai resulting from the first learning.
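By way of illustration, the two learning phases can be sketched as follows, assuming PyTorch: a first learning that parameterizes the extraction block through the auto-encoder reconstruction task on combined first databases, followed by a second learning that reuses the trained extraction block to parameterize a classification block B2 on the second database. The dataset contents, sizes, labels and layer dimensions are synthetic placeholders, not data from the patent.

```python
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

# Extraction block (encoder) and reconstruction block (decoder); sizes are illustrative.
encoder = nn.Sequential(
    nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
)
decoder = nn.Sequential(
    nn.ConvTranspose2d(16, 8, 2, stride=2), nn.ReLU(),
    nn.ConvTranspose2d(8, 1, 2, stride=2),
)
autoencoder = nn.Sequential(encoder, decoder)

# First databases (synthetic placeholders): healthy parts at various temperatures and
# parts with a defect at constant temperature, combined into a single training set.
healthy_various_temperatures = TensorDataset(torch.randn(32, 1, 64, 64))
defective_constant_temperature = TensorDataset(torch.randn(32, 1, 64, 64))
first_loader = DataLoader(
    ConcatDataset([healthy_various_temperatures, defective_constant_temperature]), batch_size=8
)

# First learning: parameterize the extraction block through the reconstruction task.
opt1 = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
for (a_in,) in first_loader:
    loss = nn.functional.mse_loss(autoencoder(a_in), a_in)
    opt1.zero_grad()
    loss.backward()
    opt1.step()

# Second learning: reuse the trained extraction block (frozen here) and parameterize a
# classification block B2 on the second database (defect present / absent labels).
for p in encoder.parameters():
    p.requires_grad = False
classification_b2 = nn.Sequential(nn.Flatten(), nn.Linear(16 * 16 * 16, 2))

second_loader = DataLoader(
    TensorDataset(torch.randn(32, 1, 64, 64), torch.randint(0, 2, (32,))), batch_size=8
)
opt2 = torch.optim.Adam(classification_b2.parameters(), lr=1e-3)
for x, y in second_loader:
    logits = classification_b2(encoder(x))
    loss = nn.functional.cross_entropy(logits, y)
    opt2.zero_grad()
    loss.backward()
    opt2.step()
```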
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Chemical & Material Sciences (AREA)
- Biochemistry (AREA)
- Analytical Chemistry (AREA)
- Immunology (AREA)
- Pathology (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Theoretical Computer Science (AREA)
- Signal Processing (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Software Systems (AREA)
- Mathematical Physics (AREA)
- General Engineering & Computer Science (AREA)
- Electrochemistry (AREA)
- Chemical Kinetics & Catalysis (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Investigating Or Analyzing Materials By The Use Of Magnetic Means (AREA)
- Image Analysis (AREA)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR2008564A FR3113529B1 (en) | 2020-08-19 | 2020-08-19 | Part characterization process by non-destructive testing |
PCT/EP2021/073061 WO2022038233A1 (en) | 2020-08-19 | 2021-08-19 | Method for characterizing a part through non-destructive inspection |
Publications (1)
Publication Number | Publication Date |
---|---|
EP4200603A1 true EP4200603A1 (en) | 2023-06-28 |
Family
ID=74347143
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP21756011.9A Pending EP4200603A1 (en) | 2020-08-19 | 2021-08-19 | Method for characterizing a part through non-destructive inspection |
Country Status (5)
Country | Link |
---|---|
US (1) | US20230314386A1 (en) |
EP (1) | EP4200603A1 (en) |
CA (1) | CA3188699A1 (en) |
FR (1) | FR3113529B1 (en) |
WO (1) | WO2022038233A1 (en) |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2721111A1 (en) * | 1994-06-10 | 1995-12-15 | Icpi Lyon | Geometric and acoustic property determn. method for submerged layer of material in fluid such as air or water |
US10551297B2 (en) * | 2017-09-22 | 2020-02-04 | Saudi Arabian Oil Company | Thermography image processing with neural networks to identify corrosion under insulation (CUI) |
FR3075376B1 (en) * | 2017-12-14 | 2020-05-22 | Safran | NON-DESTRUCTIVE INSPECTION PROCESS FOR AN AERONAUTICAL PART |
- 2020
  - 2020-08-19 FR FR2008564A patent/FR3113529B1/en active Active
- 2021
  - 2021-08-19 EP EP21756011.9A patent/EP4200603A1/en active Pending
  - 2021-08-19 CA CA3188699A patent/CA3188699A1/en active Pending
  - 2021-08-19 US US18/042,087 patent/US20230314386A1/en active Pending
  - 2021-08-19 WO PCT/EP2021/073061 patent/WO2022038233A1/en unknown
Also Published As
Publication number | Publication date |
---|---|
FR3113529A1 (en) | 2022-02-25 |
FR3113529B1 (en) | 2023-05-26 |
US20230314386A1 (en) | 2023-10-05 |
CA3188699A1 (en) | 2022-02-24 |
WO2022038233A1 (en) | 2022-02-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019091705A1 (en) | Structural health monitoring for an industrial structure | |
EP3555585B1 (en) | Method and system for controlling the integrated health of a mechanical structure by diffuse elastic waves | |
CN109858408B (en) | Ultrasonic signal processing method based on self-encoder | |
CN107576633B (en) | Method for detecting internal defects of optical element by using improved 3PIE technology | |
EP4200604A1 (en) | Method for characterizing a part through non-destructive inspection | |
CN110286155B (en) | Damage detection method and system for multilayer composite material | |
Rao et al. | Quantitative reconstruction of defects in multi-layered bonded composites using fully convolutional network-based ultrasonic inversion | |
CN117540174B (en) | Building structure multi-source heterogeneous data intelligent analysis system and method based on neural network | |
Olivieri et al. | Near-field acoustic holography analysis with convolutional neural networks | |
Singh et al. | Real-time super-resolution mapping of locally anisotropic grain orientations for ultrasonic non-destructive evaluation of crystalline material | |
Ghafoor et al. | Non-contact detection of railhead defects and their classification by using convolutional neural network | |
Singh et al. | Deep learning based inversion of locally anisotropic weld properties from ultrasonic array data | |
WO2024074379A1 (en) | Method for bi-level optimisation of the location of sensors for detecting one or more defects in a structure using elastic guided wave tomography | |
EP4200603A1 (en) | Method for characterizing a part through non-destructive inspection | |
FR3075376A1 (en) | NON-DESTRUCTIVE CONTROL METHOD FOR AERONAUTICAL WORKPIECE | |
Gantala et al. | Automated defect recognition (ADR) for monitoring industrial components using neural networks with phased array ultrasonic images | |
Helvig et al. | Towards deep learning fusion of flying spot thermography and visible inspection for surface cracks detection on metallic materials | |
EP3140677B1 (en) | Method for processing seismic images | |
Masurkar et al. | Estimating the elastic constants of orthotropic composites using guided waves and an inverse problem of property estimation | |
FR3057357A1 (en) | METHOD AND DEVICE FOR DETECTING AND CHARACTERIZING REFLECTIVE ELEMENT IN AN OBJECT | |
EP3071992A1 (en) | Method for reconstructing a surface of a part | |
Ijjeh et al. | Delamination identification using global convolution networks | |
Kim | Identification of the local stiffness reduction of a damaged composite plate using the virtual fields method | |
Abruzzo et al. | Identifying Mergers Using Quantitative Morphologies in Zoom Simulations of High-Redshift Galaxies | |
Fisher et al. | Deep Learning Method Based on Denoising Autoencoders for Temperature Selection of Guided Waves Signals |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: UNKNOWN |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
| 17P | Request for examination filed | Effective date: 20230216 |
| AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| DAV | Request for validation of the european patent (deleted) | |
| DAX | Request for extension of the european patent (deleted) | |
| RAP3 | Party data changed (applicant data changed or rights of an application transferred) | Owner name: COMMISSARIAT A L'ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES |