CN112967272A - Welding defect detection method and device based on improved U-net and terminal equipment - Google Patents


Info

Publication number
CN112967272A
Authority
CN
China
Prior art keywords
segmentation
sample set
training sample
defect detection
welding
Prior art date
Legal status
Granted
Application number
CN202110320828.5A
Other languages
Chinese (zh)
Other versions
CN112967272B (en)
Inventor
杨磊
刘艳红
曾庆山
王怀鑫
霍本岩
李方圆
吴振龙
Current Assignee
Zhengzhou University
Original Assignee
Zhengzhou University
Priority date
Filing date
Publication date
Application filed by Zhengzhou University
Priority to CN202110320828.5A
Publication of CN112967272A
Application granted
Publication of CN112967272B
Legal status: Active
Anticipated expiration

Classifications

    • G06T7/0008 Industrial image inspection checking presence/absence
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06T5/10 Image enhancement or restoration using non-spatial domain filtering
    • G06T5/40 Image enhancement or restoration using histogram techniques
    • G06T7/11 Region-based segmentation
    • G06T9/002 Image coding using neural networks
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Analysing Materials By The Use Of Radiation (AREA)

Abstract

The invention relates to a welding defect detection method, a device and a terminal device based on an improved U-net. A training sample set is labeled to obtain labeled data, and a U-shaped segmentation network is trained on the training sample set and the labeled data. The segmentation encoder and segmentation decoder in the U-shaped segmentation network are linked by skip connections, which enhance contextual semantic information and improve the accuracy with which the segmentation network extracts features of tiny welding defects. A global attention mechanism is introduced into the U-shaped segmentation network to generate more discriminative features, so that the segmentation network focuses better on the feature expression of defect regions; this strengthens the network's representation of welding defect areas, reduces interference from the background image, and improves segmentation performance. Finally, the welding image to be detected is input into the trained semantic segmentation model for defect detection, which improves the accuracy of welding defect detection.

Description

Welding defect detection method and device based on improved U-net and terminal equipment
Technical Field
The invention relates to a welding defect detection method and device based on an improved U-net, and to a terminal device.
Background
Welding is indispensable in a wide range of production fields such as aircraft, automobile, and ship manufacturing. In a complex welding process, however, the quality of the weld is affected by many factors, including welding current, welding voltage, welding speed, and nozzle temperature. Welding defects therefore frequently arise in robotic welding; they not only degrade the structural strength and performance of a product, but also spoil its appearance and introduce unpredictable safety hazards.
Weld quality inspection is a key link in intelligent welding robotics. X-ray non-destructive testing is developing rapidly; compared with other methods such as arc sensing and magneto-optical sensors, it performs better and can reveal the internal structure and defect information of different products, so welding defects can be identified and located more reliably. With the development of deep learning, segmentation methods based on deep learning have become increasingly capable of defect detection. However, because welding defects appear as small pixel clusters with fine edge structures, accurately detecting them in high-resolution images remains challenging.
Disclosure of Invention
In view of the above, the invention provides a welding defect detection method and device based on an improved U-net, and a terminal device.
A welding defect detection method based on improved U-net comprises the following steps:
acquiring a training sample set, wherein the training sample set comprises at least two welding defect sample images;
labeling the training sample set to obtain labeled data;
inputting the training sample set into a segmentation encoder of a U-shaped segmentation network, wherein the segmentation encoder performs feature extraction of defective parts through a plurality of convolutional layers and pooling layers, the output of the segmentation encoder is fed into a segmentation decoder of the U-shaped segmentation network, and the segmentation decoder performs upsampling through convolutional layers and upsampling layers and outputs a semantic segmentation map of the same size as the training samples; the segmentation encoder and the segmentation decoder in the U-shaped segmentation network are linked by skip connections, and a global attention mechanism is introduced into the U-shaped segmentation network; and computing the cross-entropy loss between the semantic segmentation map and the labeled data to optimize the parameters of the semantic segmentation model;
and inputting the welding image to be detected into the trained semantic segmentation model for defect detection.
More preferably, the input of each layer of the segmentation encoder is set as a fusion of three parts: the output of the layer above the current layer of the segmentation encoder, the output of the current layer of the segmentation encoder, and the output of the layer below the current layer of the segmentation encoder.
Preferably, after the training sample set is obtained, the welding defect detection method further includes:
preprocessing the training sample set.
Preferably, preprocessing the training sample set includes:
sequentially performing linear transformation, Gamma transformation, gradient histogram equalization, and image size conversion on the training sample set.
Preferably, labeling the training sample set to obtain labeled data includes:
labeling the training sample set with the Labelme tool to obtain preliminary labeling data;
and binarizing the preliminary labeling data to obtain binarized labeling data.
A welding defect detection device based on an improved U-net includes:
a training sample set acquisition module, configured to acquire a training sample set comprising at least two welding defect sample images;
a labeling module, configured to label the training sample set to obtain labeled data;
a network training module, configured to input the training sample set into a segmentation encoder of a U-shaped segmentation network, where the segmentation encoder performs feature extraction of defective parts through a plurality of convolutional layers and pooling layers, the output of the segmentation encoder is fed into a segmentation decoder of the U-shaped segmentation network, and the segmentation decoder performs upsampling through convolutional layers and upsampling layers and outputs a semantic segmentation map of the same size as the training samples; the segmentation encoder and the segmentation decoder in the U-shaped segmentation network are linked by skip connections, and a global attention mechanism is introduced into the U-shaped segmentation network; the cross-entropy loss between the semantic segmentation map and the labeled data is computed to optimize the parameters of the semantic segmentation model; and
a defect detection module, configured to input the welding image to be detected into the trained semantic segmentation model for defect detection.
A terminal device comprising a memory, a processor and a computer program stored in said memory and executable on said processor, said processor implementing the steps of the improved U-net based weld defect detection method as described above when executing said computer program.
The invention has the following beneficial effects. A training sample set is obtained and labeled to produce labeled data. The training sample set is input into the segmentation encoder of a U-shaped segmentation network; the encoder extracts features of defective parts through a plurality of convolutional and pooling layers, its output is fed into the segmentation decoder of the network, and the decoder performs upsampling through convolutional and upsampling layers and outputs a semantic segmentation map of the same size as the training samples. The segmentation encoder and segmentation decoder are linked by skip connections, which enhance contextual semantic information and improve the accuracy with which the network extracts features of tiny welding defects. A global attention mechanism is introduced into the network to generate more discriminative features, so that the network focuses better on the feature expression of defect regions; this strengthens the network's representation of welding defect areas, reduces interference from the background image, and improves segmentation performance. The semantic segmentation map and the labeled data are compared through a cross-entropy loss function to optimize the parameters of the semantic segmentation model, so that the training result gradually approaches the real data and a highly accurate semantic segmentation model is obtained. Finally, the welding image to be detected is input into the trained semantic segmentation model for defect detection, yielding a detection result of relatively high accuracy. The welding defect detection method based on the improved U-net can therefore improve the accuracy of welding defect detection.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below:
FIG. 1 is a schematic overall flow chart of a welding defect detection method based on improved U-net according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a model structure of an improved U-shaped segmentation network;
FIG. 3 is a schematic diagram of the structure of a global attention module;
FIG. 4 is a schematic overall structure diagram of a welding defect detection device based on an improved U-net according to a second embodiment of the present application;
fig. 5 is a schematic structural diagram of a terminal device according to a third embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
In order to explain the technical means described in the present application, the following description will be given by way of specific embodiments.
Referring to fig. 1, which is a flowchart of an implementation procedure of a welding defect detection method based on improved U-net provided in an embodiment of the present application, for convenience of description, only a part related to the embodiment of the present application is shown.
Step S101: obtaining a training sample set, wherein the training sample set comprises at least two welding defect sample images:
the method comprises the steps of obtaining a training sample set, wherein the training sample set is an unmarked initial sample set, the training sample set comprises at least two welding defect sample images, the specific number is set according to actual needs, and the more the number is, the more accurate the network model obtained by training is. It should be understood that the welding defect sample images in the training sample set may include sample images of different kinds of welding defects, and may also include normal welding sample images, i.e., sample images without welding defects.
In this embodiment, each sample image in the training sample set is acquired by imaging a weld with an X-ray device. In addition to the training sample set, a validation sample set and a test sample set can be obtained: the validation sample set is used to verify the performance of the trained network, and network parameters can be adjusted by cross-validation; the test sample set is used to test the performance of the trained network model. The validation sample set and the test sample set also each include at least two welding defect sample images.
In order to ensure that each sample image meets the training requirements, in this embodiment the welding defect detection method further includes, after the training sample set is obtained: preprocessing the training sample set. After preprocessing, the welding defect features in each sample image are well highlighted and the image size meets the requirements of the segmentation network.
The preprocessing procedure is set according to actual needs; this embodiment provides one specific procedure: linear transformation, Gamma transformation, gradient histogram equalization, and image size conversion are applied in sequence to each sample image in the training sample set. Image size conversion means converting each sample image to a uniform size for network training, for example 640 × 320. A possible implementation of this preprocessing chain is sketched below.
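The following Python/OpenCV code is a minimal sketch of such a preprocessing chain, not the implementation disclosed in the patent: the gain, bias, gamma value, and CLAHE settings are assumed values, and CLAHE is used here as one possible stand-in for the "gradient histogram equalization" step; only the 640 × 320 target size comes from the text.

```python
import cv2
import numpy as np

def preprocess(image: np.ndarray, gain: float = 1.2, bias: float = 10.0,
               gamma: float = 0.8, size: tuple = (640, 320)) -> np.ndarray:
    """Illustrative preprocessing chain for a grayscale X-ray weld image."""
    # Linear transformation: simple contrast stretch with assumed gain/bias.
    img = np.clip(gain * image.astype(np.float32) + bias, 0, 255).astype(np.uint8)
    # Gamma transformation applied via a lookup table.
    lut = np.array([((i / 255.0) ** gamma) * 255 for i in range(256)], dtype=np.uint8)
    img = cv2.LUT(img, lut)
    # Histogram equalization (CLAHE used here as one possible variant).
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    img = clahe.apply(img)
    # Resize to the uniform network input size mentioned above (width x height).
    return cv2.resize(img, size, interpolation=cv2.INTER_LINEAR)
```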
Step S102: labeling the training sample set to obtain labeled data:
To better detect defects in the welded workpiece, the welding defect regions of the training sample set are labeled accurately to complete the construction of the data set; the labeled data are the annotations of the defect regions in each sample image of the training sample set. In this embodiment, the training sample set is labeled with the Labelme tool to obtain preliminary labeling data in the form of a json file, and the preliminary labeling data are then binarized to obtain binarized labeling data, as sketched below.
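The conversion from a Labelme json file to a binarized mask can be sketched as follows. This is assumed illustrative code rather than the patent's own tooling: the function name, file names, and the choice of 255 as the foreground value are hypothetical, while the json fields used are the standard ones written by Labelme.

```python
import json
from PIL import Image, ImageDraw

def labelme_json_to_binary_mask(json_path: str, out_path: str) -> None:
    """Rasterize every labeled polygon into a binary (0/255) defect mask."""
    with open(json_path, "r", encoding="utf-8") as f:
        ann = json.load(f)
    h, w = ann["imageHeight"], ann["imageWidth"]
    mask = Image.new("L", (w, h), 0)                  # background = 0
    draw = ImageDraw.Draw(mask)
    for shape in ann["shapes"]:                       # one entry per labeled defect region
        polygon = [tuple(p) for p in shape["points"]]
        draw.polygon(polygon, outline=255, fill=255)  # defect pixels = 255
    mask.save(out_path)

# Example with hypothetical file names:
# labelme_json_to_binary_mask("weld_001.json", "weld_001_mask.png")
```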
Step S103: inputting the training sample set into a segmentation encoder of a U-shaped segmentation network, wherein the segmentation encoder performs feature extraction of defective parts through a plurality of convolutional layers and pooling layers; the output of the segmentation encoder is fed into a segmentation decoder of the U-shaped segmentation network, which performs upsampling through convolutional layers and upsampling layers and outputs a semantic segmentation map of the same size as the training samples; the segmentation encoder and the segmentation decoder in the U-shaped segmentation network are linked by skip connections, and a global attention mechanism is introduced into the U-shaped segmentation network; the cross-entropy loss between the semantic segmentation map and the labeled data is computed to optimize the parameters of the semantic segmentation model:
the present embodiment performs defect detection through a U-shaped partition network, which includes a partition encoder and a partition decoder. The split encoder comprises a multilayer convolutional layer and a pooling layer, and the split decoder comprises a multilayer convolutional layer and an upsampling layer. Moreover, the segmentation encoder and the segmentation decoder are connected in a jumping mode, namely, a jumping connection structure with different orders is added between the segmentation encoder and the segmentation decoder, so that the image context information can be enhanced, the feature extraction capability is improved, namely, the network segmentation capability for tiny defects is improved, and the input of each layer of the segmentation encoder is set to be fused into three parts: the output of the layer above the local layer of the partition encoder, the output of the local layer of the partition encoder, and the output of the layer below the local layer of the partition encoder are shown in fig. 2. Moreover, a global attention mechanism is introduced into the U-shaped segmentation network, so that the U-shaped segmentation network better focuses on the feature expression of the defect area, the interference of background factors is reduced, and the segmentation performance of the network is improved. Fig. 3 is a schematic structural diagram of a global attention module.
After the training sample set and the corresponding labeled data are obtained, the training sample set is input into the segmentation encoder, which extracts defect features at different scales through multiple convolutional and pooling layers. The encoder output is passed to the segmentation decoder, which performs upsampling through convolutional and upsampling layers to restore the image size, i.e., it outputs a semantic segmentation map of the same size as the training samples.
The obtained semantic segmentation map and the labeled data are compared through a cross-entropy loss function (i.e., repeated iterative training is performed) to optimize the parameters of the semantic segmentation model, so that the training result gradually approaches the real situation (i.e., the labeled data); the network parameters are saved after training finishes. In one specific embodiment, the initial learning rate is set to 1e-4 and training stops after 20000 network iterations, yielding the semantic segmentation model, i.e., the welding defect detection segmentation model. A minimal training-loop sketch is given below.
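The sketch below matches the quoted hyper-parameters (initial learning rate 1e-4, 20000 iterations, cross-entropy loss) but is otherwise an assumption: the Adam optimizer, batch size, and the model and dataset objects (`model`, `dataset`) are placeholders, not details given in the text.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train(model: nn.Module, dataset, iterations: int = 20000, lr: float = 1e-4,
          device: str = "cuda") -> None:
    """Iterative training driven by the cross-entropy loss, as described above."""
    model = model.to(device).train()
    loader = DataLoader(dataset, batch_size=4, shuffle=True, drop_last=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)   # optimizer choice assumed
    criterion = nn.CrossEntropyLoss()
    step, data_iter = 0, iter(loader)
    while step < iterations:
        try:
            images, masks = next(data_iter)    # masks: LongTensor of per-pixel class ids
        except StopIteration:
            data_iter = iter(loader)
            images, masks = next(data_iter)
        images, masks = images.to(device), masks.to(device)
        logits = model(images)                 # (N, num_classes, H, W)
        loss = criterion(logits, masks)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        step += 1
    torch.save(model.state_dict(), "weld_defect_segmentation.pth")
```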
The semantic segmentation model increases network context information and improves segmentation capability while keeping the network lightweight.
Step S104: inputting a welding image to be detected into a trained semantic segmentation model for defect detection:
after the semantic segmentation model is trained, the welding image to be detected is input into the trained semantic segmentation model, and the welding image to be detected is segmented through the semantic segmentation model, so that the welding defect in the welding image to be detected is accurately detected.
Fig. 4 is a block diagram of a welding defect detection apparatus based on an improved U-net according to the second embodiment of the present application, which corresponds to the welding defect detection method based on the improved U-net described in the foregoing method embodiment.
Referring to fig. 4, the welding defect detecting apparatus 200 based on the improved U-net includes:
a training sample set obtaining module 201, configured to obtain a training sample set, where the training sample set includes at least two welding defect sample images;
a labeling module 202, configured to label the training sample set to obtain labeled data;
the network training module 203 is used for inputting the training sample set into a segmentation encoder in a U-shaped segmentation network, wherein the segmentation encoder performs feature extraction of a defect part through a plurality of convolutional layers and pooling layers, the segmentation encoder outputs the training sample set into a segmentation decoder in the U-shaped segmentation network, performs upsampling through the convolutional layers and the upsampling layers, and outputs a semantic segmentation map with the same size as the training sample set; the segmentation coder and the segmentation decoder in the U-shaped segmentation network use jump connection, and a global attention mechanism is introduced into the U-shaped segmentation network; calculating the semantic segmentation graph and the labeled data through a cross entropy loss function, and optimizing parameters in a semantic segmentation model;
and the defect detection module 204 is configured to input the welding image to be detected into the trained semantic segmentation model for defect detection.
More preferably, the input of each layer of the segmentation encoder is set as a fusion of three parts: the output of the layer above the current layer of the segmentation encoder, the output of the current layer of the segmentation encoder, and the output of the layer below the current layer of the segmentation encoder.
More preferably, the welding defect detecting apparatus further includes:
and the preprocessing module is used for preprocessing the training sample set.
More preferably, the preprocessing module is specifically configured to: sequentially perform linear transformation, Gamma transformation, gradient histogram equalization, and image size conversion on the training sample set.
More preferably, the labeling module 202 is specifically configured to: label the training sample set with the Labelme tool to obtain preliminary labeling data, and binarize the preliminary labeling data to obtain binarized labeling data.
It should be noted that the information interaction and execution processes between the above devices/modules, and their specific functions and technical effects, are based on the same concept as the embodiment of the welding defect detection method based on the improved U-net; for details, reference may be made to that method embodiment, which is not repeated here.
It is clearly understood by those skilled in the art that, for convenience and brevity of description, the above-mentioned division of the functional modules is merely used as an example, in practical applications, the above-mentioned function allocation may be performed by different functional modules according to needs, that is, the internal structure of the welding defect detection apparatus 200 based on the improved U-net is divided into different functional modules to perform all or part of the above-mentioned functions. Each functional module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional modules are only used for distinguishing one functional module from another, and are not used for limiting the protection scope of the application. For the specific working process of each functional module, reference may be made to the corresponding process in the embodiment of the improved U-net-based welding defect detection method, which is not described herein again.
Fig. 5 is a schematic structural diagram of a terminal device according to a third embodiment of the present application. As shown in fig. 5, the terminal device 300 includes: a processor 302, a memory 301, and a computer program 303 stored in the memory 301 and operable on the processor 302. The number of the processors 302 is at least one, and fig. 5 takes one as an example. The processor 302, when executing the computer program 303, implements the implementation steps of the improved U-net based weld defect detection method described above, i.e., the steps shown in fig. 1.
The specific implementation process of the terminal device 300 can be seen in the embodiment of the welding defect detection method based on the improved U-net.
Illustratively, the computer program 303 may be partitioned into one or more modules/units that are stored in the memory 301 and executed by the processor 302 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 303 in the terminal device 300.
The terminal device 300 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a master controller, or may be a mobile terminal such as a mobile phone. The terminal device 300 may include, but is not limited to, a processor and a memory. Those skilled in the art will appreciate that fig. 5 is only an example of the terminal device 300 and does not constitute a limitation of the terminal device 300, which may include more or fewer components than those shown, combine some of the components, or use different components; for example, the terminal device 300 may further include input and output devices, network access devices, buses, etc.
The Processor 302 may be a CPU (Central Processing Unit), other general-purpose Processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), other Programmable logic device, a discrete Gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 301 may be an internal storage unit of the terminal device 300, such as a hard disk or internal memory. The memory 301 may also be an external storage device of the terminal device 300, such as a plug-in hard disk, an SMC (Smart Media Card), an SD (Secure Digital) card, or a Flash Card provided on the terminal device 300. Further, the memory 301 may include both an internal storage unit and an external storage device of the terminal device 300. The memory 301 is used to store an operating system, application programs, a boot loader, data, and other programs, such as the program code of the computer program 303. The memory 301 may also be used to temporarily store data that has been output or is to be output.
Embodiments of the present application further provide a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program may implement the steps in the above embodiment of the method for detecting welding defects based on improved U-net.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the embodiment of the welding defect detection method based on the improved U-net may be implemented by a computer program instructing related hardware. The computer program 303 may be stored in a computer-readable storage medium, and when executed by the processor 302 it implements the steps of that method embodiment. The computer program 303 comprises computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the photographing apparatus/terminal apparatus, a recording medium, computer memory, ROM (Read-Only Memory), RAM (Random Access Memory), an electrical carrier signal, a telecommunication signal, or a software distribution medium, such as a USB flash disk, a removable hard disk, or a magnetic or optical disk. In certain jurisdictions, computer-readable media may not include electrical carrier signals or telecommunication signals, in accordance with legislation and patent practice.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (7)

1. A welding defect detection method based on improved U-net is characterized by comprising the following steps:
acquiring a training sample set, wherein the training sample set comprises at least two welding defect sample images;
labeling the training sample set to obtain labeled data;
inputting the training sample set into a segmentation encoder of a U-shaped segmentation network, wherein the segmentation encoder performs feature extraction of defective parts through a plurality of convolutional layers and pooling layers, the output of the segmentation encoder is fed into a segmentation decoder of the U-shaped segmentation network, and the segmentation decoder performs upsampling through convolutional layers and upsampling layers and outputs a semantic segmentation map of the same size as the training samples; the segmentation encoder and the segmentation decoder in the U-shaped segmentation network are linked by skip connections, and a global attention mechanism is introduced into the U-shaped segmentation network; and computing the cross-entropy loss between the semantic segmentation map and the labeled data to optimize the parameters of the semantic segmentation model;
and inputting the welding image to be detected into the trained semantic segmentation model for defect detection.
2. The improved U-net based welding defect detection method of claim 1, wherein the input of each layer of the segmentation encoder is set as a fusion of three parts: the output of the layer above the current layer of the segmentation encoder, the output of the current layer of the segmentation encoder, and the output of the layer below the current layer of the segmentation encoder.
3. The improved U-net based weld defect detection method of claim 1, wherein after the obtaining of the training sample set, the weld defect detection method further comprises:
and preprocessing the training sample set.
4. The improved U-net based welding defect detection method according to claim 3, wherein preprocessing the training sample set comprises:
sequentially performing linear transformation, Gamma transformation, gradient histogram equalization, and image size conversion on the training sample set.
5. The improved U-net based welding defect detection method according to claim 1, wherein labeling the training sample set to obtain labeled data comprises:
labeling the training sample set with the Labelme tool to obtain preliminary labeling data;
and binarizing the preliminary labeling data to obtain binarized labeling data.
6. A welding defect detection device based on an improved U-net, characterized by comprising:
a training sample set acquisition module, configured to acquire a training sample set comprising at least two welding defect sample images;
a labeling module, configured to label the training sample set to obtain labeled data;
a network training module, configured to input the training sample set into a segmentation encoder of a U-shaped segmentation network, where the segmentation encoder performs feature extraction of defective parts through a plurality of convolutional layers and pooling layers, the output of the segmentation encoder is fed into a segmentation decoder of the U-shaped segmentation network, and the segmentation decoder performs upsampling through convolutional layers and upsampling layers and outputs a semantic segmentation map of the same size as the training samples; the segmentation encoder and the segmentation decoder in the U-shaped segmentation network are linked by skip connections, and a global attention mechanism is introduced into the U-shaped segmentation network; the cross-entropy loss between the semantic segmentation map and the labeled data is computed to optimize the parameters of the semantic segmentation model; and
a defect detection module, configured to input the welding image to be detected into the trained semantic segmentation model for defect detection.
7. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor when executing the computer program implements the steps of the improved U-net based welding defect detection method according to any of claims 1-5.
Application CN202110320828.5A, priority date 2021-03-25, filing date 2021-03-25: Welding defect detection method and device based on improved U-net and terminal equipment. Legal status: Active. Granted as CN112967272B (en).

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110320828.5A CN112967272B (en) 2021-03-25 2021-03-25 Welding defect detection method and device based on improved U-net and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110320828.5A CN112967272B (en) 2021-03-25 2021-03-25 Welding defect detection method and device based on improved U-net and terminal equipment

Publications (2)

Publication Number Publication Date
CN112967272A true CN112967272A (en) 2021-06-15
CN112967272B CN112967272B (en) 2023-08-22

Family

ID=76278390

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110320828.5A Active CN112967272B (en) 2021-03-25 2021-03-25 Welding defect detection method and device based on improved U-net and terminal equipment

Country Status (1)

Country Link
CN (1) CN112967272B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180177461A1 (en) * 2016-12-22 2018-06-28 The Johns Hopkins University Machine learning approach to beamforming
CN108447062A (en) * 2018-02-01 2018-08-24 浙江大学 A kind of dividing method of the unconventional cell of pathological section based on multiple dimensioned mixing parted pattern
US20200380695A1 (en) * 2019-05-28 2020-12-03 Zongwei Zhou Methods, systems, and media for segmenting images
WO2021003821A1 (en) * 2019-07-11 2021-01-14 平安科技(深圳)有限公司 Cell detection method and apparatus for a glomerular pathological section image, and device
CN110458849A (en) * 2019-07-26 2019-11-15 山东大学 A kind of image partition method based on characteristic modification
CN110648334A (en) * 2019-09-18 2020-01-03 中国人民解放军火箭军工程大学 Multi-feature cyclic convolution saliency target detection method based on attention mechanism
CN111402209A (en) * 2020-03-03 2020-07-10 广州中国科学院先进技术研究所 U-Net-based high-speed railway steel rail damage detection method
CN111539886A (en) * 2020-04-21 2020-08-14 西安交通大学 Defogging method based on multi-scale feature fusion
CN111681232A (en) * 2020-06-10 2020-09-18 厦门理工学院 Industrial welding image defect detection method based on semantic segmentation
CN111739030A (en) * 2020-06-15 2020-10-02 大连理工大学 Fundus image blood vessel segmentation method of semantic and multi-scale fusion network
CN111932501A (en) * 2020-07-13 2020-11-13 太仓中科信息技术研究院 Seal ring surface defect detection method based on semantic segmentation

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Muhammad Rameez ur Rahman et al.: "U-Net Based Defects Inspection in Photovoltaic Electroluminescence Images", 2019 IEEE International Conference on Big Knowledge *
Muhammad Rameez ur Rahman et al.: "U-Net Based Defects Inspection in Photovoltaic Electroluminescence Images", 2019 IEEE International Conference on Big Knowledge, 30 December 2019 (2019-12-30), page 4 *
Dai Yangyang (代洋洋) et al.: "UU-Net: Retinal vessel segmentation with a U-shaped multi-path network based on U-Net", Journal of Harbin Engineering University, vol. 41, no. 5

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113379728A (en) * 2021-07-02 2021-09-10 上海电气集团股份有限公司 Method, system, equipment and readable storage medium for detecting defects on surface of rail
CN113657383A (en) * 2021-08-24 2021-11-16 凌云光技术股份有限公司 Defect region detection method and device based on lightweight segmentation model
CN113657383B (en) * 2021-08-24 2024-05-24 凌云光技术股份有限公司 Defect region detection method and device based on lightweight segmentation model
CN113763358A (en) * 2021-09-08 2021-12-07 合肥中科类脑智能技术有限公司 Semantic segmentation based transformer substation oil leakage and metal corrosion detection method and system
CN113763358B (en) * 2021-09-08 2024-01-09 合肥中科类脑智能技术有限公司 Method and system for detecting oil leakage and metal corrosion of transformer substation based on semantic segmentation
CN114742832A (en) * 2022-06-13 2022-07-12 惠州威尔高电子有限公司 Welding defect detection method for MiniLED thin plate
CN117984024A (en) * 2024-04-03 2024-05-07 中国水利水电第十工程局有限公司 Welding data management method and system based on automatic production of ship lock lambdoidal doors

Also Published As

Publication number Publication date
CN112967272B (en) 2023-08-22

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant