CN116543267B - Image set processing method, image segmentation method, apparatus, device, and storage medium - Google Patents
- Publication number: CN116543267B (application CN202310807046.3A)
- Authority: CN (China)
- Prior art keywords: image, initial, pixel point, pixel, fusion
- Legal status: Active
Classifications
- G06V10/774 — Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
- G06T7/0004 — Industrial image inspection
- G06V10/26 — Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/30 — Noise filtering
- G06V10/80 — Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V20/70 — Labelling scene content, e.g. deriving syntactic or semantic representations
- Y02P90/30 — Computing systems specially adapted for manufacturing
Abstract
The application relates to an image set processing method, an image segmentation method and apparatus, a device, and a storage medium. The method acquires an initial image set containing a plurality of weld bead defect images and fuses every two initial images in the set at the pixel level to obtain a target image set. Fusing the initial images augments the initial image set, increasing the number of samples available for training an image segmentation model and thereby improving the training effect to a certain extent. On the other hand, because the fusion is performed at the pixel level, the edge noise of the fused images is reduced to a certain extent and their quality is improved, which accelerates the convergence of the image segmentation model during training and further improves the training effect.
Description
Technical Field
The present application relates to the field of battery inspection technologies, and in particular to an image set processing method, an image segmentation method and apparatus, a device, and a storage medium.
Background
Sealing nail welding is an indispensable step in the production of power batteries, and whether the weld meets the standard directly affects battery safety. The welding area of the sealing nail is called the weld bead. Owing to variations in temperature, environment, and other conditions during welding, tiny defects such as pinholes, burst points, burst lines (cold welds), missing welds, and melted beads often appear on the weld bead. These tiny defects directly affect the welding quality of the sealing nail, so segmenting weld bead defects is of great importance.
Currently, the segmentation of weld bead defects generally proceeds as follows: a segmentation network is trained in advance on a sample set of weld bead defect images, so that the trained network can segment the defects in a weld bead defect image; the trained network is then used to segment defects in weld bead images.
However, the sample images available for training such a segmentation network are of low quality, resulting in a poor training effect for the segmentation network.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an image set processing method, an image segmentation method, an apparatus, a device, and a storage medium that can improve the training effect.
In a first aspect, the present application provides an image set processing method. The method comprises the following steps:
acquiring an initial image set; the initial image set includes a plurality of weld bead defect images;
carrying out pixel-level fusion on every two initial images in the initial image set to obtain a target image set; the target image set comprises the fused images obtained by the fusion and the images in the initial image set, and is used for training an image segmentation model.
According to the image set processing method provided by the embodiment of the application, the initial images in the initial image set are fused, so that the initial image set is augmented with the fused images. This increases the number of samples for training the image segmentation model, which improves the training effect to a certain extent. On the other hand, because the method fuses the initial images at the pixel level, the edge noise of the fused images is reduced to a certain extent and their quality is improved, which accelerates the convergence of the image segmentation model during training and further improves the training effect.
In one embodiment, the performing pixel-level fusion on each two initial images in the initial image set to obtain a target image set includes:
dividing the initial image set into a plurality of candidate image sets according to the working condition types of each initial image in the initial image set;
and carrying out pixel-level fusion on every two initial images in each candidate image set to obtain the target image set.
According to the image fusion method provided by the embodiment of the application, initial images under the same working condition are fused. Because image backgrounds under the same working condition are highly consistent, fusing initial images of the same working condition improves fusion efficiency, and the resulting fused images are closer to real conditions. This improves the quality of the fused images to a certain extent and thus the quality of the target image set.
In one embodiment, the pixel-level fusion is performed on each two initial images in each candidate image set to obtain the fused image, which includes:
determining, for a first initial image and a second initial image in any candidate image set, the distance between each first pixel point on the first initial image and the corresponding second pixel point on the second initial image;
and performing pixel-level fusion on the first initial image and the second initial image according to the distance between the first pixel point and the second pixel point and a preset gray threshold, to obtain a fused image corresponding to the first initial image and the second initial image.
The image fusion method provided by the embodiment of the application realizes pixel-level fusion of the first initial image and the second initial image, retains the detail information of both images, and reduces the edge noise of the fused image to a certain extent, thereby improving the quality of the fused image.
In one embodiment, the performing pixel-level fusion on the first initial image and the second initial image according to the distance between the first pixel point and the second pixel point and the preset gray threshold value to obtain a fused image corresponding to the first initial image and the second initial image includes:
and under the condition that the distance is not greater than the preset gray threshold value, assigning the gray value of the first pixel point or the gray value of the second pixel point to the corresponding pixel point on the fusion image to obtain the fusion image.
In one embodiment, the performing pixel-level fusion on the first initial image and the second initial image according to the distance between the first pixel point and the second pixel point and the preset gray threshold value to obtain a fused image corresponding to the first initial image and the second initial image includes:
under the condition that the distance is larger than the preset gray threshold value, determining a comparison result between the distance and a target background pixel point, and determining a pixel point corresponding to the distance on the fusion image according to the comparison result to obtain the fusion image; the target background pixel point is determined according to the background pixel point of the first initial image or the second initial image.
In one embodiment, the determining, according to the comparison result, the pixel point corresponding to the distance on the fused image, to obtain the fused image includes:
if the comparison result shows that the distance is greater than the gray value of the target background pixel point, determining the pixel point with the larger gray value among the first pixel point and the second pixel point as the corresponding pixel point on the fused image, to obtain the fused image;
and if the comparison result shows that the distance is not greater than the gray value of the target background pixel point, determining the pixel point with the smaller gray value among the first pixel point and the second pixel point as the corresponding pixel point on the fused image, to obtain the fused image.
The image fusion method provided by the embodiment of the application realizes pixel-level fusion of the first initial image and the second initial image, retains the detail information of both images, and reduces the edge noise of the fused image to a certain extent, thereby improving the quality of the fused image.
In one embodiment, the target image set further comprises: a plurality of initial tag images and a plurality of fused tag images, the method further comprising:
labeling each initial image in the initial image set to obtain an initial label image corresponding to each initial image;
labeling each fusion image in the target image set according to each initial label image to obtain the fusion label image.
The labeling method provided by the embodiment of the application labels the fused images with reference to the initial label images, which improves both labeling efficiency and labeling accuracy to a certain extent, and can further improve the quality of the fused label images in the target image set.
In one embodiment, the labeling each fused image in the target image set according to each initial label image to obtain the fused label image includes:
determining a first initial image and a second initial image corresponding to any one of the fusion images in the target image set;
determining a first initial label image corresponding to the first initial image and a second initial label image corresponding to the second initial image from the plurality of initial label images;
determining a source label image of each pixel point on the fusion image from the first initial label image or the second initial label image according to the gray value source of each pixel point on the fusion image;
and labeling each pixel point of the fusion image according to the source label image of each pixel point on the fusion image to obtain the fusion label image.
The labeling method provided by the embodiment of the application labels the fused images with reference to the initial label images, which improves both labeling efficiency and labeling accuracy to a certain extent, and can further improve the quality of the fused label images in the target image set.
In a second aspect, the application further provides an image segmentation method. The method comprises the following steps:
acquiring an image to be segmented; the image to be segmented comprises weld bead defects;
inputting the image to be segmented into a segmentation network for defect segmentation to obtain a segmentation result; the segmentation network is trained based on a target image set, wherein the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level.
According to the image segmentation method disclosed by the embodiment of the application, the segmentation network is trained on the augmented target image set, so the training effect of the segmentation network is improved and the accuracy with which it segments images can be improved to a certain extent.
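Purely as an illustration, the inference step above might be sketched in Python as follows, assuming a trained PyTorch segmentation network that outputs per-pixel class scores; the names `model` and `image_to_segment` are hypothetical and do not come from the application.

```python
import torch

def segment_defects(model: torch.nn.Module, image_to_segment: torch.Tensor) -> torch.Tensor:
    """Run the trained segmentation network on one weld bead image.

    `image_to_segment` is assumed to be a (1, C, H, W) tensor; the result
    is a (1, H, W) map of per-pixel defect classes.
    """
    model.eval()
    with torch.no_grad():
        scores = model(image_to_segment)  # per-pixel class scores
        return scores.argmax(dim=1)       # segmentation result
```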
In a third aspect, the application further provides a training method of the image segmentation model. The method comprises the following steps:
acquiring a target image set; the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level;
and training the initial segmentation model according to the target image set to obtain an image segmentation model.
The training method provided by the embodiment of the application is used for training based on the amplified target image set, so that the training effect can be improved to a certain extent, and the segmentation accuracy of the segmentation network can be further improved.
In a fourth aspect, the application further provides an image set processing device. The device comprises:
the acquisition module is used for acquiring an initial image set; the initial image set includes a plurality of weld bead defect images;
the fusion module is used for carrying out pixel-level fusion on every two initial images in the initial image set to obtain a target image set; the target image set comprises the fused images obtained by the fusion and the images in the initial image set, and is used for training an image segmentation model.
In a fifth aspect, the present application further provides an image segmentation apparatus. The image segmentation apparatus includes:
the acquisition module is used for acquiring the image to be segmented; the image to be segmented comprises weld bead defects;
the segmentation module is used for inputting the image to be segmented into a segmentation network for defect segmentation to obtain a segmentation result; the segmentation network is trained based on a target image set, wherein the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level.
In a sixth aspect, the present application further provides a training device for an image segmentation model, where the training device includes:
the acquisition module is used for acquiring a target image set; the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level;
and the training module is used for training the initial segmentation model according to the target image set to obtain an image segmentation model.
In a seventh aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which when executing the computer program performs the steps of:
acquiring an initial image set; the initial image set includes a plurality of weld bead defect images;
carrying out pixel-level fusion on every two initial images in the initial image set to obtain a target image set; the target image set comprises the fused images obtained by the fusion and the images in the initial image set, and is used for training an image segmentation model.
In an eighth aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which when executing the computer program performs the steps of:
acquiring an image to be segmented; the image to be segmented comprises weld bead defects;
inputting the image to be segmented into a segmentation network for defect segmentation to obtain a segmentation result; the segmentation network is trained based on a target image set, wherein the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level.
In a ninth aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which when executing the computer program performs the steps of:
acquiring a target image set; the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level;
and training the initial segmentation model according to the target image set to obtain an image segmentation model.
In a tenth aspect, the present application also provides a computer-readable storage medium. The computer-readable storage medium has stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring an initial image set; the initial image set includes a plurality of weld bead defect images;
carrying out pixel-level fusion on every two initial images in the initial image set to obtain a target image set; the target image set comprises the fused images obtained by the fusion and the images in the initial image set, and is used for training an image segmentation model.
In an eleventh aspect, the present application also provides a computer-readable storage medium. The computer-readable storage medium has stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring an image to be segmented; the image to be segmented comprises weld bead defects;
inputting the image to be segmented into a segmentation network for defect segmentation to obtain a segmentation result; the segmentation network is trained based on a target image set, wherein the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level.
In a twelfth aspect, the present application also provides a computer-readable storage medium. The computer-readable storage medium has stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring a target image set; the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level;
and training the initial segmentation model according to the target image set to obtain an image segmentation model.
In a thirteenth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of:
acquiring an initial image set; the initial image set includes a plurality of weld bead defect images;
carrying out pixel-level fusion on every two initial images in the initial image set to obtain a target image set; the target image set comprises the fused images obtained by the fusion and the images in the initial image set, and is used for training an image segmentation model.
In a fourteenth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of:
acquiring an image to be segmented; the image to be segmented comprises weld bead defects;
inputting the image to be segmented into a segmentation network for defect segmentation to obtain a segmentation result; the segmentation network is trained based on a target image set, wherein the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level.
In a fifteenth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of:
acquiring a target image set; the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level;
and training the initial segmentation model according to the target image set to obtain an image segmentation model.
The foregoing is only an overview of the technical solution of the present application. To enable a clearer understanding of the technical means of the present application so that it can be implemented in accordance with the description, and to make the above and other objects, features, and advantages of the present application more readily apparent, specific embodiments of the present application are set forth below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to designate like parts throughout the accompanying drawings. In the drawings:
FIG. 1 is an internal block diagram of a computer device in one embodiment;
FIG. 2 is a schematic illustration of a defect image in one embodiment;
FIG. 3 is another schematic illustration of a defect image in one embodiment;
FIG. 4 is a flow chart of an image set processing method in one embodiment;
FIG. 5 is a flowchart of an image set processing method according to another embodiment;
FIG. 6 is a flowchart of an image set processing method according to another embodiment;
FIG. 7 is a flowchart of an image set processing method according to another embodiment;
FIG. 8 is a flowchart of an image set processing method according to another embodiment;
FIG. 9 is a flowchart of an image set processing method according to another embodiment;
FIG. 10 is a flow chart of an image segmentation method in one embodiment;
FIG. 11 is a flow chart of a training method of an image segmentation model in one embodiment;
FIG. 12 is a block diagram showing the structure of an image set processing apparatus in one embodiment;
FIG. 13 is a block diagram showing the structure of an image dividing apparatus in one embodiment;
FIG. 14 is a block diagram of a training apparatus for an image segmentation model in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "comprising" and "having" and any variations thereof in the description of the application and the claims and the description of the drawings above are intended to cover a non-exclusive inclusion.
In the description of embodiments of the present application, the technical terms "first," "second," and the like are used merely to distinguish between different objects and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated, a particular order or a primary or secondary relationship. In the description of the embodiments of the present application, the meaning of "plurality" is two or more unless explicitly defined otherwise.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In the production of lithium batteries, even minor defects may seriously impair product quality. Morphologically, these minor defects have the following characteristics: 1. in an image with a resolution of 8K or 16K, some defects occupy only a few dozen or even a dozen or so pixels, so the target is extremely small, as shown in fig. 2; 2. defect morphology varies widely, as shown in fig. 3; 3. the gray values of the defects are low and differ little from the background. As a result, images containing such defects are both difficult to acquire and of poor quality. When such images are used as sample images to train an image segmentation model, an image detection model, or another image processing model, the model converges slowly, or even fails to converge, during training, which degrades the training effect: for an image segmentation model, the segmentation accuracy is low; for an image detection model, the detection accuracy is low. The present application provides an image set processing method for the segmentation, detection, and similar processing of weld bead defect images, which can effectively expand the defect samples and improve model convergence. The following embodiments describe the process in detail.
The image set processing method provided by the embodiment of the application can be applied to the computer device shown in fig. 1. The computer device may be a terminal, and its internal structure may be as shown in fig. 1. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running them. The communication interface of the computer device is used for wired or wireless communication with an external terminal; wireless communication can be realized through Wi-Fi, a mobile cellular network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements an image set processing method. The display screen of the computer device may be a liquid crystal display or an electronic ink display, and the input device may be a touch layer covering the display screen, keys, a trackball, or a touch pad arranged on the housing of the computer device, or an external keyboard, touch pad, mouse, or the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 1 is merely a block diagram of part of the structure relevant to the solution of the present application and does not limit the computer device to which the solution is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, as shown in fig. 4, there is provided an image set processing method, which is described by taking an example that the method is applied to the computer device in fig. 1, and includes the following steps:
s201, acquiring an initial image set; the initial image set includes a plurality of weld bead defect images.
Wherein the initial image set may be used to train an image segmentation model, an image detection model, etc. The initial image set may include bead defect images of different operating condition types, and may also include bead defect images of the same operating condition type. The initial image set may include bead defect images of different types of defects, or may include bead defect images of the same type of defects.
In the embodiment of the application, when weld bead defect images are generated in the battery production and manufacturing process, the computer equipment can be connected with a corresponding image acquisition device to acquire a large number of weld bead defect images, and the acquired weld bead defect images are collected into an image set to be used as an initial image set; alternatively, the computer device may also download a large number of weld bead defect images from the cloud server or the network, and aggregate the obtained weld bead defect images into an image set as an initial image set.
S202, fusing every two initial images in the initial image set at the pixel level to obtain a target image set; the target image set comprises the fused images obtained by the fusion and the images in the initial image set, and is used for training the image segmentation model.
In the embodiment of the application, after the computer device acquires the initial image set based on the above steps, it can select any two initial images from the initial image set and fuse them at the pixel level to obtain their fused image, then select any two initial images from the remaining initial images and fuse them at the pixel level, and continue in this way, fusing images two by two until all initial images in the initial image set are fused. The fused images corresponding to each pair of initial images then form an image set, namely the fused image set. Finally, the fused image set and the initial image set are integrated to form the target image set, which comprises the fused images and the initial images in the initial image set.
Optionally, after the computer device obtains the initial image set based on the foregoing steps, it may select two initial images of the same defect type from the initial image set and fuse them at the pixel level to obtain their fused image, then select two more initial images of the same defect type from the remaining initial images and fuse them, continuing two by two until initial images of all types in the initial image set are fused. The fused images corresponding to each pair of initial images then form an image set, namely the fused image set. Finally, the fused image set and the initial image set are integrated to form the target image set, which comprises the fused images and the images in the initial image set.
Alternatively, after the computer device obtains the initial image set based on the foregoing steps, it may perform pixel-level fusion on every two adjacent initial images in the initial image set; the fused images corresponding to each pair of adjacent initial images then form the fused image set. Finally, the fused image set and the initial image set are integrated to form the target image set, which comprises the fused images and the images in the initial image set. A sketch of the pairwise augmentation appears below.
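The pairwise augmentation described above can be rendered, for illustration only, as the following Python sketch. It assumes the images are same-size grayscale numpy arrays and that `fuse_pair` is the pixel-level fusion routine detailed in the later embodiments; both names are invented here.

```python
from itertools import combinations

def build_target_image_set(initial_images, fuse_pair):
    """Fuse every two initial images and return the augmented target set.

    `initial_images`: list of same-size grayscale images (2-D numpy arrays).
    `fuse_pair(a, b)`: returns the pixel-level fusion of images a and b.
    """
    fused_images = [fuse_pair(a, b) for a, b in combinations(initial_images, 2)]
    # The target image set contains the fused images plus the originals.
    return initial_images + fused_images
```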
Optionally, after the computer device obtains the target image set, the target image set can be used as a sample image set to train a constructed image segmentation model so that it acquires the corresponding image segmentation function; the sample image set can likewise train other constructed task models, such as an image detection model, so that each task model acquires its corresponding task processing function.
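For illustration, a minimal training loop over the target image set might look as follows, assuming a PyTorch segmentation model and a dataset yielding (image, label) pairs; none of the names or hyperparameters come from the application.

```python
import torch
from torch.utils.data import DataLoader, Dataset

def train_segmentation_model(model: torch.nn.Module, target_dataset: Dataset,
                             epochs: int = 10, lr: float = 1e-3) -> torch.nn.Module:
    """Train an initial segmentation model on the augmented target image set."""
    loader = DataLoader(target_dataset, batch_size=8, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.CrossEntropyLoss()  # per-pixel classification loss

    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```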
According to the image set processing method provided by the embodiment of the application, an initial image set containing a plurality of weld bead defect images is acquired, and every two initial images in the set are fused at the pixel level to obtain the target image set. Fusing the initial images augments the initial image set, increasing the number of samples for training the image segmentation model, which improves the training effect to a certain extent; on the other hand, because the fusion is performed at the pixel level, the edge noise of the fused images is reduced to a certain extent and their quality is improved, which accelerates the convergence of the image segmentation model during training and further improves the training effect.
In one embodiment, as shown in fig. 5, a pixel-level image fusion method is provided, that is, the step S202 "performing pixel-level fusion on each two initial images in the initial image set to obtain a target image set", including:
s301, dividing the initial image set into a plurality of candidate image sets according to the working condition types of the initial images in the initial image set.
The initial image is industrial image data, and the industrial image data has obvious characteristics relative to a natural scene, namely, the background is relatively fixed and uniform. Therefore, the working condition type of the initial image represents the background environment where the weld bead defect exists in the initial image, and generally, the picture background under the same working condition is highly consistent.
In the embodiment of the application, when the computer equipment acquires the initial image set based on the steps, the working condition type of each initial image in the initial image set can be determined, and then the initial images belonging to the same working condition are divided into one image set according to the working condition type of each initial image to form a plurality of candidate image sets under different working conditions.
S302, fusing every two initial images in each candidate image set at a pixel level to obtain a target image set.
In the embodiment of the application, after the computer device obtains the plurality of candidate image sets based on the above steps, for each candidate image set it can select any two initial images and fuse them at the pixel level to obtain their fused image, then select any two initial images from the remaining initial images and fuse them, continuing two by two until all initial images in that candidate image set are fused. The fusion step is then performed on the other candidate image sets until all candidate image sets are fused. Finally, the fused images of all candidate image sets and the initial image set are integrated together to form the target image set.
Optionally, after the computer device obtains the plurality of candidate image sets based on the foregoing steps, for any candidate image set it may select two initial images of the same defect type and fuse them at the pixel level to obtain their fused image, then select two more initial images of the same defect type from the remaining initial images and fuse them, continuing two by two until initial images of all types in the candidate image set are fused. The fusion step is then performed on the other candidate image sets until all candidate image sets are fused, and finally the fused images of all candidate image sets and the initial image set are integrated together to form the target image set.
Optionally, after the computer device obtains the plurality of candidate image sets based on the foregoing steps, for any candidate image set it may perform pixel-level fusion on every two adjacent initial images in the candidate image set. The fusion step is then performed on the other candidate image sets until all candidate image sets are fused, and finally the fused images of all candidate image sets and the initial image set are integrated together to form the target image set. A sketch of this grouping step is given below.
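A sketch of the grouping step, under the same assumptions as the earlier sketch (grayscale numpy arrays, an invented `fuse_pair` routine) plus a working-condition label supplied per image:

```python
from collections import defaultdict
from itertools import combinations

def build_target_set_by_condition(initial_images, condition_labels, fuse_pair):
    """Divide the initial set into candidate sets by working condition,
    then fuse every two images within each candidate set.

    `condition_labels[i]` is the working-condition type of `initial_images[i]`.
    """
    candidate_sets = defaultdict(list)
    for image, condition in zip(initial_images, condition_labels):
        candidate_sets[condition].append(image)

    fused_images = []
    for candidate_set in candidate_sets.values():
        fused_images.extend(
            fuse_pair(a, b) for a, b in combinations(candidate_set, 2)
        )
    return initial_images + fused_images
```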
According to the image fusion method provided by the embodiment of the application, initial images under the same working condition are fused. Because image backgrounds under the same working condition are highly consistent, fusing initial images of the same working condition improves fusion efficiency, and the resulting fused images are closer to real conditions. This improves the quality of the fused images to a certain extent and thus the quality of the target image set.
Further, as shown in fig. 6, a pixel-level fusion method is provided, that is, in S302, the method of "performing pixel-level fusion on each two initial images in each candidate image set to obtain a fused image" includes:
S401, determining the distance between each first pixel point on the first initial image and each corresponding second pixel point on the second initial image according to the first initial image and the second initial image in any candidate image set.
The position of a first pixel point on the first initial image is the same as the position of its corresponding second pixel point on the second initial image; that is, the first pixel points and the second pixel points correspond to each other by position.
The embodiment of the application fuses every two initial images in each candidate image set. Taking any candidate image set as an example, any two initial images are selected from the candidate image set and designated as a first initial image and a second initial image. The difference between the gray value of each first pixel point on the first initial image and the gray value of the corresponding second pixel point on the second initial image is then computed, giving the distance between each first pixel point and its corresponding second pixel point.
S402, performing pixel-level fusion on the first initial image and the second initial image according to the distance between the first pixel point and the second pixel point and a preset gray threshold value, and obtaining fusion images corresponding to the first initial image and the second initial image.
In the embodiment of the application, after obtaining the distance between a first pixel point and the corresponding second pixel point, the computer device can compare the distance with the preset gray threshold. If the distance is greater than the preset gray threshold, the gray values of the two pixel points differ considerably; in this case, a target pixel point can be determined from the first pixel point and the second pixel point according to the comparison result between the target defect gray value and the background gray value, and the gray value of the target pixel point is then assigned to the pixel point at the corresponding position of the fused image. Optionally, the computer device may instead determine the target pixel point from the first pixel point and the second pixel point according to the result of comparing the distance with the background gray value, and then assign the gray value of the target pixel point to the pixel point at the corresponding position of the fused image.
Optionally, the comparison result between the target defect gray value and the background gray value includes a first comparison result, obtained by comparing the gray value of the first pixel point with the background gray value of the first initial image in which it lies, and a second comparison result, obtained by comparing the gray value of the second pixel point with the background gray value of the second initial image in which it lies. On this basis, a method is provided for determining the target pixel point from the first pixel point and the second pixel point according to these comparison results: compare the gray value of the first pixel point with the background gray value of the first initial image to obtain the first comparison result; compare the gray value of the second pixel point with the background gray value of the second initial image to obtain the second comparison result; and finally select one of the first pixel point and the second pixel point as the target pixel point according to the two results. Specifically, if the first comparison result indicates that the gray value of the first pixel point is greater than the background gray value of the first initial image, and the second comparison result indicates that the gray value of the second pixel point is greater than the background gray value of the second initial image, the pixel point with the larger gray value is selected from the two as the target pixel point. If the first comparison result indicates that the gray value of the first pixel point is not greater than the background gray value of the first initial image, and the second comparison result indicates that the gray value of the second pixel point is not greater than the background gray value of the second initial image, the pixel point with the smaller gray value is selected from the two as the target pixel point.
If the distance between the first pixel point and the second pixel point is not greater than the preset gray threshold, the gray values of the two pixel points differ little; in this case, the gray value of either the first pixel point or the second pixel point can be assigned to the corresponding pixel point on the fused image, to obtain the fused image.
The image fusion method provided by the embodiment of the application realizes pixel-level fusion of the first initial image and the second initial image, retains the detail information of both images, and reduces the edge noise of the fused image to a certain extent, thereby improving the quality of the fused image.
In one embodiment, when the distance between the first pixel point and the second pixel point is greater than the preset gray threshold, the application provides a fusion method that, in this scenario, fuses the first initial image and the second initial image according to the result of comparing the distance with the background gray value. That is, in performing step S402, "performing pixel-level fusion on the first initial image and the second initial image according to the distance between the first pixel point and the second pixel point and the preset gray threshold, to obtain a fused image corresponding to the first initial image and the second initial image", the computer device specifically performs the following step:
Under the condition that the distance is larger than a preset gray threshold value, determining a comparison result between the distance and a target background pixel point, and determining a pixel point corresponding to the distance on the fusion image according to the comparison result to obtain the fusion image; the target background pixel point is determined according to the background pixel point of the first initial image or the second initial image.
In the embodiment of the application, after determining the distance between the first pixel point and the second pixel point, the computer device can compare the distance with the gray value of a background pixel point in the first initial image or in the second initial image to obtain a comparison result. If the comparison result shows that the distance is greater than the gray value of the background pixel point, the fusion target is a high gray value; in this case, the pixel point with the larger gray value among the first pixel point and the second pixel point is determined as the corresponding pixel point on the fused image, that is, its gray value is assigned to the corresponding pixel point on the fused image, to obtain the fused image. If the comparison result shows that the distance is not greater than the gray value of the background pixel point, the fusion target is a low gray value; in this case, the pixel point with the smaller gray value is determined as the corresponding pixel point on the fused image, that is, its gray value is assigned to the corresponding pixel point on the fused image, to obtain the fused image.
The fusion method described in the above embodiment is illustrated as follows. Assume that the first initial image is image A and the second initial image is image B; write $A_i$ and $B_i$ for the gray values of the $i$-th pixel points of images A and B, $M_i$ for the gray value of the $i$-th pixel point of the fused image M, $T$ for the preset gray threshold, and $g$ for the gray value of the target background pixel point. The fusion can then be determined by the following relation (1):

$$M_i=\begin{cases}A_i\ \text{or}\ B_i\ (\text{chosen at random}), & |A_i-B_i|\le T,\\ \max(A_i,B_i), & |A_i-B_i|>T\ \text{and}\ |A_i-B_i|>g,\\ \min(A_i,B_i), & |A_i-B_i|>T\ \text{and}\ |A_i-B_i|\le g.\end{cases}\tag{1}$$

That is, when the gray values of the $i$-th pixel points of images A and B differ by no more than the preset gray threshold $T$, the $i$-th pixel point of the fused image M is randomly assigned the gray value of the $i$-th pixel point of image A or of image B. When they differ by more than $T$, the comparison between the distance $|A_i-B_i|$ and the target background gray value $g$ determines whether the fusion target is a high gray value ($|A_i-B_i|>g$, take the larger of the two gray values) or a low gray value ($|A_i-B_i|\le g$, take the smaller), and the corresponding gray value is assigned to the $i$-th pixel point of M.
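A numpy sketch of relation (1) is given below for illustration. It assumes 8-bit grayscale arrays of equal shape and a single scalar gray value for the target background pixel point, and it additionally returns a boolean source map (True where the fused value came from image A), corresponding to the association the next paragraph says may be recorded for later labeling. All function and variable names are invented.

```python
import numpy as np

def fuse_pixels(a, b, threshold, background_gray, rng=None):
    """Pixel-level fusion of two grayscale images per relation (1)."""
    rng = rng or np.random.default_rng()
    a32 = a.astype(np.int32)
    b32 = b.astype(np.int32)
    distance = np.abs(a32 - b32)

    # Close pixels (distance <= T): randomly keep the value from A or B.
    pick_a = rng.random(a.shape) < 0.5
    fused = np.where(pick_a, a32, b32)

    # Distant pixels (distance > T): keep the larger gray value when the
    # distance exceeds the background gray value g, otherwise the smaller.
    far = distance > threshold
    high_target = distance > background_gray
    fused = np.where(far & high_target, np.maximum(a32, b32), fused)
    fused = np.where(far & ~high_target, np.minimum(a32, b32), fused)

    # Source map: True where the fused gray value was taken from image A.
    source_is_a = np.where(far, fused == a32, pick_a)
    return fused.astype(np.uint8), source_is_a
```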
Optionally, after obtaining a fused image by any of the above fusion methods, the computer device may record the first initial image and the second initial image associated with the fused image and store them in association with the fused image, so that when the fused image is labeled later, the corresponding first initial image and second initial image can be found quickly.
The image fusion method provided by the embodiment of the application realizes pixel-level fusion of the first initial image and the second initial image, retains the detail information of both images, and reduces the edge noise of the fused image to a certain extent, thereby improving the quality of the fused image.
When the image segmentation model is trained with the target image set obtained in any of the above embodiments, the initial images and the fused images in the target image set need to be labeled to obtain the corresponding label images, and the image segmentation model is then trained on the images together with their label images. Accordingly, the target image set further includes a plurality of initial label images and a plurality of fused label images, and the application further provides a method for labeling each image in the target image set, as shown in fig. 7, comprising the following steps:
s501, labeling each initial image in the initial image set to obtain an initial label image corresponding to each initial image.
In the embodiment of the application, the defects on each initial image in the initial image set can be marked, so that the marked initial image, namely the initial label image, is obtained. Optionally, after labeling each initial image, the computer device may further construct an association relationship between each initial image and the corresponding initial label image, and record and store the association relationship, so as to quickly obtain the corresponding initial label image according to the initial image.
S502, labeling each fusion image in the target image set according to the initial label image corresponding to each initial image to obtain the fusion label image.
In the embodiment of the application, after obtaining the initial label image corresponding to each initial image and the fused images in the target image set, the computer device can label each fused image. In the labeling process, the two initial images from which a fused image was fused are first determined, the two initial label images corresponding to those two initial images are then determined, and finally the fused image is labeled with reference to the labels in those initial label images, to obtain the labeled fused label image.
The labeling method described in the foregoing embodiment is illustrated as follows. Assume that the first initial image is image A and the second initial image is image B, with initial label images A′ and B′ respectively; write $M'_i$ for the label of the $i$-th pixel point of the fused label image M′ corresponding to the fused image M, and $A'_i$ and $B'_i$ for the labels of the $i$-th pixel points of the label images A′ and B′. The fused label image is determined by the following relation (2):

$$M'_i=\begin{cases}A'_i, & \text{if the gray value of the }i\text{-th pixel point of M came from image A},\\ B'_i, & \text{if the gray value of the }i\text{-th pixel point of M came from image B}.\end{cases}\tag{2}$$

That is, when the gray value of the $i$-th pixel point of the fused image M comes from image A, the label of the $i$-th pixel point of label image A′ is assigned to the $i$-th pixel point of the fused label image M′; when it comes from image B, the label of the $i$-th pixel point of label image B′ is assigned to the $i$-th pixel point of M′.
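Continuing the illustrative sketch above, relation (2) reduces to a single select on the recorded source map; `fuse_pixels` and `source_is_a` are the invented names from the previous sketch, and the threshold and background values in the usage comment are arbitrary.

```python
import numpy as np

def fuse_labels(label_a, label_b, source_is_a):
    """Build the fused label image per relation (2): each pixel takes its
    label from the label image of whichever initial image supplied its
    gray value during fusion."""
    return np.where(source_is_a, label_a, label_b)

# Usage: fuse two defect images, then derive the fused label image.
# fused, source_is_a = fuse_pixels(image_a, image_b, threshold=10, background_gray=60)
# fused_label = fuse_labels(label_a, label_b, source_is_a)
```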
The labeling method provided by the embodiment of the application can label the fused image by referring to the initial label images, which can improve the labeling efficiency to a certain extent, can also improve the labeling accuracy, and can further improve the quality of the fusion label images in the target image set.
Further, as shown in fig. 8, an implementation manner of the labeling is provided, that is, the step of S502 "labeling each fusion image in the target image set according to each initial label image to obtain a fusion label image", includes:
S601, for any fusion image in the target image set, determining a first initial image and a second initial image corresponding to the fusion image.
In the embodiment of the application, because each pixel point on the fused image is derived from either the first initial image or the second initial image during the earlier fusion, before labeling the fused image the computer device can determine the initial images corresponding to the fused image, namely the first initial image and the second initial image. Specifically, the determination can be performed according to the association relationship among the fusion image, the first initial image and the second initial image, which was obtained, recorded and stored when the first initial image and the second initial image were fused earlier.
S602, determining a first initial label image corresponding to the first initial image and a second initial label image corresponding to the second initial image from the plurality of initial label images.
In the embodiment of the application, when the computer equipment acquires the first initial image, the first initial label image corresponding to the first initial image can be found from a plurality of initial label images according to the association relationship between the initial image and the initial label image; when the computer device acquires the second initial image, the second initial label image corresponding to the second initial image can be found from the plurality of initial label images according to the association relationship between the initial image and the initial label image.
S603, determining the source label image of each pixel point on the fusion image from the first initial label image or the second initial label image according to the gray value source of each pixel point on the fusion image.
That is, the gray value source of each pixel point on the fused image is first determined to be either the first initial image or the second initial image; if it is the first initial image, the corresponding first initial label image is determined, and if it is the second initial image, the corresponding second initial label image is determined. In this way, the source label image corresponding to each pixel point on the fused image is obtained.
S604, labeling each pixel point of the fusion image according to the source label image of each pixel point on the fusion image to obtain the fusion label image.
Because each pixel point on the source label image has already been labeled, each pixel point on the fusion image can be labeled by referring to the label of the corresponding pixel point on its source label image, and the fusion label image is obtained after labeling. Specifically, the gray value of the pixel point on the source label image can be assigned to the pixel point at the corresponding position on the fusion label image to realize the labeling.
The labeling method provided by the embodiment of the application can label the fused image by referring to the initial label images, which can improve the labeling efficiency to a certain extent, can also improve the labeling accuracy, and can further improve the quality of the fusion label images in the target image set.
In summary, the present application also provides an image set processing method, as shown in fig. 9, which includes:
S701, acquiring an initial image set; the initial image set includes a plurality of weld bead defect images.
S702, dividing the initial image set into a plurality of candidate image sets according to the working condition types of the initial images in the initial image set.
S703, determining, for the first initial image and the second initial image in any candidate image set, a distance between each first pixel point on the first initial image and each corresponding second pixel point on the second initial image.
S704, determining whether the distance is not greater than a preset gray threshold, if the distance is not greater than the preset gray threshold, executing step S705, and if the distance is greater than the preset gray threshold, executing step S706.
S705, assigning the gray value of the first pixel point or the gray value of the second pixel point to the corresponding pixel point on the fusion image to obtain the fusion image.
S706, determining a comparison result between the distance and the gray value of the target background pixel point, if the comparison result indicates that the distance is greater than the gray value of the target background pixel point, executing step S707, and if the comparison result indicates that the distance is not greater than the gray value of the target background pixel point, executing step S708.
S707, determining the pixel point with the larger gray value of the first pixel point and the second pixel point as the corresponding pixel point on the fusion image, to obtain the fusion image.
S708, determining the pixel point with the smaller gray value of the first pixel point and the second pixel point as the corresponding pixel point on the fusion image, to obtain the fusion image.
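Steps S703-S708 amount to a per-pixel rule that can be written compactly. The sketch below is one possible numpy rendering, assuming scalar values for the preset gray threshold and for the gray value of the target background pixel point; it additionally returns a provenance mask for the later labeling steps, and the choice of the first image in the "either gray value" branch of S705 is an assumption.

```python
import numpy as np

def fuse_pair(img_a: np.ndarray, img_b: np.ndarray,
              gray_threshold: int, background_gray: int):
    a = img_a.astype(np.int32)
    b = img_b.astype(np.int32)
    dist = np.abs(a - b)                                            # S703: per-pixel distance

    take_max = (dist > gray_threshold) & (dist > background_gray)   # S707 branch
    take_min = (dist > gray_threshold) & (dist <= background_gray)  # S708 branch

    fused = a.copy()                                                # S705: small distance, take either source
    np.copyto(fused, np.maximum(a, b), where=take_max)
    np.copyto(fused, np.minimum(a, b), where=take_min)

    source_is_a = fused == a                                        # provenance mask for labeling; ties count as A
    return fused.astype(img_a.dtype), source_is_a
```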
S709, labeling each initial image in the initial image set to obtain an initial label image corresponding to each initial image.
S710, for any fused image in the target image set, determining a first initial image and a second initial image corresponding to the fused image.
S711, a first initial label image corresponding to the first initial image and a second initial label image corresponding to the second initial image are determined from the plurality of initial label images.
S712, determining the source label image of each pixel point on the fusion image from the first initial label image or the second initial label image according to the gray value source of each pixel point on the fusion image.
S713, labeling each pixel point of the fusion image according to the source label image of each pixel point on the fusion image to obtain the fusion label image.
The above steps have been described in the foregoing; for details, refer to the foregoing description, which is not repeated here.
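Putting S701-S713 together, a hedged end-to-end sketch using the `fuse_pair` and `fuse_labels` helpers sketched earlier might look as follows; the synthetic arrays, the threshold values, and the single-working-condition assumption (so no candidate-set split per S702) are illustrative only.

```python
import itertools
import numpy as np

# Synthetic stand-ins for the initial weld bead defect images (S701) and their
# initial label images (S709); in practice these come from image acquisition
# and labeling.
initial_images = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(3)]
initial_labels = [np.zeros((64, 64), dtype=np.uint8) for _ in range(3)]

target_images = list(initial_images)
target_labels = list(initial_labels)
for i, j in itertools.combinations(range(len(initial_images)), 2):
    # S703-S708: pixel-level fusion of every two initial images.
    fused, source_is_a = fuse_pair(initial_images[i], initial_images[j],
                                   gray_threshold=10, background_gray=30)
    # S710-S713: label the fused image from the source label images.
    fused_label = fuse_labels(initial_labels[i], initial_labels[j], source_is_a)
    target_images.append(fused)
    target_labels.append(fused_label)
```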
The image set processing method fully utilizes the characteristics of industrial defect images to design a simple and efficient data amplification method, which can effectively expand the defect samples and improve the convergence of an image segmentation model. Compared with the prior art, the method retains the defect details as much as possible and does not introduce additional boundary noise, so that each image in the amplified image set is more natural to a certain extent, further improving the quality of each image.
In an embodiment, based on the image set processing method described in any one of the above embodiments, the present application further provides an image segmentation method, as shown in fig. 10, including:
S801, obtaining an image to be segmented; the image to be segmented includes weld bead defects.
In the embodiment of the application, when an image including a weld bead defect is generated in the battery production and manufacturing process, the computer device can be connected with a corresponding image acquisition apparatus to acquire the image including the weld bead defect, and the acquired image is used as the image to be segmented.
S802, inputting an image to be segmented into a segmentation network for defect segmentation to obtain a segmentation result; the segmentation network is trained based on a target image set, wherein the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level.
The segmentation network is a trained neural network and is used for carrying out defect region segmentation on the input image.
In the embodiment of the application, when the computer device acquires the image to be segmented, the image to be segmented can be input into a pre-trained segmentation network for defect segmentation to obtain a segmentation result. The target image set required for training the segmentation network includes the initial images in the initial image set and the fused images obtained by fusing every two initial images in the initial image set at a pixel level; these images may be obtained based on the image set processing method described in any one of the embodiments of fig. 4 to 10, and the detailed description is omitted here.
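For illustration only, a possible inference sketch in PyTorch is shown below; the checkpoint path, input size, grayscale preprocessing, and the assumption that the whole network was serialized with `torch.save` are hypothetical, since the patent does not fix a framework or architecture.

```python
import torch
from torchvision import transforms
from PIL import Image

# Hypothetical checkpoint saved as a whole model object.
model = torch.load("segmentation_model.pt", map_location="cpu")
model.eval()

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),   # weld bead images assumed grayscale
    transforms.Resize((512, 512)),                 # assumed input size
    transforms.ToTensor(),
])

image = Image.open("weld_bead.png")                # the image to be segmented
x = preprocess(image).unsqueeze(0)                 # add a batch dimension

with torch.no_grad():
    logits = model(x)                              # (1, num_classes, H, W)
    mask = logits.argmax(dim=1).squeeze(0)         # per-pixel class map: the segmentation result
```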
According to the image segmentation method disclosed by the embodiment of the application, the segmentation network is obtained by training based on the amplified target image set, so that the training effect of the segmentation network is excellent, and the accuracy of the segmentation network for segmenting the image can be improved to a certain extent.
In an embodiment, based on the image set processing method described in any one of the above embodiments, the present application further provides a training method of an image segmentation model, as shown in fig. 11, where the training method includes:
S901, acquiring a target image set; the target image set comprises initial images in the initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level.
In the embodiment of the application, the target image set can be obtained based on the image set processing method described in any of the embodiments of fig. 2-8; the detailed description is omitted here.
S902, training an initial segmentation model according to a target image set to obtain an image segmentation model.
In the embodiment of the application, when the computer device acquires the target image set, the constructed initial segmentation model can be trained based on the target image set to obtain an image segmentation model; the defect region in a defect image can then be segmented based on the trained image segmentation model, that is, the model can be applied as in the embodiment of fig. 9.
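A minimal training sketch for S902 is given below, again assuming PyTorch and a hypothetical `dataset` yielding (image, label) tensor pairs built from the target image set; the loss, optimizer, and hyperparameters are illustrative choices rather than anything mandated by the patent.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader

def train(model: nn.Module, dataset, epochs: int = 20, lr: float = 1e-3) -> nn.Module:
    # `dataset` is assumed to yield (image, label) pairs: image tensors of
    # shape (1, H, W) and integer label maps of shape (H, W).
    loader = DataLoader(dataset, batch_size=8, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()              # per-pixel classification loss

    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            logits = model(images)                 # (B, num_classes, H, W)
            loss = criterion(logits, labels.long())
            loss.backward()
            optimizer.step()
    return model
```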
The training method provided by the embodiment of the application is used for training based on the amplified target image set, so that the training effect can be improved to a certain extent, and the segmentation accuracy of the segmentation network can be further improved.
It should be understood that, although the steps in the flowcharts related to the embodiments described above are shown sequentially as indicated by the arrows, these steps are not necessarily executed in the order indicated by the arrows. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be executed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of sub-steps or stages, which are not necessarily executed at the same time, but may be executed at different times; the order of their execution is not necessarily sequential, and they may be executed in turn or alternately with at least a part of the other steps or stages.
Based on the same inventive concept, the embodiment of the application also provides an image set processing device for realizing the image set processing method. The implementation of the solution provided by the apparatus is similar to the implementation described in the above method, so the specific limitation in the embodiments of the image set processing apparatus or apparatuses provided below may refer to the limitation of the image set processing method hereinabove, and will not be repeated herein.
In one embodiment, as shown in fig. 12, there is provided an image set processing apparatus including:
an acquisition module 10 for acquiring an initial image set; the initial image set includes a plurality of weld bead defect images;
the fusion module 11 is used for carrying out pixel-level fusion on every two initial images in the initial image set to obtain a target image set; the target image set comprises the fused images obtained after fusion and the images in the initial image set, and is used for training an image segmentation model.
The respective modules in the image set processing apparatus described above may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, as shown in fig. 13, there is provided an image set processing apparatus including:
an acquisition module 20, configured to acquire an image to be segmented; the image to be segmented comprises weld bead defects.
The segmentation module 21 is configured to input the image to be segmented into a segmentation network for performing defect segmentation, so as to obtain a segmentation result; the segmentation network is trained based on a target image set, wherein the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level.
In one embodiment, as shown in fig. 14, there is provided a training apparatus of an image segmentation model, including:
an acquisition module 30 for acquiring a target image set; the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level.
The training module 31 is configured to train the initial segmentation model according to the target image set, so as to obtain an image segmentation model.
In one embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program performing the steps of:
acquiring an initial image set; the initial image set includes a plurality of weld bead defect images;
carrying out pixel-level fusion on every two initial images in the initial image set to obtain a target image set; the target image set comprises the fused images obtained after fusion and the images in the initial image set, and is used for training an image segmentation model.
In one embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program performing the steps of:
acquiring an image to be segmented; the image to be segmented comprises weld bead defects;
inputting the image to be segmented into a segmentation network for defect segmentation to obtain a segmentation result; the segmentation network is trained based on a target image set, wherein the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level.
In one embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program performing the steps of:
acquiring a target image set; the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level;
and training the initial segmentation model according to the target image set to obtain an image segmentation model.
The computer device provided in the foregoing embodiments has similar implementation principles and technical effects to those of the foregoing method embodiments, and will not be described herein in detail.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring an initial image set; the initial image set includes a plurality of weld bead defect images;
carrying out pixel-level fusion on every two initial images in the initial image set to obtain a target image set; the target image set comprises the fused images obtained after fusion and the images in the initial image set, and is used for training an image segmentation model.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring an image to be segmented; the image to be segmented comprises weld bead defects;
inputting the image to be segmented into a segmentation network for defect segmentation to obtain a segmentation result; the segmentation network is trained based on a target image set, wherein the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring a target image set; the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level;
and training the initial segmentation model according to the target image set to obtain an image segmentation model.
The foregoing embodiment provides a computer readable storage medium, which has similar principles and technical effects to those of the foregoing method embodiment, and will not be described herein.
In one embodiment, a computer program product is provided comprising a computer program which, when executed by a processor, performs the steps of:
acquiring an initial image set; the initial image set includes a plurality of weld bead defect images;
carrying out pixel-level fusion on every two initial images in the initial image set to obtain a target image set; the target image set comprises the fused images obtained after fusion and the images in the initial image set, and is used for training an image segmentation model.
In one embodiment, a computer program product is provided comprising a computer program which, when executed by a processor, performs the steps of:
acquiring an image to be segmented; the image to be segmented comprises weld bead defects;
inputting the image to be segmented into a segmentation network for defect segmentation to obtain a segmentation result; the segmentation network is trained based on a target image set, wherein the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level.
In one embodiment, a computer program product is provided comprising a computer program which, when executed by a processor, performs the steps of:
acquiring a target image set; the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level;
and training the initial segmentation model according to the target image set to obtain an image segmentation model.
The foregoing embodiment provides a computer program product, which has similar principles and technical effects to those of the foregoing method embodiment, and will not be described herein.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which, when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory can include Random Access Memory (RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others. The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processor referred to in the embodiments provided in the application may be a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic unit, a data processing logic unit based on quantum computing, or the like, but is not limited thereto.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction between the combinations of these technical features, they should be considered to be within the scope of this description.
The foregoing examples illustrate only a few embodiments of the application, which are described specifically and in detail, but are not thereby to be construed as limiting the scope of the application. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the spirit of the application, all of which fall within the protection scope of the application. Accordingly, the protection scope of the application shall be subject to the appended claims.
Claims (14)
1. A method of image set processing, the method comprising:
acquiring an initial image set; the initial image set includes a plurality of weld bead defect images;
carrying out pixel-level fusion on pixel points in every two initial images in the initial image set to obtain a fused image set;
integrating the fusion image set and the initial image set to generate a target image set; the target image set comprises a fused image after fusion and an image in the initial image set, and is used for training an image segmentation model;
The step of fusing pixel points in every two initial images in the initial image set to obtain a fused image set comprises the following steps:
performing difference value operation on gray values of first pixel points on the first initial image and gray values of corresponding second pixel points on the second initial image according to the first initial image and the second initial image in each two initial images to obtain distances between the first pixel points on the first initial image and the corresponding second pixel points on the second initial image;
and selecting any one gray value of the first pixel point and the gray value of the second pixel point to be assigned to the corresponding pixel point on the fusion image by comparing the distance with a preset gray threshold value, or assigning the maximum gray value of the first pixel point and the gray value of the second pixel point to the corresponding pixel point on the fusion image, or assigning the minimum gray value of the first pixel point and the gray value of the second pixel point to the corresponding pixel point on the fusion image to obtain the fusion image.
2. The method according to claim 1, wherein the performing pixel-level fusion on each two initial images in the initial image set to obtain a fused image set includes:
Dividing the initial image set into a plurality of candidate image sets according to the working condition types of each initial image in the initial image set;
and carrying out pixel-level fusion on each two initial images in each candidate image set to obtain the fused image set.
3. The method according to claim 1, wherein selecting any one of the gray value of the first pixel and the gray value of the second pixel to be assigned to the corresponding pixel on the fused image by comparing the distance with a preset gray threshold, or assigning the maximum gray value of the first pixel and the gray value of the second pixel to the corresponding pixel on the fused image, or assigning the minimum gray value of the first pixel and the gray value of the second pixel to the corresponding pixel on the fused image, to obtain the fused image includes:
and selecting any one gray value of the first pixel point and the gray value of the second pixel point to be assigned to the corresponding pixel point on the fusion image under the condition that the distance is not larger than the preset gray threshold value, so as to obtain the fusion image.
4. The method according to claim 1, wherein selecting any one of the gray value of the first pixel and the gray value of the second pixel to be assigned to the corresponding pixel on the fused image by comparing the distance with a preset gray threshold, or assigning the maximum gray value of the first pixel and the gray value of the second pixel to the corresponding pixel on the fused image, or assigning the minimum gray value of the first pixel and the gray value of the second pixel to the corresponding pixel on the fused image, to obtain the fused image includes:
under the condition that the distance is larger than the preset gray threshold value, determining a comparison result between the distance and a target background pixel point, and determining a pixel point corresponding to the distance on the fusion image according to the comparison result to obtain the fusion image; the target background pixel point is determined according to the background pixel point of the first initial image or the second initial image.
5. The method of claim 4, wherein determining the pixel point on the fused image corresponding to the distance according to the comparison result, to obtain the fused image, comprises:
If the comparison result shows that the distance is larger than the gray value of the target background pixel point, determining the pixel point with the large gray value in the first pixel point and the second pixel point as the corresponding pixel point on the fusion image to obtain the fusion image;
and if the comparison result shows that the distance is not larger than the gray value of the target background pixel point, determining the pixel point with the small gray value in the first pixel point and the second pixel point as the corresponding pixel point on the fusion image, and obtaining the fusion image.
6. The method of any of claims 1-5, wherein the target image set further comprises: a plurality of initial label images and a plurality of fusion label images, the method further comprising:
labeling each initial image in the initial image set to obtain an initial label image corresponding to each initial image;
labeling each fusion image in the target image set according to each initial label image to obtain the fusion label image.
7. The method according to claim 6, wherein labeling each fused image in the target image set according to each initial label image to obtain the fused label image includes:
Determining a first initial image and a second initial image corresponding to any one of the fusion images in the target image set;
determining a first initial label image corresponding to the first initial image and a second initial label image corresponding to the second initial image from the plurality of initial label images;
determining a source label image of each pixel point on the fusion image from the first initial label image or the second initial label image according to the gray value source of each pixel point on the fusion image;
and labeling each pixel point of the fusion image according to the source label image of each pixel point on the fusion image to obtain the fusion label image.
8. An image segmentation method, characterized in that the segmentation method comprises:
acquiring an image to be segmented; the image to be segmented comprises weld bead defects;
inputting the image to be segmented into a segmentation network for defect segmentation to obtain a segmentation result; the segmentation network is trained based on a target image set, wherein the target image set comprises initial images in an initial image set and fusion images obtained by fusing pixel points in every two initial images in the initial image set at a pixel level;
wherein the fusion image obtained by fusing the pixel points in every two initial images in the initial image set at a pixel level is obtained through the following steps:
performing difference value operation on gray values of first pixel points on the first initial image and gray values of corresponding second pixel points on the second initial image according to the first initial image and the second initial image in each two initial images to obtain distances between the first pixel points on the first initial image and the corresponding second pixel points on the second initial image;
and selecting any one gray value of the first pixel point and the gray value of the second pixel point to be assigned to the corresponding pixel point on the fusion image by comparing the distance with a preset gray threshold value, or assigning the maximum gray value of the first pixel point and the gray value of the second pixel point to the corresponding pixel point on the fusion image, or assigning the minimum gray value of the first pixel point and the gray value of the second pixel point to the corresponding pixel point on the fusion image to obtain the fusion image.
9. A method of training an image segmentation model, the method comprising:
acquiring a target image set; the target image set comprises initial images in an initial image set and fusion images obtained by fusing pixel points in every two initial images in the initial image set at a pixel level;
training an initial segmentation model according to the target image set to obtain an image segmentation model;
wherein the fusion image obtained by fusing the pixel points in every two initial images in the initial image set at a pixel level is obtained through the following steps:
performing difference value operation on gray values of first pixel points on the first initial image and gray values of corresponding second pixel points on the second initial image according to the first initial image and the second initial image in each two initial images to obtain distances between the first pixel points on the first initial image and the corresponding second pixel points on the second initial image;
and selecting any one gray value of the first pixel point and the gray value of the second pixel point to be assigned to the corresponding pixel point on the fusion image by comparing the distance with a preset gray threshold value, or assigning the maximum gray value of the first pixel point and the gray value of the second pixel point to the corresponding pixel point on the fusion image, or assigning the minimum gray value of the first pixel point and the gray value of the second pixel point to the corresponding pixel point on the fusion image to obtain the fusion image.
10. An image set processing apparatus, characterized in that the image set processing apparatus comprises:
the acquisition module is used for acquiring an initial image set; the initial image set includes a plurality of weld bead defect images;
the fusion module is used for carrying out pixel-level fusion on pixel points in every two initial images in the initial image set to obtain a fusion image set, and integrating the fusion image set and the initial image set to generate a target image set; the target image set comprises a fused image after fusion and an image in the initial image set, and is used for training an image segmentation model;
the fusion module is specifically configured to perform a difference operation on a gray value of each first pixel point on the first initial image and a gray value of each corresponding second pixel point on the second initial image with respect to the first initial image and the second initial image in each two initial images, so as to obtain a distance between each first pixel point on the first initial image and each corresponding second pixel point on the second initial image; and selecting any one gray value of the first pixel point and the gray value of the second pixel point to be assigned to the corresponding pixel point on the fusion image by comparing the distance with a preset gray threshold value, or assigning the maximum gray value of the first pixel point and the gray value of the second pixel point to the corresponding pixel point on the fusion image, or assigning the minimum gray value of the first pixel point and the gray value of the second pixel point to the corresponding pixel point on the fusion image to obtain the fusion image.
11. An image segmentation apparatus, characterized in that the image segmentation apparatus comprises:
the acquisition module is used for acquiring the image to be segmented; the image to be segmented comprises weld bead defects;
the segmentation module is used for inputting the image to be segmented into a segmentation network for defect segmentation to obtain a segmentation result; the segmentation network is trained based on a target image set, wherein the target image set comprises initial images in an initial image set and fusion images obtained by fusing pixel points in every two initial images in the initial image set at a pixel level;
wherein the fusion image obtained by fusing the pixel points in every two initial images in the initial image set at a pixel level is obtained through the following steps:
performing difference value operation on gray values of first pixel points on the first initial image and gray values of corresponding second pixel points on the second initial image according to the first initial image and the second initial image in each two initial images to obtain distances between the first pixel points on the first initial image and the corresponding second pixel points on the second initial image;
And selecting any one gray value of the first pixel point and the gray value of the second pixel point to be assigned to the corresponding pixel point on the fusion image by comparing the distance with a preset gray threshold value, or assigning the maximum gray value of the first pixel point and the gray value of the second pixel point to the corresponding pixel point on the fusion image, or assigning the minimum gray value of the first pixel point and the gray value of the second pixel point to the corresponding pixel point on the fusion image to obtain the fusion image.
12. A training device for an image segmentation model, the training device comprising:
the acquisition module is used for acquiring a target image set; the target image set comprises initial images in an initial image set and fusion images obtained by fusing pixel points in every two initial images in the initial image set at a pixel level;
the training module is used for training the initial segmentation model according to the target image set to obtain an image segmentation model;
wherein the fusion image obtained by fusing the pixel points in every two initial images in the initial image set at a pixel level is obtained through the following steps:
Performing difference value operation on gray values of first pixel points on the first initial image and gray values of corresponding second pixel points on the second initial image according to the first initial image and the second initial image in each two initial images to obtain distances between the first pixel points on the first initial image and the corresponding second pixel points on the second initial image;
and selecting any one gray value of the first pixel point and the gray value of the second pixel point to be assigned to the corresponding pixel point on the fusion image by comparing the distance with a preset gray threshold value, or assigning the maximum gray value of the first pixel point and the gray value of the second pixel point to the corresponding pixel point on the fusion image, or assigning the minimum gray value of the first pixel point and the gray value of the second pixel point to the corresponding pixel point on the fusion image to obtain the fusion image.
13. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 9 when the computer program is executed.
14. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 9.