CN116543267A - Image set processing method, image segmentation method, apparatus, device, and storage medium - Google Patents


Info

Publication number
CN116543267A
Authority
CN
China
Prior art keywords: image, initial, image set, images, fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310807046.3A
Other languages
Chinese (zh)
Other versions
CN116543267B (en)
Inventor
吴凯
陈晓艺
江冠南
王智玉
Current Assignee
Contemporary Amperex Technology Co Ltd
Original Assignee
Contemporary Amperex Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Contemporary Amperex Technology Co Ltd filed Critical Contemporary Amperex Technology Co Ltd
Priority to CN202310807046.3A priority Critical patent/CN116543267B/en
Publication of CN116543267A publication Critical patent/CN116543267A/en
Application granted granted Critical
Publication of CN116543267B publication Critical patent/CN116543267B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 7/00 Image analysis
                    • G06T 7/0002 Inspection of images, e.g. flaw detection
                        • G06T 7/0004 Industrial image inspection
            • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V 10/00 Arrangements for image or video recognition or understanding
                    • G06V 10/20 Image preprocessing
                        • G06V 10/26 Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
                        • G06V 10/30 Noise filtering
                    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
                        • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA], independent component analysis [ICA] or self-organising maps [SOM]; blind source separation
                            • G06V 10/774 Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
                            • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
                • G06V 20/00 Scenes; scene-specific elements
                    • G06V 20/70 Labelling scene content, e.g. deriving syntactic or semantic representations
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
        • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
            • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
                • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
                    • Y02P 90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to an image set processing method, an image segmentation method, an apparatus, a device, and a storage medium. The method obtains a target image set by acquiring an initial image set, which comprises a plurality of weld bead defect images, and performing pixel-level fusion on every two initial images in it. By fusing the initial images, the method augments the initial image set with the fused images, which increases the number of samples for training an image segmentation model and thus improves the training effect to a certain extent. Moreover, because the fusion is performed at the pixel level, edge noise in the fused images can be reduced to a certain extent, which improves their quality, accelerates the convergence of the image segmentation model during training, and further improves the training effect.

Description

Image set processing method, image segmentation method, apparatus, device, and storage medium
Technical Field
The present disclosure relates to the field of battery detection technologies, and in particular to an image set processing method, an image segmentation method, an apparatus, a device, and a storage medium.
Background
Sealing-nail welding is an indispensable link in the production of power batteries, and whether the weld meets the standard directly affects battery safety. The welding area of the sealing nail is called the weld bead. Owing to variations in temperature, environment, and other factors during welding, the weld bead often carries tiny defects such as pinholes, burst points, burst lines (cold welds), missed welds, and melted beads. These tiny defects directly affect the welding quality of the sealing nail, so segmenting weld bead defects is very important.
Currently, weld bead defects are usually segmented as follows: a segmentation network is trained in advance on a sample set of weld bead defect images, so that the trained network is able to segment the defects in a weld bead defect image; the trained network is then used to segment the defects in weld bead images.
However, the sample images available for training such a segmentation network are of low quality, which results in a poor training effect for the network.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an image set processing method, an image segmentation apparatus, a device, and a storage medium that can improve training effects.
In a first aspect, the present application provides an image set processing method. The method comprises the following steps:
acquiring an initial image set; the initial image set includes a plurality of weld bead defect images;
carrying out pixel-level fusion on every two initial images in the initial image set to obtain a target image set; the target image set comprises the fused images and the images in the initial image set, and is used for training an image segmentation model.
According to the image set processing method, the initial images in the initial image set are fused, so that the initial image set is amplified according to the fused images, the number of samples for training the image segmentation model is increased, and the training effect of the training image segmentation model can be improved to a certain extent; on the other hand, the method performs pixel-level fusion on the initial image, so that edge noise of the fused image can be reduced to a certain extent, the quality of the fused image is improved, the convergence rate of the image segmentation model in the training process is accelerated, and the training effect of the training image segmentation model can be improved to a certain extent.
In one embodiment, the performing pixel-level fusion on each two initial images in the initial image set to obtain a target image set includes:
dividing the initial image set into a plurality of candidate image sets according to the working condition types of each initial image in the initial image set;
and carrying out pixel-level fusion on every two initial images in each candidate image set to obtain the target image set.
According to the image fusion method provided by the embodiment of the application, the initial images under the same working condition are fused, and because the image backgrounds under the same working condition are highly consistent, the fusion efficiency can be improved based on the fusion of the initial images under the same working condition, the fused images fused by the images under the same working condition can be closer to the real condition, and the quality of the fused images can be improved to a certain extent, so that the quality of a target image set is improved.
In one embodiment, the pixel-level fusion is performed on each two initial images in each candidate image set to obtain the fused image, which includes:
determining the distance between each first pixel point on the first initial image and each corresponding second pixel point on the second initial image according to the first initial image and the second initial image in any candidate image set;
And according to the distance between the first pixel point and the second pixel point and a preset gray threshold, carrying out pixel-level fusion on the first initial image and the second initial image to obtain fusion images corresponding to the first initial image and the second initial image.
The image fusion method provided by the embodiment of the invention realizes the pixel-level fusion of the first initial image and the second initial image, reserves the detail information in each image of the first initial image and the second initial image, and can reduce the edge noise of the fused image to a certain extent, thereby improving the quality of the fused image.
In one embodiment, the performing pixel-level fusion on the first initial image and the second initial image according to the distance between the first pixel point and the second pixel point and the preset gray threshold value to obtain a fused image corresponding to the first initial image and the second initial image includes:
and under the condition that the distance is not greater than the preset gray threshold value, assigning the gray value of the first pixel point or the gray value of the second pixel point to the corresponding pixel point on the fusion image to obtain the fusion image.
In one embodiment, the performing pixel-level fusion on the first initial image and the second initial image according to the distance between the first pixel point and the second pixel point and the preset gray threshold value to obtain a fused image corresponding to the first initial image and the second initial image includes:
under the condition that the distance is greater than the preset gray threshold, comparing the distance with the gray value of a target background pixel point, and determining the pixel point corresponding to the distance on the fused image according to the comparison result, to obtain the fused image; the target background pixel point is determined according to the background pixel points of the first initial image or the second initial image.
In one embodiment, the determining, according to the comparison result, the pixel point corresponding to the distance on the fused image, to obtain the fused image includes:
if the comparison result shows that the distance is greater than the gray value of the target background pixel point, determining the pixel point with the greater gray value among the first pixel point and the second pixel point as the corresponding pixel point on the fused image, to obtain the fused image;
and if the comparison result shows that the distance is not greater than the gray value of the target background pixel point, determining the pixel point with the smaller gray value among the first pixel point and the second pixel point as the corresponding pixel point on the fused image, to obtain the fused image.
The image fusion method provided by the embodiment of the invention realizes the pixel-level fusion of the first initial image and the second initial image, reserves the detail information in each image of the first initial image and the second initial image, and can reduce the edge noise of the fused image to a certain extent, thereby improving the quality of the fused image.
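Reading the "distance" as the absolute gray-value difference between corresponding pixels (an assumption; the claims do not fix the metric), the fusion rule of the embodiments above can be sketched with NumPy:

```python
import numpy as np

def fuse_pair(img_a, img_b, gray_threshold, bg_gray):
    """Sketch of the claimed pixel-level fusion rule.
    img_a, img_b: grayscale images of equal shape (uint8);
    gray_threshold: the preset gray threshold;
    bg_gray: gray value of the target background pixel point,
    determined from the background of either initial image."""
    a = img_a.astype(np.int16)
    b = img_b.astype(np.int16)
    dist = np.abs(a - b)  # per-pixel "distance" (assumed metric)
    fused = np.where(
        dist <= gray_threshold, a,     # close pixels: keep either value (here: img_a)
        np.where(dist > bg_gray,
                 np.maximum(a, b),     # distance above background gray: brighter pixel
                 np.minimum(a, b)))    # otherwise: darker pixel
    return fused.astype(np.uint8)
```

The choice of `img_a` in the "close" branch reflects the claim's "gray value of the first pixel point or the second pixel point"; either works, since the two values differ by at most the threshold there.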
In one embodiment, the target image set further comprises: a plurality of initial tag images and a plurality of fused tag images, the method further comprising:
labeling each initial image in the initial image set to obtain an initial label image corresponding to each initial image;
labeling each fusion image in the target image set according to each initial label image to obtain the fusion label image.
The labeling method provided by this embodiment of the application labels the fused image with reference to the initial label images, which can improve labeling efficiency and labeling accuracy to a certain extent, thereby improving the quality of the fused label images in the target image set.
In one embodiment, the labeling each fused image in the target image set according to each initial label image to obtain the fused label image includes:
determining a first initial image and a second initial image corresponding to any one of the fusion images in the target image set;
determining a first initial label image corresponding to the first initial image and a second initial label image corresponding to the second initial image from the plurality of initial label images;
determining a source label image of each pixel point on the fusion image from the first initial label image or the second initial label image according to the gray value source of each pixel point on the fusion image;
and labeling each pixel point of the fusion image according to the source label image of each pixel point on the fusion image to obtain the fusion label image.
The labeling method provided by this embodiment of the application labels the fused image with reference to the initial label images, which can improve labeling efficiency and labeling accuracy to a certain extent, thereby improving the quality of the fused label images in the target image set.
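A minimal sketch of this source-tracking labeling step (the helper name is hypothetical; it assumes the fused gray value equals one of the two source values at every pixel, with ties resolved toward the first image):

```python
import numpy as np

def fuse_labels(fused, img_a, img_b, label_a, label_b):
    """Each pixel of the fused label image is copied from the label
    image of whichever initial image supplied that pixel's gray value."""
    from_a = (fused == img_a)  # gray value sourced from the first image
    return np.where(from_a, label_a, label_b)
```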
In a second aspect, the present application further provides an image segmentation method. The method comprises the following steps:
acquiring an image to be segmented; the image to be segmented comprises weld bead defects;
inputting the image to be segmented into a segmentation network for defect segmentation to obtain a segmentation result; the segmentation network is trained based on a target image set, wherein the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level.
According to the image segmentation method, the segmentation network is obtained based on the amplified target image set, so that the training effect of the segmentation network is excellent, and the accuracy of image segmentation of the segmentation network can be improved to a certain extent.
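The inference step amounts to a single forward pass through the trained segmentation network; a hedged sketch (the model interface and threshold are assumptions, not specified by the patent):

```python
def segment_defects(image, segmentation_model, threshold=0.5):
    """Run the trained segmentation network on an image to be segmented
    and binarize its per-pixel defect probabilities into a defect mask."""
    probability_map = segmentation_model(image)  # per-pixel defect probability
    return probability_map > threshold
```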
In a third aspect, the present application further provides a training method of an image segmentation model. The method comprises the following steps:
acquiring a target image set; the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level;
and training the initial segmentation model according to the target image set to obtain an image segmentation model.
The training method provided by the embodiment of the application is based on the amplified target image set for training, so that the training effect can be improved to a certain extent, and the segmentation accuracy of the segmentation network is further improved.
In a fourth aspect, the present application further provides an image set processing apparatus. The device comprises:
the acquisition module is used for acquiring an initial image set; the initial image set includes a plurality of weld bead defect images;
the fusion module is used for carrying out pixel-level fusion on every two initial images in the initial image set to obtain a target image set; the target image set comprises the fused images and the images in the initial image set, and is used for training an image segmentation model.
In a fifth aspect, the present application further provides an image segmentation apparatus. The image segmentation apparatus includes:
the acquisition module is used for acquiring the image to be segmented; the image to be segmented comprises weld bead defects;
the segmentation module is used for inputting the image to be segmented into a segmentation network for defect segmentation to obtain a segmentation result; the segmentation network is trained based on a target image set, wherein the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level.
In a sixth aspect, the present application further provides a training apparatus for an image segmentation model, the training apparatus including:
the acquisition module is used for acquiring a target image set; the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level;
and the training module is used for training the initial segmentation model according to the target image set to obtain an image segmentation model.
In a seventh aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which when executing the computer program performs the steps of:
acquiring an initial image set; the initial image set includes a plurality of weld bead defect images;
carrying out pixel-level fusion on every two initial images in the initial image set to obtain a target image set; the target image set comprises the fused images and the images in the initial image set, and is used for training an image segmentation model.
In an eighth aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which when executing the computer program performs the steps of:
acquiring an image to be segmented; the image to be segmented comprises weld bead defects;
inputting the image to be segmented into a segmentation network for defect segmentation to obtain a segmentation result; the segmentation network is trained based on a target image set, wherein the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level.
In a ninth aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which when executing the computer program performs the steps of:
acquiring a target image set; the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level;
and training the initial segmentation model according to the target image set to obtain an image segmentation model.
In a tenth aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
acquiring an initial image set; the initial image set includes a plurality of weld bead defect images;
carrying out pixel-level fusion on every two initial images in the initial image set to obtain a target image set; the target image set comprises the fused images and the images in the initial image set, and is used for training an image segmentation model.
In an eleventh aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
acquiring an image to be segmented; the image to be segmented comprises weld bead defects;
inputting the image to be segmented into a segmentation network for defect segmentation to obtain a segmentation result; the segmentation network is trained based on a target image set, wherein the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level.
In a twelfth aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
acquiring a target image set; the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level;
and training the initial segmentation model according to the target image set to obtain an image segmentation model.
In a thirteenth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of:
acquiring an initial image set; the initial image set includes a plurality of weld bead defect images;
carrying out pixel-level fusion on every two initial images in the initial image set to obtain a target image set; the target image set comprises the fused images and the images in the initial image set, and is used for training an image segmentation model.
In a fourteenth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of:
acquiring an image to be segmented; the image to be segmented comprises weld bead defects;
inputting the image to be segmented into a segmentation network for defect segmentation to obtain a segmentation result; the segmentation network is trained based on a target image set, wherein the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level.
In a fifteenth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of:
acquiring a target image set; the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level;
and training the initial segmentation model according to the target image set to obtain an image segmentation model.
The foregoing is only an overview of the technical solutions of the present application, which may be implemented according to the content of the specification so that the technical means of the application can be understood more clearly. To make the above and other objects, features, and advantages of the present application more comprehensible, a detailed description of the application is given below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to designate like parts throughout the accompanying drawings. In the drawings:
FIG. 1 is an internal block diagram of a computer device in one embodiment;
FIG. 2 is a schematic illustration of a defect image in one embodiment;
FIG. 3 is another schematic illustration of a defect image in one embodiment;
FIG. 4 is a flow chart of an image set processing method in one embodiment;
FIG. 5 is a flowchart of an image set processing method according to another embodiment;
FIG. 6 is a flowchart of an image set processing method according to another embodiment;
FIG. 7 is a flowchart of an image set processing method according to another embodiment;
FIG. 8 is a flowchart of an image set processing method according to another embodiment;
FIG. 9 is a flowchart of an image set processing method according to another embodiment;
FIG. 10 is a flow chart of an image segmentation method in one embodiment;
FIG. 11 is a flow chart of a training method of an image segmentation model in one embodiment;
FIG. 12 is a block diagram showing the structure of an image set processing apparatus in one embodiment;
FIG. 13 is a block diagram showing the structure of an image dividing apparatus in one embodiment;
FIG. 14 is a block diagram of a training apparatus for an image segmentation model in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "comprising" and "having" and any variations thereof in the description and claims of the present application and in the description of the figures above are intended to cover non-exclusive inclusions.
In the description of the embodiments of the present application, the technical terms "first," "second," etc. are used merely to distinguish between different objects and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated, a particular order or a primary or secondary relationship. In the description of the embodiments of the present application, the meaning of "plurality" is two or more unless explicitly defined otherwise.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In the production of lithium batteries, even minor defects can seriously damage product quality. Morphologically, these minor defects appear as follows: 1. in an image with 8K or 16K resolution, some defects occupy only dozens of pixels or even fewer, i.e. the targets are extremely small, as shown in fig. 2; 2. the defect morphologies differ from one another, as shown in fig. 3; 3. the gray values of the images are low, with little contrast against the background. Because of these characteristics, images containing such defects are difficult to acquire on the one hand, and of poor quality on the other, so when they are used as sample images to train an image segmentation model, an image detection model, or another image processing model, the model converges slowly or even fails to converge during training, which harms the training effect: an image segmentation model will have low segmentation accuracy, and an image detection model will have low detection accuracy. The image set processing method of the present application, used for the segmentation, detection, and so on of weld bead defect images, can effectively expand the defect samples and improve model convergence. The following embodiments describe the process in detail.
The image set processing method provided by the embodiments of the application can be applied to the computer device shown in fig. 1. The computer device may be a terminal whose internal structure is shown in fig. 1. It includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. The processor provides computing and control capabilities. The memory includes a non-volatile storage medium, which stores an operating system and a computer program, and an internal memory, which provides an environment for running the operating system and the computer program. The communication interface performs wired or wireless communication with external terminals; the wireless mode can be realized through Wi-Fi, a mobile cellular network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements an image set processing method. The display screen may be a liquid crystal or electronic ink display, and the input device may be a touch layer covering the display screen, keys, a trackball, or a touchpad arranged on the housing of the computer device, or an external keyboard, touchpad, mouse, or the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 1 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, as shown in fig. 4, there is provided an image set processing method, which is described by taking an example that the method is applied to the computer device in fig. 1, and includes the following steps:
S201, acquiring an initial image set; the initial image set includes a plurality of weld bead defect images.
The initial image set may be used to train an image segmentation model, an image detection model, or the like. It may include weld bead defect images of different working condition types or of the same working condition type, and likewise may include weld bead defect images of different defect types or of the same defect type.
In the embodiment of the application, when weld bead defect images are generated in the battery production and manufacturing process, the computer device may connect to a corresponding image acquisition apparatus to collect a large number of weld bead defect images and aggregate them into an image set serving as the initial image set; alternatively, the computer device may download a large number of weld bead defect images from a cloud server or the network and aggregate the obtained images into the initial image set.
S202, fusing every two initial images in the initial image set at the pixel level to obtain a target image set; the target image set includes the fused images and the images in the initial image set, and is used for training the image segmentation model.
In this embodiment, when the computer device obtains the initial image set based on the foregoing steps, it may select any two initial images from the set and fuse them at the pixel level to obtain their fused image, then select any two of the remaining initial images and fuse them likewise, repeating this pairwise fusion until all initial images in the initial image set have been fused. The fused images corresponding to each pair of initial images then form an image set, namely the fused image set. Finally, the fused image set and the initial image set are integrated to form the target image set, which contains both the fused images and the initial images in the initial image set.
Optionally, when the computer device obtains the initial image set based on the foregoing steps, it may select two initial images with the same defect type from the initial image set and fuse them at the pixel level to obtain their fused image, then select two initial images with the same defect type from the remaining initial images and fuse them likewise, repeating this pairwise fusion until initial images of all types in the initial image set have been fused. The fused images corresponding to each pair of initial images then form an image set, namely the fused image set. Finally, the fused image set and the initial image set are integrated to form the target image set, which contains both the fused images and the images in the initial image set.
Alternatively, when the computer device acquires the initial image set based on the foregoing steps, it may fuse every two adjacent initial images in the initial image set at the pixel level; the fused images corresponding to each pair of adjacent initial images then form an image set, namely the fused image set. Finally, the fused image set and the initial image set are integrated to form the target image set, which contains both the fused images and the images in the initial image set.
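The adjacent-pair variant above can be expressed as a short sketch. This is illustrative only, not the patented implementation: the function names are invented, and `fuse_pixel_level` is a runnable stand-in (a simple average) for the pixel-level fusion rule detailed in the later embodiments.

```python
import numpy as np

def fuse_pixel_level(img_a, img_b):
    # Placeholder for the pixel-level fusion of S401-S402; a plain
    # average is used here only so the sketch is runnable.
    return ((img_a.astype(np.int32) + img_b.astype(np.int32)) // 2).astype(np.uint8)

def build_target_set(initial_images):
    # Fuse every two adjacent initial images, then merge the fused
    # images with the initial image set to form the target image set.
    fused = [fuse_pixel_level(initial_images[i], initial_images[i + 1])
             for i in range(0, len(initial_images) - 1, 2)]
    return list(initial_images) + fused
```

With four initial images, two fused images are produced, so the target image set contains six images in total.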
Optionally, after the computer device obtains the target image set, it may use the target image set as a sample image set to train a constructed image segmentation model so that the model acquires the corresponding image segmentation capability; the sample image set may likewise be used to train other constructed task models, such as an image detection model, so that each task model acquires its corresponding task processing capability.
According to the image set processing method, the target image set is obtained by acquiring the initial image set and fusing every two initial images in the initial image set at the pixel level, wherein the initial image set comprises a plurality of weld bead defect images. On the one hand, fusing the initial images amplifies the initial image set with the fused images, increasing the number of samples for training the image segmentation model and thus improving the training effect to a certain extent; on the other hand, because the fusion is performed at the pixel level, edge noise in the fused images can be reduced to a certain extent, which improves their quality, accelerates the convergence of the image segmentation model during training, and further improves the training effect.
In one embodiment, as shown in fig. 5, a pixel-level image fusion method is provided, that is, the step S202 "performing pixel-level fusion on each two initial images in the initial image set to obtain a target image set", including:
S301, dividing the initial image set into a plurality of candidate image sets according to the working condition types of the initial images in the initial image set.
The initial images are industrial image data, which, compared with natural scenes, have an obvious characteristic: a relatively fixed and uniform background. The working condition type of an initial image therefore characterizes the background environment in which the weld bead defect appears; in general, image backgrounds under the same working condition are highly consistent.
In this embodiment, when the computer device obtains the initial image set based on the foregoing steps, it may first determine the working condition type of each initial image in the initial image set, and then group the initial images belonging to the same working condition into one image set, forming a plurality of candidate image sets under different working conditions.
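The partition step can be sketched as a simple grouping, under the assumption that each initial image carries a known working condition label (the function and parameter names here are illustrative, not from the patent):

```python
from collections import defaultdict

def split_by_condition(initial_images, condition_types):
    # Group initial images so that each candidate set holds only
    # images captured under the same working condition.
    candidate_sets = defaultdict(list)
    for image, condition in zip(initial_images, condition_types):
        candidate_sets[condition].append(image)
    return dict(candidate_sets)
```

Each value of the returned dictionary is one candidate image set, ready for the pairwise fusion of S302.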
S302, fusing every two initial images in each candidate image set at a pixel level to obtain a target image set.
In this embodiment, when the computer device obtains multiple candidate image sets based on the foregoing steps, it may, for any candidate image set, select any two initial images and fuse them at the pixel level to obtain their fused image, then select any two of the remaining initial images and fuse them likewise, repeating this pairwise fusion until all initial images in the candidate image set have been fused. The same fusion step is then performed on the other candidate image sets until all candidate image sets are fused. Finally, the fused images from each candidate image set and the initial image set are integrated together to form the target image set.
Optionally, when the computer device obtains multiple candidate image sets based on the foregoing steps, it may, for any candidate image set, select two initial images with the same defect type and fuse them at the pixel level to obtain their fused image, then select two initial images with the same defect type from the remaining initial images and fuse them likewise, repeating this pairwise fusion until initial images of all types in the candidate image set have been fused. The same fusion step is then performed on the other candidate image sets until all candidate image sets are fused. Finally, the fused images from each candidate image set and the initial image set are integrated together to form the target image set.
Optionally, when the computer device acquires multiple candidate image sets based on the foregoing steps, it may, for any candidate image set, fuse every two adjacent initial images in the candidate image set at the pixel level, then perform the same fusion step on the other candidate image sets until all candidate image sets are fused. Finally, the fused images from each candidate image set and the initial image set are integrated together to form the target image set.
According to the image fusion method provided by the embodiment of the application, initial images under the same working condition are fused. Because image backgrounds under the same working condition are highly consistent, fusing initial images within one working condition improves fusion efficiency, and the resulting fused images are closer to real conditions; the quality of the fused images, and hence of the target image set, can thus be improved to a certain extent.
Further, as shown in fig. 6, a pixel-level fusion method is provided, that is, in S302, the method of "performing pixel-level fusion on each two initial images in each candidate image set to obtain a fused image" includes:
S401, determining the distance between each first pixel point on the first initial image and each corresponding second pixel point on the second initial image according to the first initial image and the second initial image in any candidate image set.
The position of a first pixel point on the first initial image is the same as the position of its corresponding second pixel point on the second initial image; this positional identity defines the correspondence between first and second pixel points.
This embodiment of the application relates to fusing every two initial images in each candidate image set. Taking any candidate image set as an example, any two initial images are selected from the candidate image set and designated the first initial image and the second initial image. A difference operation is then performed between the gray value of each first pixel point on the first initial image and the gray value of its corresponding second pixel point on the second initial image, giving the distance between each first pixel point and its corresponding second pixel point.
S402, performing pixel-level fusion on the first initial image and the second initial image according to the distance between the first pixel point and the second pixel point and a preset gray threshold value, and obtaining fusion images corresponding to the first initial image and the second initial image.
In this embodiment, when the computer device obtains the distance between a first pixel point and its second pixel point, it may compare the distance with the preset gray threshold. If the distance is greater than the preset gray threshold, the gray values of the two pixel points differ considerably; in this case, a target pixel point can be determined from the first and second pixel points according to the result of comparing the target defect gray value with the background gray value, and the gray value of the target pixel point is then assigned to the pixel point at the corresponding position of the fused image. Optionally, the computer device may instead determine the target pixel point from the first and second pixel points according to the result of comparing the distance with the background gray value, and then assign the gray value of the target pixel point to the corresponding pixel point of the fused image.
Optionally, the comparison result of the target defect gray value and the background gray value includes a first comparison result, obtained by comparing the gray value of the first pixel point with the background gray value of the first initial image, and a second comparison result, obtained by comparing the gray value of the second pixel point with the background gray value of the second initial image. On this basis, a method for determining the target pixel point from the first and second pixel points according to this comparison result is provided: compare the gray value of the first pixel point with the background gray value of the first initial image to obtain the first comparison result; compare the gray value of the second pixel point with the background gray value of the second initial image to obtain the second comparison result; and finally select one of the two pixel points as the target pixel point according to the two comparison results. Specifically, if the first comparison result indicates that the gray value of the first pixel point is greater than the background gray value of the first initial image, and the second comparison result indicates that the gray value of the second pixel point is greater than the background gray value of the second initial image, the pixel point with the larger gray value is selected from the first and second pixel points as the target pixel point.
If the first comparison result indicates that the gray value of the first pixel point is not greater than the background gray value in the first initial image where the first pixel point is located, and if the second comparison result indicates that the gray value of the second pixel point is not greater than the background gray value in the second initial image where the second pixel point is located, selecting a pixel point with a small gray value from the first pixel point and the second pixel point as a target pixel point.
If the distance between the first pixel point and the second pixel point is not greater than the preset gray threshold, the difference between their gray values is small; in this case, the gray value of either the first or the second pixel point can be assigned to the corresponding pixel point on the fused image, yielding the fused image.
The image fusion method provided by the embodiment of the application achieves pixel-level fusion of the first initial image and the second initial image, retains the detail information of both images, and can reduce the edge noise of the fused image to a certain extent, thereby improving the quality of the fused image.
In one embodiment, when the distance between the first pixel point and the second pixel point is greater than the preset gray threshold, the present application provides a method for fusing the first and second initial images according to the comparison result between the distance and the background gray value. That is, when the computer device executes step S402 ("performing pixel-level fusion on the first initial image and the second initial image according to the distance between the first pixel point and the second pixel point and a preset gray threshold, to obtain the fused image corresponding to the first initial image and the second initial image"), it specifically performs the following step:
Under the condition that the distance is larger than a preset gray threshold value, determining a comparison result between the distance and a target background pixel point, and determining a pixel point corresponding to the distance on the fusion image according to the comparison result to obtain the fusion image; the target background pixel point is determined according to the background pixel point of the first initial image or the second initial image.
In this embodiment of the present application, when the computer device determines the distance between the first and second pixel points, it may compare the distance with the background pixel point in the first initial image or in the second initial image to obtain a comparison result. If the comparison result indicates that the distance is greater than the gray value of the background pixel point, the fusion target is a high gray value; in this case, the pixel point with the larger gray value among the first and second pixel points is determined as the corresponding pixel point on the fused image, i.e., its gray value is assigned to the corresponding pixel point of the fused image, yielding the fused image. If the comparison result indicates that the distance is not greater than the gray value of the background pixel point, the fusion target is a low gray value; in this case, the pixel point with the smaller gray value is determined as the corresponding pixel point on the fused image, i.e., its gray value is assigned to the corresponding pixel point of the fused image, yielding the fused image.
The fusion method described in the above embodiment is exemplified. Assuming that the first initial image is image A and the second initial image is image B, the fusion can be determined by the following relation (1):

M_i = A_i or B_i (randomly chosen), if |A_i - B_i| <= T;
M_i = max(A_i, B_i), if |A_i - B_i| > T and S = 1;
M_i = min(A_i, B_i), if |A_i - B_i| > T and S = 0;    (1)

wherein M_i represents the gray value of the i-th pixel point of the fused image M; A_i and B_i represent the gray values of the i-th pixel points in image A and image B used for fusion, respectively; T is the preset gray threshold; and S represents the comparison result between the distance and the target background pixel point, S = 1 indicating that the fusion target is a high gray value and S = 0 indicating that the fusion target is a low gray value. When the gray values of the i-th pixel points of image A and image B differ by no more than the preset gray threshold T, the gray value of the i-th pixel point of the fused image M is randomly assigned the gray value of the i-th pixel point of image A or image B. When they differ by more than the preset gray threshold T, the larger or the smaller of the two gray values is assigned to the i-th pixel point of M according to S.
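A possible NumPy rendering of relation (1) follows. This is a sketch, not the patented implementation: it assumes the high-/low-gray fusion target (the factor S) has already been decided by comparing the distance with the target background pixel, and the function name is invented for illustration.

```python
import numpy as np

def fuse_images(img_a, img_b, threshold, high_gray_target):
    # Pixel-level fusion per relation (1): where the gray difference is
    # within the threshold, copy the value (randomly) from A or B;
    # otherwise take the brighter pixel for a high-gray fusion target
    # (S = 1), or the darker pixel for a low-gray fusion target (S = 0).
    a = img_a.astype(np.int32)
    b = img_b.astype(np.int32)
    close = np.abs(a - b) <= threshold
    pick_a = np.random.rand(*a.shape) < 0.5   # random source where close
    extreme = np.maximum(a, b) if high_gray_target else np.minimum(a, b)
    fused = np.where(close, np.where(pick_a, a, b), extreme)
    return fused.astype(np.uint8)
```

Where the two gray values agree to within the threshold, either source gives essentially the same pixel, so the random choice has little visual effect; where they disagree strongly, the defect (high- or low-gray) wins over the background.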
Optionally, when the computer device fuses to obtain the fused image based on any one of the fusion methods, the first initial image and the second initial image associated with the fused image may be recorded and stored in association with the fused image, so that when the fused image is marked later, the corresponding first initial image and second initial image can be quickly found according to the fused image.
The image fusion method provided by the embodiment of the application achieves pixel-level fusion of the first initial image and the second initial image, retains the detail information of both images, and can reduce the edge noise of the fused image to a certain extent, thereby improving the quality of the fused image.
When the image segmentation model is trained with the target image set obtained in any of the above embodiments, the initial images and fused images in the target image set need to be labeled to obtain the corresponding label images, and the image segmentation model is then trained on the images and their corresponding label images. The target image set therefore further includes a plurality of initial label images and a plurality of fusion label images. On this basis, the present application further provides a method for labeling each image in the target image set, as shown in fig. 7, including the following steps:
S501, labeling each initial image in the initial image set to obtain an initial label image corresponding to each initial image.
In the embodiment of the application, the defects on each initial image in the initial image set can be marked, so that the marked initial image, namely the initial label image, is obtained. Optionally, after labeling each initial image, the computer device may further construct an association relationship between each initial image and the corresponding initial label image, and record and store the association relationship, so as to quickly obtain the corresponding initial label image according to the initial image.
S502, labeling each fusion image in the target image set according to the initial label image corresponding to each initial image to obtain the fusion label image.
In the embodiment of the application, once the computer device has acquired the initial label image corresponding to each initial image and each fused image in the target image set, each fused image can be labeled. In the labeling process, the two initial images from which the fused image was fused are determined first, then the two initial label images corresponding to those initial images; finally, the fused image is labeled with reference to the labels in those initial label images, yielding the labeled fusion label image.
The labeling method described in the foregoing embodiment is illustrated. Assuming that the first initial image is image A with initial label image A′, and that the second initial image is image B with initial label image B′, the fusion label image M′ corresponding to the fused image M is determined by the following relation (2):

M′_i = A′_i, if the gray value of the i-th pixel point of M comes from image A;
M′_i = B′_i, if the gray value of the i-th pixel point of M comes from image B;    (2)

wherein M′_i represents the label of the i-th pixel point of the fusion label image M′, and A′_i and B′_i represent the labels of the i-th pixel points in label images A′ and B′ used for fusion, respectively. When the gray value of the i-th pixel point of the fused image M comes from image A, the label of the i-th pixel point of A′ is assigned to the i-th pixel point of M′; when it comes from image B, the label of the i-th pixel point of B′ is assigned to the i-th pixel point of M′.
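Relation (2) requires knowing, for every pixel, which initial image the fused gray value came from. One way to sketch this (with illustrative names, under the same assumptions as the relation (1) sketch; not the patented implementation) is to have the fusion step also return a per-pixel provenance mask and then transfer labels through it:

```python
import numpy as np

def fuse_with_provenance(img_a, img_b, threshold, high_gray_target):
    # Same fusion rule as relation (1), but also records, per pixel,
    # whether the fused gray value came from image A (True) or B (False),
    # so that labels can later be transferred by relation (2).
    a = img_a.astype(np.int32)
    b = img_b.astype(np.int32)
    close = np.abs(a - b) <= threshold
    random_pick_a = np.random.rand(*a.shape) < 0.5
    extreme_from_a = (a >= b) if high_gray_target else (a <= b)
    from_a = np.where(close, random_pick_a, extreme_from_a)
    fused = np.where(from_a, a, b).astype(np.uint8)
    return fused, from_a

def propagate_labels(from_a, label_a, label_b):
    # Relation (2): each fused pixel inherits the label of the initial
    # label image its gray value was taken from.
    return np.where(from_a, label_a, label_b)
```

The provenance mask plays the role of the recorded association between the fused image and its two source images mentioned above.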
The labeling method provided by the embodiment of the application labels the fused image with reference to the initial label images, which can improve labeling efficiency and labeling accuracy to a certain extent, thereby improving the quality of the fusion label images in the target image set.
Further, as shown in fig. 8, an implementation manner of the labeling is provided, that is, the step of S502 "labeling each fusion image in the target image set according to each initial label image to obtain a fusion label image", includes:
S601, determining, for any fused image in the target image set, the first initial image and the second initial image corresponding to the fused image.
In this embodiment, since each pixel point on the fused image was derived from either the first or the second initial image during fusion, the computer device can, before labeling the fused image, determine the initial images corresponding to it, i.e., the first initial image and the second initial image. Specifically, the determination can be made according to the association relationship among the fused image, the first initial image, and the second initial image, which was recorded and stored when the two initial images were fused.
S602, determining a first initial label image corresponding to the first initial image and a second initial label image corresponding to the second initial image from the plurality of initial label images.
In the embodiment of the application, when the computer device acquires the first initial image, the first initial label image corresponding to the first initial image can be found from the plurality of initial label images according to the association relationship between the initial image and the initial label image; when the computer device acquires the second initial image, the second initial label image corresponding to the second initial image can be found from the plurality of initial label images according to the association relationship between the initial image and the initial label image.
S603, determining the source label image of each pixel point on the fusion image from the first initial label image or the second initial label image according to the gray value source of each pixel point on the fusion image.
The gray value source of each pixel point on the fused image can be determined to be the first initial image or the second initial image, and then the corresponding first initial label image is further determined according to the first initial image or the corresponding second initial label image is further determined according to the second initial image, namely the source label image corresponding to each pixel point on the fused image is determined.
S604, labeling each pixel point of the fusion image according to the source label image of each pixel point on the fusion image to obtain the fusion label image.
Because each pixel point on the source label image has already been labeled, each pixel point on the fused image can be labeled by reference to the labeling of its source label image, yielding the fusion label image. Specifically, the gray value of the pixel point on the source label image can be assigned to the pixel point at the corresponding position on the fusion label image to realize the labeling.
The labeling method provided by the embodiment of the application labels the fused image with reference to the initial label images, which can improve labeling efficiency and labeling accuracy to a certain extent, thereby improving the quality of the fusion label images in the target image set.
Combining all the embodiments described above, the present application further provides an image set processing method, as shown in fig. 9, including:
S701, acquiring an initial image set; the initial image set includes a plurality of weld bead defect images.
S702, dividing the initial image set into a plurality of candidate image sets according to the working condition types of the initial images in the initial image set.
S703, determining, for the first initial image and the second initial image in any candidate image set, a distance between each first pixel point on the first initial image and each corresponding second pixel point on the second initial image.
S704, determining whether the distance is not greater than a preset gray threshold, if the distance is not greater than the preset gray threshold, executing step S705, and if the distance is greater than the preset gray threshold, executing step S706.
And S705, assigning the gray value of the first pixel point or the gray value of the second pixel point to the corresponding pixel point on the fusion image to obtain the fusion image.
S706, determining a comparison result between the distance and the target background pixel point, if the comparison result indicates that the distance is greater than the gray value of the target background pixel point, executing step S707, and if the comparison result indicates that the distance is not greater than the gray value of the target background pixel point, executing step S708.
And S707, determining the pixel point with the larger gray value among the first pixel point and the second pixel point as the corresponding pixel point on the fusion image, and obtaining the fusion image.
And S708, determining the pixel point with the small gray value in the first pixel point and the second pixel point as the corresponding pixel point on the fusion image, and obtaining the fusion image.
S709, labeling each initial image in the initial image set to obtain an initial label image corresponding to each initial image.
S710, determining a first initial image and a second initial image corresponding to the fused image aiming at any fused image in the target image set.
S711, a first initial label image corresponding to the first initial image and a second initial label image corresponding to the second initial image are determined from the plurality of initial label images.
S712, determining the source label image of each pixel point on the fusion image from the first initial label image or the second initial label image according to the gray value source of each pixel point on the fusion image.
S713, labeling each pixel point of the fusion image according to the source label image of each pixel point on the fusion image to obtain the fusion label image.
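The steps S701 to S713 can be condensed into one illustrative sketch. Function and parameter names are assumptions, the pairing is the adjacent-pair variant, and the random tie-breaking follows relation (1):

```python
import numpy as np
from collections import defaultdict

def process_image_set(images, conditions, labels, threshold=30, high_gray=True):
    # S701-S713 sketch: split by working condition, fuse adjacent pairs
    # while recording pixel provenance, then derive fusion label images.
    groups = defaultdict(list)
    for img, cond, lab in zip(images, conditions, labels):
        groups[cond].append((img, lab))
    target_images, target_labels = list(images), list(labels)
    for members in groups.values():
        for (img_a, lab_a), (img_b, lab_b) in zip(members[::2], members[1::2]):
            a, b = img_a.astype(np.int32), img_b.astype(np.int32)
            close = np.abs(a - b) <= threshold
            from_a = np.where(close, np.random.rand(*a.shape) < 0.5,
                              (a >= b) if high_gray else (a <= b))
            target_images.append(np.where(from_a, a, b).astype(np.uint8))
            target_labels.append(np.where(from_a, lab_a, lab_b))
    return target_images, target_labels
```

The outputs are the amplified target image set and the matching label images (initial label images plus fusion label images).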
The above steps are described in the foregoing, and the detailed description is referred to the foregoing description, which is not repeated here.
The above image set processing method makes full use of the characteristics of industrial defect images to design a simple and efficient data amplification method. It can effectively expand defect samples and improve the convergence of the image segmentation model; compared with the prior art, it preserves defect details as much as possible without introducing additional boundary noise, so that each image in the amplified image set is more natural to a certain extent and of higher quality.
In an embodiment, based on the image set processing method described in any one of the foregoing embodiments, the present application further provides an image segmentation method, as shown in fig. 10, including:
S801, obtaining an image to be segmented; the image to be segmented includes weld bead defects.
In the embodiment of the application, when an image including a weld bead defect is generated in the battery production and manufacturing process, the computer equipment may connect with a corresponding image acquisition device to acquire the image including the weld bead defect, and take the acquired image including the weld bead defect as the image to be segmented.
S802, inputting an image to be segmented into a segmentation network for defect segmentation to obtain a segmentation result; the segmentation network is trained based on a target image set, wherein the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level.
The segmentation network is a trained neural network and is used for carrying out defect region segmentation on the input image.
In the embodiment of the application, after acquiring the image to be segmented, the computer device may input it into a pre-trained segmentation network for defect segmentation to obtain a segmentation result. The target image set used to train the segmentation network includes the initial images in the initial image set and the fused images obtained by pixel-level fusion of every two initial images in the initial image set; these may be obtained by the image set processing method described in any one of the embodiments of fig. 4 to 10, which is not repeated here.
According to the image segmentation method, the segmentation network is trained on the augmented target image set, so the network is well trained and the accuracy of its image segmentation can be improved to a certain extent.
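The inference step S802 can be sketched as a thin wrapper around the trained network: the network maps an image to a per-pixel defect-probability map, which is thresholded into a binary defect mask. The name `segmentation_net` and the probability threshold are stand-ins for illustration; the patent does not prescribe a particular network architecture or output format.

```python
import numpy as np

def segment_defects(image, segmentation_net, prob_thresh=0.5):
    """Run a trained segmentation network on one image.

    `segmentation_net` is a placeholder for the trained model described
    above: any callable mapping an HxW grayscale image to an HxW map of
    per-pixel defect probabilities in [0, 1] will do. Returns a binary
    mask where 1 marks weld bead defect pixels.
    """
    probs = segmentation_net(image)
    return (probs >= prob_thresh).astype(np.uint8)
```

For example, with a dummy network that rescales gray values to [0, 1], bright pixels would be flagged as defects while dark background pixels would not.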
In an embodiment, based on the image set processing method described in any one of the foregoing embodiments, the present application further provides a training method of an image segmentation model, as shown in fig. 11, where the training method includes:
S901, acquiring a target image set; the target image set comprises initial images in the initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level.
In the embodiment of the present application, the target image set may be acquired based on the image set processing method described in any one of the embodiments of fig. 2 to 8, which is not repeated here.
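The composition of the target image set referred to above (every initial image, plus one fused image per unordered pair of initial images) can be sketched as follows; `fuse` stands in for the pixel-level fusion step and is an assumed callable, not part of the patent.

```python
from itertools import combinations

def build_target_set(initial_images, fuse):
    """Augment an initial image set for training.

    The target set contains every initial image plus one fused image per
    unordered pair, so n initial images yield n + n*(n-1)/2 images.
    `fuse` is any callable combining two images into one fused image.
    """
    fused = [fuse(a, b) for a, b in combinations(initial_images, 2)]
    return list(initial_images) + fused
```

This quadratic growth is what makes the augmentation effective for scarce defect samples: even a handful of defect images yields a substantially larger training set.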
S902, training an initial segmentation model according to a target image set to obtain an image segmentation model.
In this embodiment of the present application, after obtaining the target image set, the computer device may train the constructed initial segmentation model on the target image set to obtain an image segmentation model. Defect regions in defect images can then be segmented with the trained model, i.e., the method described in the embodiment of fig. 9 can be applied.
The training method provided by the embodiment of the application trains on the augmented target image set, so the training effect can be improved to a certain extent, which in turn improves the segmentation accuracy of the network.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown in sequence as indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts may comprise several sub-steps or stages, which are not necessarily executed at the same moment but may be executed at different moments; their order of execution is not necessarily sequential, and they may be executed in turns or alternately with at least some of the other steps or stages.
Based on the same inventive concept, the embodiment of the application further provides an image set processing apparatus for implementing the image set processing method described above. The implementation of the solution provided by the apparatus is similar to that described in the method above, so the specific limitations in the embodiments of the image set processing apparatus below may refer to the limitations on the image set processing method above and are not repeated here.
In one embodiment, as shown in fig. 12, there is provided an image set processing apparatus including:
an acquisition module 10 for acquiring an initial image set; the initial image set includes a plurality of weld bead defect images;
the fusion module 11 is used for carrying out pixel-level fusion on each two initial images in the initial image set to obtain a target image set; the target image set comprises fused images and images in the initial image set after fusion, and is used for training an image segmentation model.
The modules in the image set processing apparatus described above may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor in the computer device in hardware form, or stored in software form in a memory of the computer device, so that the processor can call and execute the operations corresponding to each module.
In one embodiment, as shown in fig. 13, there is provided an image set processing apparatus including:
an acquisition module 20, configured to acquire an image to be segmented; the image to be segmented comprises weld bead defects.
The segmentation module 21 is configured to input the image to be segmented into a segmentation network for performing defect segmentation, so as to obtain a segmentation result; the segmentation network is trained based on a target image set, wherein the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level.
In one embodiment, as shown in fig. 14, there is provided a training apparatus of an image segmentation model, including:
an acquisition module 30 for acquiring a target image set; the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level.
The training module 31 is configured to train the initial segmentation model according to the target image set, so as to obtain an image segmentation model.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program, and the processor, when executing the computer program, implementing the steps of:
acquiring an initial image set; the initial image set includes a plurality of weld bead defect images;
carrying out pixel-level fusion on every two initial images in the initial image set to obtain a target image set; the target image set comprises fused images and images in the initial image set after fusion, and is used for training an image segmentation model.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program, and the processor, when executing the computer program, implementing the steps of:
acquiring an image to be segmented; the image to be segmented comprises weld bead defects;
inputting the image to be segmented into a segmentation network for defect segmentation to obtain a segmentation result; the segmentation network is trained based on a target image set, wherein the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program, and the processor, when executing the computer program, implementing the steps of:
acquiring a target image set; the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level;
and training the initial segmentation model according to the target image set to obtain an image segmentation model.
The computer device provided in the foregoing embodiments has similar implementation principles and technical effects to those of the foregoing method embodiments, and will not be described herein in detail.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring an initial image set; the initial image set includes a plurality of weld bead defect images;
carrying out pixel-level fusion on every two initial images in the initial image set to obtain a target image set; the target image set comprises fused images and images in the initial image set after fusion, and is used for training an image segmentation model.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring an image to be segmented; the image to be segmented comprises weld bead defects;
inputting the image to be segmented into a segmentation network for defect segmentation to obtain a segmentation result; the segmentation network is trained based on a target image set, wherein the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring a target image set; the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level;
and training the initial segmentation model according to the target image set to obtain an image segmentation model.
The foregoing embodiment provides a computer readable storage medium, which has similar principles and technical effects to those of the foregoing method embodiment, and will not be described herein.
In one embodiment, a computer program product is provided comprising a computer program which, when executed by a processor, performs the steps of:
acquiring an initial image set; the initial image set includes a plurality of weld bead defect images;
carrying out pixel-level fusion on every two initial images in the initial image set to obtain a target image set; the target image set comprises fused images and images in the initial image set after fusion, and is used for training an image segmentation model.
In one embodiment, a computer program product is provided comprising a computer program which, when executed by a processor, performs the steps of:
acquiring an image to be segmented; the image to be segmented comprises weld bead defects;
inputting the image to be segmented into a segmentation network for defect segmentation to obtain a segmentation result; the segmentation network is trained based on a target image set, wherein the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level.
In one embodiment, a computer program product is provided comprising a computer program which, when executed by a processor, performs the steps of:
acquiring a target image set; the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level;
and training the initial segmentation model according to the target image set to obtain an image segmentation model.
The foregoing embodiment provides a computer program product, which has similar principles and technical effects to those of the foregoing method embodiment, and will not be described herein.
Those skilled in the art will appreciate that all or part of the above-described methods may be implemented by a computer program stored on a non-transitory computer-readable storage medium, which, when executed, may comprise the steps of the method embodiments described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory may include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of relational and non-relational databases. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum-computing-based data processing logic units, and the like, without being limited thereto.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above embodiments merely express several implementations of the present application; although their description is relatively specific and detailed, they are not to be construed as limiting the scope of the present application. It should be noted that those of ordinary skill in the art may make various modifications and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (15)

1. A method of image set processing, the method comprising:
acquiring an initial image set; the initial image set includes a plurality of weld bead defect images;
carrying out pixel-level fusion on every two initial images in the initial image set to obtain a target image set; the target image set comprises fused images and images in the initial image set after fusion, and is used for training an image segmentation model.
2. The method according to claim 1, wherein the performing pixel-level fusion on each two initial images in the initial image set to obtain a target image set includes:
dividing the initial image set into a plurality of candidate image sets according to the working condition types of each initial image in the initial image set;
and carrying out pixel-level fusion on every two initial images in each candidate image set to obtain the target image set.
3. The method of claim 2, wherein pixel-wise fusing each two initial images in each of the candidate image sets to obtain the fused image comprises:
determining the distance between each first pixel point on the first initial image and each corresponding second pixel point on the second initial image according to the first initial image and the second initial image in any candidate image set;
and according to the distance between the first pixel point and the second pixel point and a preset gray threshold, carrying out pixel-level fusion on the first initial image and the second initial image to obtain fusion images corresponding to the first initial image and the second initial image.
4. The method according to claim 3, wherein the performing pixel-level fusion on the first initial image and the second initial image according to the distance between the first pixel point and the second pixel point and the preset gray threshold value to obtain a fused image corresponding to the first initial image and the second initial image includes:
and under the condition that the distance is not greater than the preset gray threshold value, assigning the gray value of the first pixel point or the gray value of the second pixel point to the corresponding pixel point on the fusion image to obtain the fusion image.
5. The method according to claim 3, wherein the performing pixel-level fusion on the first initial image and the second initial image according to the distance between the first pixel point and the second pixel point and the preset gray threshold value to obtain a fused image corresponding to the first initial image and the second initial image includes:
under the condition that the distance is larger than the preset gray threshold value, determining a comparison result between the distance and a target background pixel point, and determining a pixel point corresponding to the distance on the fusion image according to the comparison result to obtain the fusion image; the target background pixel point is determined according to the background pixel point of the first initial image or the second initial image.
6. The method of claim 5, wherein determining the pixel point on the fused image corresponding to the distance according to the comparison result, to obtain the fused image, comprises:
if the comparison result shows that the distance is larger than the gray value of the target background pixel point, determining the pixel point with the large gray value in the first pixel point and the second pixel point as the corresponding pixel point on the fusion image to obtain the fusion image;
and if the comparison result shows that the distance is not larger than the gray value of the target background pixel point, determining the pixel point with the small gray value in the first pixel point and the second pixel point as the corresponding pixel point on the fusion image, and obtaining the fusion image.
7. The method of any of claims 1-6, wherein the set of target images further comprises: a plurality of initial tag images and a plurality of fused tag images, the method further comprising:
labeling each initial image in the initial image set to obtain an initial label image corresponding to each initial image;
labeling each fusion image in the target image set according to each initial label image to obtain the fusion label image.
8. The method according to claim 7, wherein labeling each fused image in the target image set according to each initial label image to obtain the fused label image includes:
determining a first initial image and a second initial image corresponding to any one of the fusion images in the target image set;
determining a first initial label image corresponding to the first initial image and a second initial label image corresponding to the second initial image from the plurality of initial label images;
determining a source label image of each pixel point on the fusion image from the first initial label image or the second initial label image according to the gray value source of each pixel point on the fusion image;
and labeling each pixel point of the fusion image according to the source label image of each pixel point on the fusion image to obtain the fusion label image.
9. An image segmentation method, characterized in that the segmentation method comprises:
acquiring an image to be segmented; the image to be segmented comprises weld bead defects;
inputting the image to be segmented into a segmentation network for defect segmentation to obtain a segmentation result; the segmentation network is trained based on a target image set, wherein the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level.
10. A method of training an image segmentation model, the method comprising:
acquiring a target image set; the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level;
and training the initial segmentation model according to the target image set to obtain an image segmentation model.
11. An image set processing apparatus, characterized in that the image set processing apparatus comprises:
the acquisition module is used for acquiring an initial image set; the initial image set includes a plurality of weld bead defect images;
the fusion module is used for carrying out pixel-level fusion on each two initial images in the initial image set to obtain a target image set; the target image set comprises fused images and images in the initial image set after fusion, and is used for training an image segmentation model.
12. An image segmentation apparatus, characterized in that the image segmentation apparatus comprises:
the acquisition module is used for acquiring the image to be segmented; the image to be segmented comprises weld bead defects;
the segmentation module is used for inputting the image to be segmented into a segmentation network for defect segmentation to obtain a segmentation result; the segmentation network is trained based on a target image set, wherein the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level.
13. A training device for an image segmentation model, the training device comprising:
the acquisition module is used for acquiring a target image set; the target image set comprises initial images in an initial image set and fusion images obtained by fusing every two initial images in the initial image set at a pixel level;
and the training module is used for training the initial segmentation model according to the target image set to obtain an image segmentation model.
14. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 10 when the computer program is executed.
15. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 10.
CN202310807046.3A 2023-07-04 2023-07-04 Image set processing method, image segmentation device, image set processing apparatus, image segmentation device, and storage medium Active CN116543267B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310807046.3A CN116543267B (en) 2023-07-04 2023-07-04 Image set processing method, image segmentation device, image set processing apparatus, image segmentation device, and storage medium

Publications (2)

Publication Number Publication Date
CN116543267A true CN116543267A (en) 2023-08-04
CN116543267B CN116543267B (en) 2023-10-13

Family

ID=87449134

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310807046.3A Active CN116543267B (en) 2023-07-04 2023-07-04 Image set processing method, image segmentation device, image set processing apparatus, image segmentation device, and storage medium

Country Status (1)

Country Link
CN (1) CN116543267B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110363706A (en) * 2019-06-26 2019-10-22 杭州电子科技大学 A kind of large area bridge floor image split-joint method
CN112232349A (en) * 2020-09-23 2021-01-15 成都佳华物链云科技有限公司 Model training method, image segmentation method and device
CN113077471A (en) * 2021-03-26 2021-07-06 南京邮电大学 Medical image segmentation method based on U-shaped network
US20210272299A1 (en) * 2020-02-28 2021-09-02 Beijing Xiaomi Pinecone Electronics Co., Ltd. Method and apparatus for obtaining sample image set
CN114663320A (en) * 2020-12-22 2022-06-24 阿里巴巴集团控股有限公司 Image processing method, data set expansion method, storage medium, and electronic device
CN115829912A (en) * 2022-07-29 2023-03-21 宁德时代新能源科技股份有限公司 Method and device for detecting surface defects of battery cell


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHUTAO LI et al.: "Pixel-level image fusion: A survey of the state of the art", INFORMATION FUSION, vol. 33, pages 100-112, XP029596965, DOI: 10.1016/j.inffus.2016.05.004 *

Also Published As

Publication number Publication date
CN116543267B (en) 2023-10-13

Similar Documents

Publication Publication Date Title
US20200394416A1 (en) Method and apparatus for training feature extraction model, computer device, and computer-readable storage medium
JP2022532460A (en) Model training methods, equipment, terminals and programs
CN113112509B (en) Image segmentation model training method, device, computer equipment and storage medium
CN114049631A (en) Data labeling method and device, computer equipment and storage medium
CN116542980B (en) Defect detection method, defect detection apparatus, defect detection program, storage medium, and defect detection program
CN111914890B (en) Image block matching method between images, image registration method and product
CN116630630B (en) Semantic segmentation method, semantic segmentation device, computer equipment and computer readable storage medium
CN116543267B (en) Image set processing method, image segmentation device, image set processing apparatus, image segmentation device, and storage medium
CN111898619A (en) Picture feature extraction method and device, computer equipment and readable storage medium
CN114238541A (en) Sensitive target information acquisition method and device and computer equipment
CN114756634A (en) Method and device for discovering interest point change, electronic equipment and storage medium
CN116612474B (en) Object detection method, device, computer equipment and computer readable storage medium
CN116452702B (en) Information chart rapid design method, device, computer equipment and storage medium
CN116630629B (en) Domain adaptation-based semantic segmentation method, device, equipment and storage medium
CN116523803B (en) Image processing method, shadow removing device, apparatus, and storage medium
CN115965856B (en) Image detection model construction method, device, computer equipment and storage medium
CN117437425B (en) Semantic segmentation method, semantic segmentation device, computer equipment and computer readable storage medium
CN117975473A (en) Bill text detection model training and detection method, device, equipment and medium
CN116503694B (en) Model training method, image segmentation device and computer equipment
CN113743448B (en) Model training data acquisition method, model training method and device
CN116895000A (en) Training method and device for image recognition model, computer equipment and storage medium
US20240029262A1 (en) System and method for storage management of images
CN115063481A (en) Method and device for positioning target object in image and computer equipment
CN117575995A (en) Device defect detection method, device, computer equipment and storage medium
CN117392477A (en) Training of object detection model, apparatus, computer device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant