CN112508128B - Training sample construction method, counting device, electronic equipment and medium - Google Patents


Info

Publication number
CN112508128B
CN112508128B (application CN202011532393A)
Authority
CN
China
Prior art keywords
image
segmentation
segmentation result
instance
original image
Prior art date
Legal status
Active
Application number
CN202011532393.2A
Other languages
Chinese (zh)
Other versions
CN112508128A (en)
Inventor
陈路燕
聂磊
邹建法
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority claimed from CN202011532393.2A
Publication of CN112508128A
Application granted
Publication of CN112508128B
Legal status: Active

Classifications

    • G06F 18/214 Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N 3/045 Neural networks; combinations of networks
    • G06N 3/08 Neural networks; learning methods
    • G06T 5/30 Image enhancement or restoration; erosion or dilatation, e.g. thinning
    • G06T 7/11 Image analysis; region-based segmentation
    • G06T 7/13 Image analysis; edge detection
    • G06T 7/187 Image analysis; segmentation involving region growing, region merging or connected component labelling
    • G06T 2207/10004 Still image; photographic image
    • G06T 2207/20081 Training; learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a training sample construction method, a counting method, a counting device, an electronic device and a medium, and relates to the technical field of image processing. The specific implementation scheme is as follows: an original image to be annotated is acquired; instance segmentation is performed on the original image using an image recognition technique to obtain a first instance segmentation result corresponding to the original image; the first instance segmentation result is transmitted to an adhesion segmentation platform, and a second instance segmentation result fed back by the adhesion segmentation platform is acquired, the second instance segmentation result comprising segmentation results for the adhesion instances in the first instance segmentation result; the original image is annotated according to the second instance segmentation result, and the annotated image is used as a training sample for an instance segmentation model. In this way the adhesion instances in the original image can be segmented accurately, so that when parts are counted with a model trained on such samples, counting accuracy improves and manual effort is reduced.

Description

Training sample construction method, counting device, electronic equipment and medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a training sample construction method, a counting device, an electronic device, and a medium.
Background
With the rapid development of electronic technology, electronic components have become increasingly miniaturized, which makes counting and storing them difficult. At the same time, the electronics manufacturing industry controls component quantities very strictly, so effective production management of parts storage and counting is required.
At present, parts are attached to a carrier tape at equal intervals using surface-mount technology (SMT), and the tape is then wound into a reel, realizing part storage management. When counting parts, the quantity is estimated from the length of the tape and the interval between the attached parts.
However, the leading and trailing portions of the tape are usually free of attached parts, so the start and end positions of the attached parts must be determined manually, which consumes labor. In addition, parts are not always attached at exactly equal intervals, so the estimate carries errors.
Disclosure of Invention
The application provides a training sample construction method, a counting device, electronic equipment and a medium.
According to an aspect of the present application, there is provided a method for constructing a training sample, including:
Obtaining an original image to be annotated, wherein the original image comprises a plurality of image instances, the image instances comprise a plurality of adhesion instances, and the interval distance between the adhesion instances is smaller than or equal to the standard segmentation distance;
performing instance segmentation on the original image by adopting an image recognition technology to obtain a first instance segmentation result corresponding to the original image;
transmitting the first instance segmentation result to an adhesion segmentation platform, and acquiring a second instance segmentation result fed back by the adhesion segmentation platform, wherein the second instance segmentation result comprises segmentation results of at least two adhesion instances in the first instance segmentation result;
and annotating the original image according to the second instance segmentation result, and using the annotated image as a training sample for the instance segmentation model.
According to another aspect of the present application, there is provided a counting method including:
acquiring a target image, wherein the target image comprises a plurality of image instances, the image instances comprise a plurality of adhesion instances, and the interval distance between the adhesion instances is smaller than or equal to the standard segmentation distance;
inputting the target image into a pre-trained instance segmentation model, and acquiring an instance segmentation result corresponding to the target image, wherein the instance segmentation model is obtained by training with training samples generated using the training sample construction method according to any embodiment of the application; and
determining, according to the instance segmentation result, the number of image instances included in the target image.
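The final counting step can be sketched in a few lines. This is an illustration only, not the patent's implementation: the per-instance record layout, the `score` field, and the threshold are all assumptions about what a trained instance segmentation model might return.

```python
# Hedged sketch: count parts from an instance segmentation result.
# Each detection record is assumed to carry a confidence "score";
# both the field name and the threshold value are illustrative.
def count_parts(instance_results, score_threshold=0.5):
    """Return the number of detected instances whose score clears the threshold."""
    return sum(1 for inst in instance_results
               if inst.get("score", 1.0) >= score_threshold)

detections = [{"score": 0.98}, {"score": 0.91}, {"score": 0.20}]
print(count_parts(detections))  # → 2
```

Because the model labels every adhered part as its own instance, the count is simply the number of surviving detections.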
According to another aspect of the present application, there is provided a construction apparatus for training samples, including:
the original image acquisition module is used for acquiring an original image to be annotated, wherein the original image comprises a plurality of image instances, the image instances comprise a plurality of adhesion instances, and the interval distance between the adhesion instances is smaller than or equal to the standard segmentation distance;
the first instance segmentation result acquisition module is used for carrying out instance segmentation on the original image by adopting an image recognition technology to acquire a first instance segmentation result corresponding to the original image;
the second instance segmentation result acquisition module is used for transmitting the first instance segmentation result to the adhesion segmentation platform and acquiring a second instance segmentation result fed back by the adhesion segmentation platform, wherein the second instance segmentation result comprises segmentation results of at least two adhesion instances in the first instance segmentation result;
and the training sample acquisition module is used for annotating the original image according to the second instance segmentation result and using the annotated image as a training sample for the instance segmentation model.
According to another aspect of the present application, there is provided a counting device comprising:
the target image acquisition module is used for acquiring a target image, wherein the target image comprises a plurality of image instances, the image instances comprise a plurality of adhesion instances, and the interval distance between the adhesion instances is smaller than or equal to the standard segmentation distance;
the instance segmentation result acquisition module is used for inputting the target image into a pre-trained instance segmentation model and acquiring an instance segmentation result corresponding to the target image, wherein the instance segmentation model is obtained by training with training samples generated using the training sample construction method according to any embodiment of the application; and
the quantity determining module is used for determining, according to the instance segmentation result, the number of image instances included in the target image.
According to another aspect of the present application, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the embodiments of the present application.
According to another aspect of the present application, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any of the embodiments of the present application.
According to another aspect of the present application, there is provided a computer program product comprising a computer program which, when executed by a processor, implements a method as described in any of the embodiments of the present application.
The technology of the application solves the problem of generating training samples for electronic part counting and improves the quality of the constructed training samples.
It should be understood that the description of this section is not intended to identify key or critical features of the embodiments of the application or to delineate the scope of the application. Other features of the present application will become apparent from the description that follows.
Drawings
The drawings are for better understanding of the present solution and do not constitute a limitation of the present application. Wherein:
FIG. 1 is a flow chart of a method of constructing training samples according to an embodiment of the present application;
FIG. 2 is a flow chart of a method of constructing a training sample according to an embodiment of the present application;
FIG. 3 is a schematic diagram of segmentation results produced by an instance segmentation model according to an embodiment of the present application;
FIG. 4 is a flow chart of a counting method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a construction device for training samples according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a counting device according to an embodiment of the present application;
fig. 7 is a block diagram of an electronic device of a method of constructing training samples or a method of counting according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present application to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a flow chart of a method for constructing a training sample according to an embodiment of the present application. The method is applicable to constructing training samples for the instance segmentation model generated when electronic part counting is implemented with deep learning. It may be performed by a training sample construction device, which can be implemented in software and/or hardware and integrated into an electronic device such as an SMT tray feeder. Referring to fig. 1, the method includes the following steps:
step 110, obtaining an original image to be marked.
The original image comprises a plurality of image examples, the image examples comprise a plurality of adhesion examples, and the interval distance between the adhesion examples is smaller than or equal to the standard segmentation distance.
In embodiments of the present application, the original image may be an X-ray image of an SMT tray. The electronic parts in an SMT tray sit very close together, and counting them from an ordinary color image gives poor results. The application uses X-ray images to avoid the influence of color, making the counting of electronic parts more accurate. The original image may be received from a downstream manufacturer or obtained by X-ray imaging of an SMT tray.
An image instance may be an electronic part. Specifically, the original image may be segmented by a conventional image algorithm to obtain the individual electronic parts. For example, the standard segmentation distance may be used when segmenting the original image with a conventional image algorithm. The standard segmentation distance may be determined from the preset interval at which the electronic parts are attached, or by recognizing the positions of the electronic parts with an image algorithm and deriving the distance from those positions.
An adhesion instance may be an image of an electronic part that is stuck to another part on the tape. A separation between adhesion instances that is less than or equal to the standard segmentation distance indicates an adhesion region in the image instances: several adhesion instances occupy the same position on the tape or partially overlap, so the individual adhesion instances cannot be separated when the original image is segmented by a conventional image algorithm.
Step 120, performing instance segmentation on the original image using an image recognition technique to obtain a first instance segmentation result corresponding to the original image.
The first instance segmentation result is the set of image instances obtained from the original image. A conventional image algorithm could be used to obtain the image instances by segmenting the original image. However, to reduce manual effort, improve the accuracy with which image instances are acquired, and ease the subsequent processing of adhesion instances, the application performs instance segmentation on the original image using an image recognition technique. The image recognition technique may also preprocess the original image before segmenting it; for example, the preprocessing may include binarization, noise reduction, or background expansion.
Step 130, transmitting the first instance segmentation result to the adhesion segmentation platform, and obtaining a second instance segmentation result fed back by the adhesion segmentation platform.
The second instance segmentation result comprises segmentation results of at least two adhesion instances in the first instance segmentation result.
The adhesion segmentation platform may be a platform for segmenting the adhesion regions in the image instances. In the application, the adhesion segmentation platform can obtain a manually re-annotated version of the first instance segmentation result, namely the second instance segmentation result. Manual re-annotation can separate the adhesion instances: for example, when several adhesion instances exist in the first instance segmentation result, each adhesion instance can be delineated by manual re-annotation. This improves the precision of image segmentation while requiring little manual effort and exposing no one to radiation.
Step 140, annotating the original image according to the second instance segmentation result, and using the annotated image as a training sample for the instance segmentation model.
The annotation of the original image may cover both the image instances and the adhesion instances, which may be given the same or different labels. The annotated image may then be used as a training sample for the instance segmentation model. In the application, a large number of annotated images can be generated through the technical scheme described in steps 110 to 140 and used as training samples for the instance segmentation model, which improves the accuracy and reliability of training and solves the problems in the prior art that electronic part counting errors are high, labor consumption is large, and adhesion regions cannot be handled.
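The annotation step amounts to packaging the reviewed (second) instance segmentation result into one labeled record per image. The sketch below is illustrative only; the record layout and field names are assumptions, not the patent's format.

```python
# Hedged sketch of turning the second instance segmentation result into a
# training sample; the dictionary layout and field names are illustrative.
def build_training_sample(image_id, instance_masks, label="component"):
    """Attach one annotation per segmented instance to the original image id."""
    return {
        "image_id": image_id,
        "annotations": [
            {"instance_id": k, "mask": mask, "category": label}
            for k, mask in enumerate(instance_masks, start=1)
        ],
    }

sample = build_training_sample("tray_0001", [[[1, 0]], [[0, 1]]])
print(len(sample["annotations"]))  # → 2
```

A collection of such records over many X-ray images would form the training set for the instance segmentation model.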
According to the technical scheme of this embodiment, an original image to be annotated is acquired; instance segmentation is performed on the original image using an image recognition technique to obtain a first instance segmentation result corresponding to the original image; the first instance segmentation result is transmitted to an adhesion segmentation platform, and a second instance segmentation result fed back by the platform is acquired; the original image is annotated according to the second instance segmentation result, and the annotated image is used as a training sample for the instance segmentation model. This solves the problem of constructing training samples for the instance segmentation model used when counting electronic parts, improves the reliability of training sample generation and hence the accuracy of electronic part counting, saves labor, and reduces human exposure to X-rays.
Fig. 2 is a flow chart of a method for constructing a training sample according to an embodiment of the present application, which is a further refinement of the foregoing technical solution, where the technical solution in the embodiment may be combined with one or more of the foregoing embodiments.
Specifically, in an optional embodiment of the present application, performing instance segmentation on the original image using an image recognition technique to obtain a first instance segmentation result corresponding to the original image includes: binarizing the original image, and performing connected domain segmentation on the binarized image to obtain the first instance segmentation result.
In order to further improve the accuracy of image instance determination, in an optional embodiment of the present application, before performing connected domain segmentation on the binarized image, the method further includes: performing image background expansion processing on the binarized image to increase the separation between adjacent image instances; and after performing connected domain segmentation on the binarized image, the method further includes: performing edge optimization on the first instance segmentation result according to the pixel information of each pixel point in the original image.
In an optional embodiment of the present application, performing edge optimization on the first instance segmentation result according to the pixel information of each pixel point in the original image includes: adopting a fully connected conditional random field (DenseCRF) algorithm to perform edge optimization on the first instance segmentation result according to the pixel information of each pixel point in the original image.
Referring to fig. 2, the method specifically includes the steps of:
step 210, obtaining an original image to be annotated.
The original image includes a plurality of image instances; the image instances include a plurality of adhesion instances, and the separation between adhesion instances is less than or equal to the standard segmentation distance. The original image is an X-ray image of an SMT tray, and the image instances are electronic parts.
Step 220, binarizing the original image.
Optionally, the original image is an X-ray image. Although an X-ray image looks black and white to the naked eye, it actually contains many gray values, not just two. For accurate instance segmentation, the X-ray image must be binarized. Pixels whose gray value is greater than or equal to a threshold are judged to be electronic parts and set to 255, while the remaining pixels are set to 0, giving the original image a clear black-and-white appearance that makes connected domain segmentation easier. A connected domain is a region with gray value 255 and represents the location of one or more electronic parts. Binarizing the original image and segmenting its connected domains yields the first instance segmentation result, that is, the image instances in the original image.
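The thresholding just described can be sketched as follows. The image is assumed to be a 2-D list of gray values, and the threshold of 128 is an illustrative choice rather than a value fixed by the patent.

```python
# Minimal binarization sketch: gray values at or above the threshold are
# treated as electronic parts (255), everything else as background (0).
def binarize(image, threshold=128):
    return [[255 if px >= threshold else 0 for px in row] for row in image]

gray = [
    [10, 200, 210, 12],
    [9, 220, 205, 11],
]
print(binarize(gray)[0])  # → [0, 255, 255, 0]
```

In practice the threshold would be tuned to the X-ray exposure, or chosen automatically (e.g. by a histogram-based method such as Otsu's).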
Step 230, performing image background expansion processing on the binarized image to increase the separation between adjacent image instances.
Image background expansion grows the background region of the binarized image: each expansion step peels pixels off the boundaries of the foreground regions, so the background area enlarges and the separation between image instances increases. This makes the image instances easier to split apart, so the number of electronic parts in the original image can be determined accurately.
Step 240, performing connected domain segmentation on the image after background expansion to obtain a first instance segmentation result.
After background expansion, the connected domains can be labeled and segmented by a connected-domain algorithm, and the result can be converted into an image mask: specifically, pixels inside a connected domain are set to 1 and the remaining area is set to 0.
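The connected-domain labelling and mask step can be sketched with a 4-connected flood fill. This is an illustrative stand-in for whatever connected-component routine an actual implementation would use.

```python
def label_components(binary):
    """4-connected component labelling.

    Returns (label image, 0/1 mask, component count). Label 0 is background;
    each connected white (255) region receives its own positive label."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] == 255 and labels[y][x] == 0:
                count += 1
                stack = [(y, x)]
                labels[y][x] = count
                while stack:  # flood fill this component
                    cy, cx = stack.pop()
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] == 255 and labels[ny][nx] == 0):
                            labels[ny][nx] = count
                            stack.append((ny, nx))
    mask = [[1 if v else 0 for v in row] for row in labels]  # the 0/1 mask
    return labels, mask, count

img = [
    [255, 255, 0, 0],
    [0, 0, 0, 255],
]
_, _, n = label_components(img)
print(n)  # → 2
```

The 0/1 mask returned here is exactly the kind of image the later DenseCRF step takes as its unary input.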
Step 250, adopting the DenseCRF algorithm to perform edge optimization on the first instance segmentation result according to the pixel information of each pixel point in the original image.
The result obtained by labeling and segmenting the connected domains after background expansion is rough: the original edge shape of the electronic part is easily lost, and using the result directly as a training sample gives a poor training effect.
In the technical solution of this embodiment, after the connected domain segmentation, edge optimization may therefore be performed according to the pixel information of each pixel point in the original image, so that the edge shapes of the electronic parts in the resulting connected domains can be restored.
For the image to be segmented, each pixel i has a class label x_i and an observed value y_i. Taking each pixel as a node and the relations between pixels as edges forms a conditional random field. The conditional random field satisfies a Gibbs distribution; in a fully connected conditional random field, P(X = x | I) = (1/Z(I)) exp(-E(x | I)), where I is the global observation, Z(I) is the normalizing constant, and E(x) is the energy of the label assignment x:
E(x) = Σ_i ψ_u(x_i) + Σ_{i<j} ψ_p(x_i, x_j).
E(x) contains two terms. The first, Σ_i ψ_u(x_i), is the unary potential, representing the cost of assigning pixel i the class label x_i; in segmentation tasks it is usually taken from the probability map output by a preceding segmentation model. In the present application, ψ_u is taken from the 0/1 image produced by the mask processing. The second, Σ_{i<j} ψ_p(x_i, x_j), is the pairwise potential, representing the cost of simultaneously assigning pixels i and j the class labels x_i and x_j. It describes the global relations between pixels: similar pixels are encouraged to receive the same class label, and very different pixels to receive different labels. Similarity is measured by position and color, so pixels that are close together and similar in color are more likely to be assigned the same class label. Through the DenseCRF algorithm, the connected domains in the image can be split as close to the boundaries of the electronic parts as possible, the shapes of the parts are preserved, and the segmented image is more accurate as a training sample.
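As an illustration only (not the actual DenseCRF inference, which additionally relies on an efficient mean-field approximation), the Gibbs energy above can be evaluated for a toy labelling. The Gaussian kernel form follows the fully connected CRF formulation, but the bandwidths and weight below are invented for the example.

```python
import math

def crf_energy(labels, positions, intensities, unary,
               theta_pos=1.0, theta_int=10.0, weight=1.0):
    """Toy evaluation of E(x) = sum_i psi_u(x_i) + sum_{i<j} psi_p(x_i, x_j).

    The pairwise term penalises giving different labels to pixels that are
    close in position and similar in intensity (a simplified appearance kernel)."""
    energy = sum(unary[i][labels[i]] for i in range(len(labels)))
    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            if labels[i] != labels[j]:
                (yi, xi), (yj, xj) = positions[i], positions[j]
                d2 = (yi - yj) ** 2 + (xi - xj) ** 2      # spatial distance
                c2 = (intensities[i] - intensities[j]) ** 2  # appearance distance
                energy += weight * math.exp(-d2 / (2 * theta_pos ** 2)
                                            - c2 / (2 * theta_int ** 2))
    return energy

# Two adjacent, equally bright pixels: splitting their labels costs energy,
# so the CRF prefers to keep them in the same class.
pos, inten, un = [(0, 0), (0, 1)], [200, 200], [[0.0, 0.0], [0.0, 0.0]]
print(crf_energy([0, 0], pos, inten, un) < crf_energy([0, 1], pos, inten, un))  # → True
```

This captures why the CRF snaps segment boundaries to part edges: cutting through a region of similar nearby pixels raises the energy, while cutting along an intensity edge is nearly free.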
Step 260, transmitting the edge-optimized instance segmentation result to the adhesion segmentation platform, and obtaining a second instance segmentation result fed back by the adhesion segmentation platform.
The second instance segmentation result comprises segmentation results of at least two adhesion instances in the edge-optimized instance segmentation results.
Step 270, annotating the original image according to the second instance segmentation result, and using the annotated image as a training sample for the instance segmentation model.
Step 280, training the set machine learning model on a plurality of training samples to obtain the instance segmentation model.
Several segmentation models are available for training on the training samples. Broadly, segmentation models divide into semantic segmentation models and instance segmentation models. When a semantic segmentation model recognizes several similar individuals that lie close together in an image, it labels them only once as a whole; an instance segmentation model, by contrast, labels every individual separately, even when two similar individuals are very close.
In the application, so that the trained model can later count the electronic parts accurately, an instance segmentation model is used as the set machine learning model, and model training is performed on the constructed training samples to obtain the trained instance segmentation model.
In an alternative embodiment of the present application, the set machine learning model includes a Mask R-CNN model. It achieves finer separation of individuals: individuals of the same type can be split into different instances even when they are adhered together.
FIG. 3 is a schematic diagram of segmentation results using an instance segmentation model according to an embodiment of the present application. As shown in fig. 3, with the instance segmentation model trained in the present application, no extensive parameter tuning is required for different electronic parts, and an accurate segmentation result for each electronic part is obtained without additionally segmenting the adhesion areas.
According to the technical scheme of this embodiment: an original image to be labeled is obtained; the original image is binarized; image background expansion is applied to the binarized image to increase the separation distance between adjacent image instances; connected-domain segmentation is performed on the expanded image to obtain a first instance segmentation result; the first instance segmentation result is edge-optimized with the DenseCRF algorithm according to the pixel information of each pixel in the original image; the edge-optimized instance segmentation result is transmitted to the adhesion segmentation platform, and a second instance segmentation result fed back by the platform is obtained; the original image is labeled according to the second instance segmentation result, and the labeled image is taken as a training sample of the instance segmentation model; and a set machine learning model is trained on a plurality of such training samples to obtain the instance segmentation model. This solves the problem of training the instance segmentation model used when counting electronic parts, improves the reliability of training-sample generation and hence of the instance segmentation model's training result, improves the practicability of the model and the accuracy of electronic-part counting, saves labor, and reduces the X-ray exposure of human operators.
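The first half of the summarized scheme — binarization, image background expansion, and connected-domain segmentation — can be sketched in plain NumPy. The threshold, the single-iteration erosion, and the toy "parts" below are illustrative assumptions, not the patent's parameters:

```python
import numpy as np
from collections import deque

def binarize(gray, thresh=128):
    # Step 1: threshold the grayscale image to a 0/1 mask (the foreground
    # is assumed darker than the background, as for parts in an X-ray).
    return (gray < thresh).astype(np.uint8)

def expand_background(mask):
    # Step 2: "image background expansion" — a one-pixel erosion of the
    # foreground, which widens the gap between adjacent instances.
    p = np.pad(mask, 1)
    return (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
            & p[1:-1, :-2] & p[1:-1, 2:])

def connected_components(mask):
    # Step 3: label 4-connected foreground regions; each label is one
    # candidate instance of the first segmentation result.
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    for si, sj in zip(*np.nonzero(mask)):
        if labels[si, sj]:
            continue
        count += 1
        labels[si, sj] = count
        queue = deque([(si, sj)])
        while queue:
            i, j = queue.popleft()
            for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if (0 <= ni < mask.shape[0] and 0 <= nj < mask.shape[1]
                        and mask[ni, nj] and not labels[ni, nj]):
                    labels[ni, nj] = count
                    queue.append((ni, nj))
    return labels, count

# Two hypothetical "parts" (dark squares) joined by a thin adhesion bridge.
gray = np.full((6, 11), 220, dtype=np.uint8)
gray[1:5, 1:5] = 30
gray[1:5, 6:10] = 30
gray[3, 5] = 30  # one-pixel bridge sticking the parts together

mask = binarize(gray)
_, n_raw = connected_components(mask)                     # bridge merges the parts
_, n_sep = connected_components(expand_background(mask))  # erosion breaks the bridge
print(n_raw, n_sep)  # prints: 1 2
```

Because erosion also shrinks each part, the scheme then restores edge shape with the DenseCRF step before using the result as a label.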
Fig. 4 is a schematic flow chart of a counting method according to an embodiment of the present application, applicable to counting electronic parts, especially electronic parts in an SMT tray. The method may be performed by a counting device, which may be implemented in software and/or hardware and integrated into an electronic device. Specifically, referring to fig. 4, the method includes the following steps:
step 310, acquiring a target image.
The target image comprises a plurality of image examples, the image examples comprise a plurality of adhesion examples, and the interval distance between the adhesion examples is smaller than or equal to the standard segmentation distance.
The target image may be a data source acquired through communication from a connected device, or an image captured of an SMT tray for electronic-part counting. Specifically, the target image may be an X-ray image of an SMT tray.
Step 320, inputting the target image into a pre-trained instance segmentation model, and obtaining an instance segmentation result corresponding to the target image.
The example segmentation model is obtained through training by using a training sample generated by the method for constructing the training sample provided by any embodiment of the application.
The instance segmentation of the target image is obtained through deep learning with the instance segmentation model provided in the present application, which is trained on the training samples pre-constructed in the present application. Obtaining the instance segmentation result of the target image in this way simplifies segmentation, since no parameter tuning is needed for different electronic parts; it reduces labor, since no manual participation is needed in each segmentation; and it improves accuracy, since no additional processing of adhesion areas is needed.
Step 330, determining a quantity value of the image instance included in the target image according to the instance segmentation result.
In the instance segmentation result obtained by applying the instance segmentation model to the target image, the electronic parts are segmented one by one, each forming an image instance. Because adhesion areas were accounted for during model training, no adhesion areas remain in the result: each image instance contains exactly one electronic part, so the number of electronic parts is determined by counting the image instances. This improves counting accuracy and requires no parameter adjustment for different electronic-part counting tasks, making it convenient and accurate.
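Counting from the instance result then reduces to counting the per-instance masks. This sketch assumes a MaskRCNN-style output of one boolean mask per detected part, with a hypothetical min_area filter to discard speckle masks (both the masks and the filter are illustrative assumptions):

```python
import numpy as np

def count_parts(masks, min_area=1):
    # Each mask is one detected instance; since adhesion was handled at
    # training time, one mask corresponds to exactly one electronic part.
    return sum(1 for m in masks if int(m.sum()) >= min_area)

# Hypothetical model output: three detected instances, one an empty dud.
masks = [
    np.array([[1, 1], [1, 0]], dtype=bool),
    np.array([[0, 0], [0, 1]], dtype=bool),
    np.zeros((2, 2), dtype=bool),
]
print(count_parts(masks))  # prints: 2
```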
According to the technical scheme of this embodiment: a target image is acquired; the target image is input into a pre-trained instance segmentation model to obtain the corresponding instance segmentation result; and the number of image instances included in the target image is determined from that result. This solves the problem of counting electronic parts, improves counting accuracy, simplifies the counting procedure, requires neither parameter tuning for different electronic parts nor processing of adhesion areas, saves labor, and reduces the X-ray exposure of human operators.
Fig. 5 is a schematic structural view of a training sample constructing apparatus according to an embodiment of the present application, which may be disposed in an electronic device such as an SMT tray. Specifically, as shown in fig. 5, the apparatus includes: an original image acquisition module 510, a first instance segmentation result acquisition module 520, a second instance segmentation result acquisition module 530, and a training sample acquisition module 540.
The original image obtaining module 510 is configured to obtain an original image to be annotated, where the original image includes a plurality of image instances, the image instances include a plurality of adhesion instances, and a separation distance between the adhesion instances is less than or equal to a standard segmentation distance;
A first instance segmentation result obtaining module 520, configured to perform instance segmentation on the original image by using an image recognition technology, to obtain a first instance segmentation result corresponding to the original image;
the second instance segmentation result obtaining module 530 is configured to transmit the first instance segmentation result to the adhesion segmentation platform, and obtain a second instance segmentation result fed back by the adhesion segmentation platform; the second instance segmentation result comprises segmentation results of at least two adhesion instances in the first instance segmentation result;
the training sample obtaining module 540 is configured to label the original image according to the second example segmentation result, and use the labeled image as a training sample of the example segmentation model.
Optionally, the first instance segmentation result acquisition module 520 includes:
the first instance segmentation result acquisition unit is used for carrying out binarization processing on the original image and carrying out connected domain segmentation on the image subjected to the binarization processing to obtain a first instance segmentation result.
Optionally, the device further includes:
the expansion processing module is used for carrying out image background expansion processing on the images after the binarization processing before carrying out connected domain segmentation on the images after the binarization processing so as to increase the interval distance between adjacent image examples;
The edge optimization module is used for carrying out connected domain segmentation on the image after the binarization processing to obtain a first instance segmentation result, and carrying out edge optimization on the first instance segmentation result according to the pixel information of each pixel point in the original image.
Optionally, the edge optimization module includes:
and the edge optimization unit is used for performing edge optimization on the first instance segmentation result according to the pixel information of each pixel point in the original image by using the fully connected conditional random field (DenseCRF) algorithm.
Optionally, the original image is an X-ray image of a surface mount technology (SMT) material tray, and the image instance is an electronic part.
Optionally, the device further includes:
the example segmentation model acquisition module is used for performing set machine learning model training according to a plurality of training samples after the marked image is used as a training sample of the example segmentation model, so as to obtain the example segmentation model.
Optionally, the instance segmentation model includes a MaskRCNN model.
The training sample constructing device provided by the embodiment of the application can execute the training sample constructing method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the executing method.
Fig. 6 is a schematic structural diagram of a counting device according to an embodiment of the present application, which may be disposed in an electronic device such as an SMT tray. Specifically, as shown in fig. 6, the apparatus includes: the target image acquisition module 610, the instance segmentation result acquisition module 620 and the magnitude determination module 630.
The target image obtaining module 610 is configured to obtain a target image, where the target image includes a plurality of image instances, the image instances include a plurality of adhesion instances, and a separation distance between the adhesion instances is less than or equal to a standard segmentation distance;
an instance segmentation result obtaining module 620, configured to input a target image into a pre-trained instance segmentation model, and obtain an instance segmentation result corresponding to the target image;
the example segmentation model is obtained by training a training sample generated by using the method for constructing the training sample provided by any embodiment of the application;
the quantity value determining module 630 is configured to determine a quantity value of an image instance included in the target image according to the instance segmentation result.
The counting device provided by the embodiment of the application can execute the counting method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
According to embodiments of the present application, there is also provided an electronic device, a readable storage medium and a computer program product.
Fig. 7 shows a schematic block diagram of an example electronic device 700 that may be used to implement embodiments of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 7, the apparatus 700 includes a computing unit 701 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the device 700 may also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Various components in device 700 are connected to I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, etc.; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, an optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 701 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The calculation unit 701 performs the respective methods and processes described above, for example, a construction method or a counting method of training samples. For example, in some embodiments, the method of constructing or counting training samples may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 700 via ROM 702 and/or communication unit 709. When the computer program is loaded into the RAM 703 and executed by the calculation unit 701, one or more steps of the above-described construction method or counting method of training samples may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the construction method or the counting method of the training samples in any other suitable way (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present application may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this application, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, a host product in the cloud computing service system that overcomes the defects of high management difficulty and weak service scalability found in traditional physical hosts and VPS (virtual private server) services.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions disclosed in the present application can be achieved, and are not limited herein.
The above embodiments do not limit the scope of the application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application are intended to be included within the scope of the present application.

Claims (12)

1. A method of constructing a training sample, comprising:
obtaining an original image to be labeled, wherein the original image comprises a plurality of image instances, the image instances comprise a plurality of adhesion instances, and the separation distance between the adhesion instances is smaller than or equal to a standard segmentation distance; the original image is an X-ray image of a surface mount technology (SMT) material tray, and the image instance is an electronic part;
Performing instance segmentation on the original image by adopting an image recognition technology to obtain a first instance segmentation result corresponding to the original image;
transmitting the first instance segmentation result to an adhesion segmentation platform, and acquiring a second instance segmentation result fed back by the adhesion segmentation platform; the second instance segmentation result comprises segmentation results of at least two adhesion instances in the first instance segmentation result; the adhesion segmentation platform obtains, as the second instance segmentation result, a result of manually re-marking the first instance segmentation result;
labeling the original image according to the second example segmentation result, and taking the labeled image as a training sample of an example segmentation model;
performing instance segmentation on the original image by adopting an image recognition technology to obtain a first instance segmentation result corresponding to the original image, wherein the method comprises the following steps:
binarizing the original image,
performing image background expansion processing on the binarized image to increase the separation distance between adjacent image instances; and performing connected-domain segmentation on the image to obtain the first instance segmentation result;
and carrying out edge optimization on the first example segmentation result according to the pixel information of each pixel point in the original image so as to carry out edge shape recovery on the electronic parts in the first example segmentation result.
2. The method of claim 1, wherein performing edge optimization on the first instance segmentation result according to pixel information of each pixel point in the original image comprises:
and performing edge optimization on the first instance segmentation result according to the pixel information of each pixel point in the original image by using a fully connected conditional random field DenseCRF algorithm.
3. The method of any of claims 1-2, further comprising, after taking the annotated image as a training sample of the example segmentation model:
and training a set machine learning model according to a plurality of training samples to obtain the example segmentation model.
4. The method as claimed in claim 3, wherein the set machine learning model comprises a MaskRCNN model.
5. A counting method, comprising:
acquiring a target image, wherein the target image comprises a plurality of image instances, the image instances comprise a plurality of adhesion instances, and the interval distance between the adhesion instances is smaller than or equal to the standard segmentation distance;
inputting the target image into a pre-trained instance segmentation model, and acquiring an instance segmentation result corresponding to the target image;
wherein the example segmentation model is trained using training samples generated by the method of constructing training samples according to any one of claims 1-4;
Based on the instance segmentation result, a quantity value of image instances included in the target image is determined.
6. A training sample constructing apparatus, comprising:
the original image acquisition module is used for acquiring an original image to be labeled, wherein the original image comprises a plurality of image instances, the image instances comprise a plurality of adhesion instances, and the separation distance between the adhesion instances is smaller than or equal to a standard segmentation distance; the original image is an X-ray image of a surface mount technology (SMT) material tray, and the image instance is an electronic part;
the first instance segmentation result acquisition module is used for carrying out instance segmentation on the original image by adopting an image recognition technology to acquire a first instance segmentation result corresponding to the original image;
the second instance segmentation result acquisition module is used for transmitting the first instance segmentation result to the adhesion segmentation platform and acquiring a second instance segmentation result fed back by the adhesion segmentation platform; the second instance segmentation result comprises segmentation results of at least two adhesion instances in the first instance segmentation result; the adhesion segmentation platform obtains, as the second instance segmentation result, a result of manually re-marking the first instance segmentation result;
The training sample acquisition module is used for marking the original image according to the second example segmentation result, and taking the marked image as a training sample of the example segmentation model;
the first instance segmentation result acquisition module comprises:
the first instance segmentation result acquisition unit is used for carrying out binarization processing on the original image and carrying out connected domain segmentation on the image subjected to the binarization processing to obtain a first instance segmentation result;
the device further comprises:
the expansion processing module is used for carrying out image background expansion processing on the image after the binarization processing before carrying out connected domain segmentation on the image after the binarization processing so as to increase the interval distance between the adjacent image examples;
and the edge optimization module is used for carrying out connected domain segmentation on the image subjected to binarization processing to obtain the first instance segmentation result, and carrying out edge optimization on the first instance segmentation result according to the pixel information of each pixel point in the original image so as to carry out edge shape recovery on the electronic parts in the first instance segmentation result.
7. The apparatus of claim 6, wherein the edge optimization module comprises:
and the edge optimization unit is used for performing edge optimization on the first instance segmentation result according to the pixel information of each pixel point in the original image by using a fully connected conditional random field DenseCRF algorithm.
8. The apparatus of any of claims 6-7, further comprising:
and the example segmentation model acquisition module is used for taking the marked image as a training sample of the example segmentation model, and then performing set machine learning model training according to a plurality of training samples to obtain the example segmentation model.
9. The apparatus of claim 8, wherein the instance segmentation model comprises a MaskRCNN model.
10. A counting device, comprising:
the target image acquisition module is used for acquiring a target image, wherein the target image comprises a plurality of image instances, the image instances comprise a plurality of adhesion instances, and the interval distance between the adhesion instances is smaller than or equal to the standard segmentation distance;
the example segmentation result acquisition module is used for inputting the target image into a pre-trained example segmentation model and acquiring an example segmentation result corresponding to the target image;
wherein the example segmentation model is trained using training samples generated by the method of constructing training samples according to any one of claims 1-4;
And the quantity value determining module is used for determining the quantity value of the image instance included in the target image according to the instance segmentation result.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-4 or the method of claim 5.
12. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-4 or the method of claim 5.
CN202011532393.2A 2020-12-22 2020-12-22 Training sample construction method, counting device, electronic equipment and medium Active CN112508128B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011532393.2A CN112508128B (en) 2020-12-22 2020-12-22 Training sample construction method, counting device, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN112508128A CN112508128A (en) 2021-03-16
CN112508128B true CN112508128B (en) 2023-07-25

Family

ID=74923338

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011532393.2A Active CN112508128B (en) 2020-12-22 2020-12-22 Training sample construction method, counting device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN112508128B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113379784B (en) * 2021-05-30 2022-03-25 南方医科大学 Counting method of SMT material tray electronic components based on X-ray projection
CN113947771B (en) * 2021-10-15 2023-06-27 北京百度网讯科技有限公司 Image recognition method, apparatus, device, storage medium, and program product
CN114170483B (en) * 2022-02-11 2022-05-20 南京甄视智能科技有限公司 Training and using method, device, medium and equipment of floater identification model

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105095957A (en) * 2014-05-12 2015-11-25 浙江理工大学 Silkworm cocoon counting method based on image segmentation
US9939272B1 (en) * 2017-01-06 2018-04-10 TCL Research America Inc. Method and system for building personalized knowledge base of semantic image segmentation via a selective random field approach
CN109242869A (en) * 2018-09-21 2019-01-18 科大讯飞股份有限公司 A kind of image instance dividing method, device, equipment and storage medium
CN109801308A (en) * 2018-12-28 2019-05-24 西安电子科技大学 The dividing method of adhesion similar round target image
CN109919159A (en) * 2019-01-22 2019-06-21 西安电子科技大学 A kind of semantic segmentation optimization method and device for edge image
CN111862119A (en) * 2020-07-21 2020-10-30 武汉科技大学 Semantic information extraction method based on Mask-RCNN
CN111986183A (en) * 2020-08-25 2020-11-24 中国科学院长春光学精密机械与物理研究所 Chromosome scattergram image automatic segmentation and identification system and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10635927B2 (en) * 2017-03-06 2020-04-28 Honda Motor Co., Ltd. Systems for performing semantic segmentation and methods thereof


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
InstanceCut: From Edges to Instances with MultiCut; A. Kirillov et al.; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); full text *
Instance segmentation method for group-housed pig images based on deep learning; Gao Yun; Guo Jiliang; Li Xuan; Lei Minggang; Lu Jun; Tong Yu; Transactions of the Chinese Society for Agricultural Machinery (Issue 04); full text *

Also Published As

Publication number Publication date
CN112508128A (en) 2021-03-16

Similar Documents

Publication Publication Date Title
CN112508128B (en) Training sample construction method, counting device, electronic equipment and medium
CN113378833B (en) Image recognition model training method, image recognition device and electronic equipment
CN114066900A (en) Image segmentation method and device, electronic equipment and storage medium
CN113674421B (en) 3D target detection method, model training method, related device and electronic equipment
CN112861885B (en) Image recognition method, device, electronic equipment and storage medium
CN112560874A (en) Training method, device, equipment and medium for image recognition model
CN107506792B (en) Semi-supervised salient object detection method
CN109285181B (en) Method and apparatus for recognizing image
CN113378832B (en) Text detection model training method, text prediction box method and device
CN114419035B (en) Product identification method, model training device and electronic equipment
CN113657483A (en) Model training method, target detection method, device, equipment and storage medium
CN116403083A (en) Image processing method and device, electronic equipment and storage medium
CN113610809B (en) Fracture detection method, fracture detection device, electronic equipment and storage medium
CN113378958A (en) Automatic labeling method, device, equipment, storage medium and computer program product
CN113537192A (en) Image detection method, image detection device, electronic equipment and storage medium
CN117333443A (en) Defect detection method and device, electronic equipment and storage medium
CN116935368A (en) Deep learning model training method, text line detection method, device and equipment
CN114677566B (en) Training method of deep learning model, object recognition method and device
CN114882313B (en) Method, device, electronic equipment and storage medium for generating image annotation information
CN113984072B (en) Vehicle positioning method, device, equipment, storage medium and automatic driving vehicle
CN115761698A (en) Target detection method, device, equipment and storage medium
CN113936158A (en) Label matching method and device
CN113947146A (en) Sample data generation method, model training method, image detection method and device
CN114187487A (en) Processing method, device, equipment and medium for large-scale point cloud data
CN114120410A (en) Method, apparatus, device, medium and product for generating label information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant