CN111753692A - Target object extraction method, product detection method, device, computer and medium - Google Patents


Info

Publication number
CN111753692A
Authority
CN
China
Prior art keywords
image, target object, target, contour, extracting
Legal status
Granted
Application number
CN202010542166.1A
Other languages
Chinese (zh)
Other versions
CN111753692B (en)
Inventor
张黎
陈彦宇
谭泽汉
马雅奇
周慧子
谭龙田
陈琛
Current Assignee
Gree Electric Appliances Inc of Zhuhai
Zhuhai Lianyun Technology Co Ltd
Original Assignee
Gree Electric Appliances Inc of Zhuhai
Zhuhai Lianyun Technology Co Ltd
Priority date
2020-06-15
Filing date
2020-06-15
Publication date
2020-10-09
Application filed by Gree Electric Appliances Inc of Zhuhai, Zhuhai Lianyun Technology Co Ltd filed Critical Gree Electric Appliances Inc of Zhuhai
Priority to CN202010542166.1A priority Critical patent/CN111753692B/en
Publication of CN111753692A publication Critical patent/CN111753692A/en
Application granted granted Critical
Publication of CN111753692B publication Critical patent/CN111753692B/en
Current legal status
Active

Classifications

    • G06V 20/00 Scenes; Scene-specific elements
    • G06F 18/24 Pattern recognition; Classification techniques
    • G06N 3/045 Neural networks; Combinations of networks
    • G06N 3/08 Neural networks; Learning methods
    • G06V 10/267 Image preprocessing; Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V 10/44 Extraction of image or video features; Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections
    • G06V 2201/07 Indexing scheme relating to image or video recognition or understanding; Target detection


Abstract

The invention provides a target object extraction method, a product detection method, an apparatus, a computer and a medium. The target object extraction method comprises: acquiring an image; inputting the image into a pre-trained training model for feature detection to obtain feature information of the image; determining a region of interest of the image based on the feature information; obtaining the coverage area of the target object in the image from the region of interest; obtaining the target contour of the target object in the image based on the coverage area; and extracting the target object from the image according to the target contour. Feature detection is performed on the image by the training model to obtain its feature information, the region of interest in the image is determined, the position of the target object is located, the contour of the target object is obtained, and the target object is extracted from the image. The target object is thus extracted from the image automatically, so that extraction of the refrigerator panel is more accurate and more efficient.

Description

Target object extraction method, product detection method, device, computer and medium
Technical Field
The invention relates to the technical field of image recognition, and in particular to a target object extraction method, a product detection method, an apparatus, a computer and a medium.
Background
In modern manufacturing, image registration is a key link in computer-vision-based product quality inspection, and the registration result directly affects the speed and effect of quality inspection. The registration result depends to a large extent on the selection of the feature region, so selecting a suitable feature region is particularly important. At present, in the quality inspection of refrigerator panels, most feature regions are selected manually or semi-automatically. Manual selection is time-consuming, labor-intensive and inefficient, and because the feature regions are chosen by hand, both the efficiency and the detection accuracy of existing inspection systems suffer.
Disclosure of Invention
In view of the above, it is necessary to provide a target object extraction method, a product detection method, an apparatus, a computer and a medium that solve the above technical problems.
An image target object extraction method, comprising:
acquiring an image;
inputting the image into a training model trained in advance for feature detection to obtain feature information of the image;
selecting a preset number of regions of interest for each piece of feature information;
performing foreground/background binary classification on the image containing the regions of interest to obtain foreground and background pixel values, classifying based on a preset pixel threshold, and obtaining the coverage area of a target object in the image from the regions of interest;
obtaining a target contour of the target object in the image based on the coverage area of the target object in the image;
and extracting the target object from the image according to the target contour in the image.
In one embodiment, before the step of acquiring the image, the method further comprises:
acquiring a training image;
annotating the contour of the target object of the training image to generate a training sample, wherein the training sample is an image file in which the contour of the target object has been annotated;
and inputting the training samples into a convolutional neural network for learning to obtain the training model containing the feature information of the target object of each training image.
In one embodiment, the step of selecting a preset number of regions of interest for each piece of feature information includes:
selecting a preset number of candidate regions of interest for each piece of feature information;
and performing foreground/background binary classification on the candidate regions of interest, and screening the regions of interest of the image out of the preset number of candidate regions of interest.
In one embodiment, the step of obtaining a target contour of the target object in the image based on the coverage area of the target object in the image comprises:
and extracting a target contour of the target object in the image by adopting a target detection method and a threshold segmentation method based on the coverage area of the target object in the image.
In one embodiment, the step of extracting the target object from the image according to the target contour in the image comprises:
performing edge processing on the extracted target contour to obtain edge position information of the target contour;
and extracting the target object from the image based on the edge position information of the target contour.
In one embodiment, the step of performing edge processing on the extracted target contour to obtain edge position information of the target contour includes:
obtaining a black-and-white mask image of the image based on the target contour;
and performing edge processing on the black-and-white mask image with a Canny detection algorithm to obtain edge position information of the target contour.
In one embodiment, the step of extracting the target object from the image based on the edge position information of the target contour includes:
acquiring the gray value of each pixel in the image by a threshold segmentation method;
comparing the gray value of each pixel in the image with a preset gray threshold, and binarizing the image according to the comparison results to obtain a binarized image;
and extracting the target object from the binarized image based on the edge position information of the target contour.
A product detection method, comprising:
extracting a target object from a product image according to the image target object extraction method of any of the above embodiments, obtaining image information of the target object, and judging whether the product meets a preset requirement according to the image information of the target object.
An image target object extraction apparatus comprising:
the image acquisition module is used for acquiring an image;
the feature information acquisition module is used for inputting the image into a pre-trained training model for feature detection to obtain feature information of the image;
the region-of-interest obtaining module is used for selecting a preset number of regions of interest for each piece of feature information;
the target position information obtaining module is used for performing foreground/background binary classification on the image containing the regions of interest to obtain foreground and background pixel values, classifying based on a preset pixel threshold, and obtaining the coverage area of a target object in the image from the regions of interest;
a target contour obtaining module, configured to obtain a target contour of the target object in the image based on a coverage area of the target object in the image;
and the target object extraction module is used for extracting the target object from the image according to the target contour in the image.
In the method, feature detection is performed on the image by a pre-trained training model based on semantic segmentation to obtain the feature information of the image; the region of interest in the image is then determined, the position of the target object in the image is located, the contour of the target object is obtained from that position, and the target object is extracted from the image according to the contour. The target object is thus extracted from the image automatically, without manual selection, so that extraction of the refrigerator panel is more accurate, faster and more convenient, effectively saving time and labor and improving efficiency.
Drawings
FIG. 1A is a schematic flow chart diagram illustrating a method for extracting an image target object, according to an embodiment;
FIG. 1B is a schematic flowchart of a method for extracting an image target object according to another embodiment;
FIG. 1C is a schematic flowchart of a method for extracting an image target object according to yet another embodiment;
FIG. 1D is a flowchart illustrating a method for extracting an image target object according to still another embodiment;
FIG. 2 is a block diagram showing the structure of an image target object extracting apparatus according to an embodiment;
FIG. 3 is a diagram of the internal structure of a computer device in one embodiment;
FIG. 4 is a flow diagram that illustrates the training process for training a model, under an embodiment;
FIG. 5 is a flow diagram that illustrates the image target object extraction process, in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It should be understood that in the present application the image target object extraction method may be applied to extracting the panel of a refrigerator, to extracting panels of other manufactured products, or to other scenarios in which an object needs to be extracted from an image; these are not repeated here. In the following embodiments, the method is described as applied to extracting a refrigerator panel. It should also be understood that the extracted target object refers to the image or figure of the target object within the image.
In one embodiment, as shown in fig. 1A, an image target object extraction method is provided, comprising the following steps.
step 110, an image is acquired.
Specifically, the image contains the target object to be extracted. For example, the image is an image of a refrigerator that includes a refrigerator panel; the target object is then the refrigerator panel, i.e. the image of the panel of the refrigerator, which may be a front panel, a rear panel or a side panel.
Step 120, inputting the image into a pre-trained training model for feature detection to obtain feature information of the image.
Specifically, the training model is trained in advance and can perform feature detection on the image. In this embodiment, the training model performs feature detection using a branch network of the instance-segmentation deep learning Mask R-CNN framework, and the training model contains a pre-trained set of feature information of the target object. Feature detection is based on semantic segmentation, so the feature information of an input image can be detected by feeding the image into the training model.
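For illustration only (this code is not part of the original disclosure), a minimal sketch of Mask R-CNN feature detection might look as follows, using torchvision's off-the-shelf implementation as a stand-in for the patent's own trained model; the weights choice, image file name and confidence threshold are assumptions, and torchvision >= 0.13 is assumed for the weights API.

```python
import torch
import torchvision
from PIL import Image
from torchvision.transforms.functional import to_tensor

# Off-the-shelf Mask R-CNN; the patent's model would be trained on annotated panels
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = to_tensor(Image.open("refrigerator.jpg").convert("RGB"))  # 3xHxW in [0, 1]
with torch.no_grad():
    pred = model([image])[0]  # dict with "boxes", "labels", "scores", "masks"

keep = pred["scores"] > 0.5        # assumed confidence threshold
panel_masks = pred["masks"][keep]  # soft instance masks, shape (N, 1, H, W)
```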
In one embodiment, step 120 includes preprocessing the image to obtain a preprocessed image; inputting the preprocessed image into a pre-trained training model for learning, and obtaining the characteristic information of the image.
In this embodiment, image preprocessing removes irrelevant information from the image while retaining the useful real information, denoising the image, enhancing the detectability of the relevant information and simplifying the data. It should be understood that processing a color image requires handling three channels in turn, which incurs a large time overhead. With the graying performed during preprocessing, each pixel of the grayscale image needs only one byte to store its gray value (in the range 0-255), which reduces the amount of pixel data and helps improve the overall processing speed.
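A minimal sketch of this preprocessing, assuming OpenCV; the file name and blur kernel size are illustrative placeholders, not values from the patent.

```python
import cv2

image = cv2.imread("refrigerator.jpg")           # BGR color image, 3 bytes per pixel
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # one byte per pixel, gray range 0-255
denoised = cv2.GaussianBlur(gray, (5, 5), 0)     # light denoising before detection
```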
Step 130, selecting a preset number of regions of interest for each piece of feature information.
In this step, at least one region of interest is selected for each piece of feature information in the image. A region of interest is an image region selected from the image. The selection can be made according to the number of pieces of feature information, according to the position of the feature information, or manually. It should be understood that selecting regions of interest for feature information can use existing techniques, which are not repeated in this embodiment.
Step 140, performing foreground/background binary classification on the image containing the regions of interest to obtain foreground and background pixel values, classifying based on a preset pixel threshold, and obtaining the coverage area of the target object in the image from the regions of interest.
Specifically, after the binary foreground/background classification of the image, the pixel values of the foreground and the background are obtained, and a preset pixel threshold is acquired. Because the foreground and the background have been binarized, their pixels have been converted to gray values; pixels whose gray value exceeds the preset pixel threshold are assigned to the target object, and pixels below it are assigned to the background. The coverage area of the target object is thereby determined and selected from the regions of interest. In this step the regions of interest are classified and the position region of the target object in the image is located, i.e. the position information of the target object in the image is obtained: according to the difference between the foreground and background pixel values and the preset pixel threshold, the regions of interest are classified to find the region of interest corresponding to the target object, which locates the position region of the target object and yields its coverage area in the image.
Step 150, obtaining a target contour of the target object in the image based on the coverage area of the target object in the image.
Specifically, the target contour is the contour of the target object in the image. In this embodiment, after feature detection by semantic segmentation, the contour of the target object is extracted from the image using a target detection method and a threshold segmentation method, giving the target contour of the target object.
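A minimal sketch of the threshold-plus-contour part of this step, assuming OpenCV >= 4 (two-value findContours return); the coverage-area coordinates, threshold value and largest-blob heuristic are illustrative assumptions, and the detection stage that produced the coverage area is omitted.

```python
import cv2

gray = cv2.imread("refrigerator.jpg", cv2.IMREAD_GRAYSCALE)
x0, y0, x1, y1 = 100, 50, 500, 400                 # hypothetical coverage area from step 140
roi = gray[y0:y1, x0:x1]
_, binary = cv2.threshold(roi, 127, 255, cv2.THRESH_BINARY)  # assumed gray threshold
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
target_contour = max(contours, key=cv2.contourArea)  # take the largest blob as the panel
```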
Step 160, extracting the target object from the image according to the target contour in the image.
Specifically, after the target contour of the target object in the image is determined, the target object within the target contour is extracted from the image according to the target contour and a threshold segmentation method, so the target object in the image is extracted automatically. It should be understood that the extracted target object is the image of the target object; extracting the target object yields its image information. In this way the image of the refrigerator panel is extracted automatically from the image of the refrigerator without manual selection, so panel extraction is more accurate, faster and more convenient, effectively saving time and labor and improving efficiency.
In the above embodiment, the pre-trained training model performs feature detection on the image based on semantic segmentation to obtain the feature information of the image; the region of interest in the image is then determined, the position of the target object is located, the contour of the target object is obtained from that position, and the target object is extracted according to the contour. Automatic extraction of the target object is thus achieved without manual selection, making extraction of the refrigerator panel more accurate, faster and more convenient, effectively saving time and labor and improving efficiency.
In one embodiment, as shown in fig. 1B, step 110 further comprises:
step 101, acquiring a training image.
Step 102, annotating the contour of the target object of the training image to generate a training sample, wherein the training sample is an image file in which the contour of the target object has been annotated.
Step 103, inputting the training samples into a convolutional neural network for learning, so as to obtain the training model containing the feature information of the target object of each training image.
Specifically, the training image is an image of the same type as the image in the above embodiment and contains a target object. For example, the training images are images of refrigerators of different models, the image in the above embodiment is also an image of a refrigerator, and each refrigerator image contains a panel. Before a training image is fed into the convolutional neural network for learning, it is annotated to generate an image file marking the contour of the target object; this image file is a training sample. The training samples are fed into the convolutional neural network for learning to obtain the training model, which contains the set of feature information of the target objects of the training images; training on a large number of training images therefore yields sets of feature information for different refrigerator panels.
In one embodiment, images of various types of refrigerator panels are collected, and the contour of the target object in each training image is annotated in detail with the LabelMe tool, which can be used to create customized annotation tasks or perform image annotation. After the contour annotation of the target object is completed, a JSON (JavaScript Object Notation) file is generated; this JSON file is the training sample. Each JSON file is then converted into a dataset comprising img.png, info.yaml, label.png, label_names.txt and label_viz.png, and a mask dataset is generated. The convolutional neural network reads the mask dataset and performs model training based on the Mask R-CNN algorithm; after training, the feature information set of the refrigerator panel is generated. This set, which may also be called a feature dataset, contains the feature information of a number of refrigerator panels.
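A minimal sketch of this conversion step, assuming the labelme package is installed; labelme_json_to_dataset is its bundled conversion tool (deprecated in recent releases but matching the files named above), and the directory names are hypothetical.

```python
import subprocess
from pathlib import Path

json_dir = Path("annotations")   # hypothetical folder of LabelMe JSON files
out_root = Path("dataset")
out_root.mkdir(exist_ok=True)

for json_file in sorted(json_dir.glob("*.json")):
    out_dir = out_root / json_file.stem
    # Each call writes img.png, label.png, label_viz.png, label_names.txt, info.yaml
    subprocess.run(
        ["labelme_json_to_dataset", str(json_file), "-o", str(out_dir)],
        check=True,
    )
```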
In one embodiment, as shown in FIG. 1C, step 130 comprises:
step 131, selecting a preset number of candidate interested regions for each feature information.
Step 132, performing foreground and background binary classification on the candidate interesting regions, and screening out interesting regions of the image from a preset number of candidate interesting regions.
In this embodiment, a preset number of candidate regions of interest are set for each feature information. Specifically, a preset number Of candidate Regions Of Interest (ROI) are set for each point Of the obtained set Of the plurality Of feature information, respectively, the preset number being set in advance. And sending the candidate interesting regions into a regional network for foreground and background binary classification, filtering out a part of candidate interesting regions, and screening out a proper interesting region. It should be understood that the area network is a candidate for performing area framing on an input image, and uses information of edges, textures, colors, color changes and the like of the image to select an area which may contain an object in the image. In this embodiment, in the process of sending the candidate interesting regions into the regional network to perform binary classification of foreground and background, the candidate interesting regions are classified according to the difference between the pixel values of the foreground and background of the image and the preset pixel threshold, so as to screen out the interesting regions.
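For illustration only, the following minimal NumPy sketch mimics the pixel-threshold screening described above; the mean-gray heuristic, the threshold value and the ROI tuples are simplifying assumptions, since a real Mask R-CNN region network scores proposals with a learned classifier rather than a fixed threshold.

```python
import numpy as np

def screen_rois(gray: np.ndarray, candidates, pixel_threshold: int = 127):
    """Keep the candidate ROIs whose mean gray value exceeds the preset threshold."""
    kept = []
    for x0, y0, x1, y1 in candidates:
        patch = gray[y0:y1, x0:x1]
        if patch.size > 0 and patch.mean() > pixel_threshold:  # brighter than background
            kept.append((x0, y0, x1, y1))
    return kept

# Usage with a synthetic image: a bright square (foreground) on a dark field
img = np.zeros((200, 200), dtype=np.uint8)
img[60:140, 60:140] = 220
print(screen_rois(img, [(60, 60, 140, 140), (0, 0, 40, 40)]))  # keeps only the bright ROI
```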
In one embodiment, the step of obtaining a target contour of the target object in the image based on the coverage area of the target object in the image comprises: and extracting a target contour of the target object in the image by adopting a target detection method and a threshold segmentation method based on the coverage area of the target object in the image.
Specifically, in this embodiment the position information of the target object is detected by semantic segmentation as in the foregoing embodiment, and based on that position information the contour of the target object is extracted from the complex background using a target detection method and a threshold segmentation method, giving the target contour. It is worth mentioning that the target detection stage may adopt algorithms such as Faster R-CNN, YOLO or SSD. With the target detection method and the threshold segmentation method, the target contour can be extracted efficiently and accurately.
In one embodiment, as shown in FIG. 1D, step 160 comprises:
step 161, performing edge processing on the extracted target contour to obtain edge position information of the target contour.
Step 162, extracting the target object from the image based on the edge position information of the target contour.
In this embodiment, an optimal edge of the target contour is found through edge processing, so that accurate position information of the contour's edge is obtained. Based on this edge position information, the position and extent of the target contour can be located accurately, improving the precision of target object extraction from the image.
In one embodiment, the step of performing edge processing on the extracted target contour to obtain edge position information of the target contour includes: obtaining a black-and-white mask image of the image based on the target contour; and performing edge processing on the black-and-white mask image with the Canny detection algorithm to obtain the edge position information of the target contour.
In this embodiment, after the target contour is extracted from the image, a mask image containing only black and white is generated, and the Canny edge detection algorithm is applied to the mask image to find an optimal edge, so that the edge of the target contour is more accurate and the accuracy of target object extraction is improved.
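A minimal sketch of this edge step, assuming OpenCV; the mask file name and the Canny hysteresis thresholds (100, 200) are illustrative assumptions.

```python
import cv2
import numpy as np

mask = cv2.imread("panel_mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical mask file
edges = cv2.Canny(mask, 100, 200)          # assumed low/high hysteresis thresholds
edge_positions = np.argwhere(edges > 0)    # (row, col) coordinates of each edge pixel
```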
In one embodiment, the step of extracting the target object from the image based on the edge position information of the target contour includes: acquiring the gray value of each pixel in the image by a threshold segmentation method; comparing the gray value of each pixel in the image with a preset gray threshold, and binarizing the image according to the comparison results to obtain a binarized image; and extracting the target object from the binarized image based on the edge position information of the target contour.
Specifically, a threshold segmentation method is used to determine the gray value of each pixel in the image within the gray range; the obtained gray value of each pixel is compared with the gray threshold obtained in the previous step, a suitable threshold is selected for segmentation, binarization is applied, and finally the target object is extracted. The suitable threshold can be set according to the pixel gray levels of the target object and the background, and the preset gray threshold is obtained by setting this threshold. This process reduces the influence of ambient illumination on the features of the image.
It should be understood that the threshold segmentation method selects a suitable pixel value as the preset gray threshold, which can be set manually from the pixel values of the analyzed image. Using the preset gray threshold as the boundary, the image is processed into a high-contrast, easily recognizable image: if a pixel's gray value is greater than the preset gray threshold the output is white, and if it is less than or equal to the threshold the output is black, giving a black-and-white image in which white is the target object and black is the background. The target object can therefore be separated from the image and extracted.
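A minimal sketch of this final extraction, assuming OpenCV; the image path and the gray threshold of 127 are illustrative, and the largest external contour stands in for the target contour obtained in the preceding steps.

```python
import cv2
import numpy as np

gray = cv2.imread("refrigerator.jpg", cv2.IMREAD_GRAYSCALE)
# Pixels above the preset gray threshold become white (target), the rest black
_, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)  # assumed threshold

# Restrict the binarized image to the interior of the target contour
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
target_contour = max(contours, key=cv2.contourArea)
contour_mask = np.zeros_like(gray)
cv2.drawContours(contour_mask, [target_contour], -1, 255, thickness=cv2.FILLED)
panel = cv2.bitwise_and(binary, binary, mask=contour_mask)  # extracted target object
```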
The following is a specific example:
First, annotation and training on refrigerator panel graphics. Referring to fig. 4, an artificial intelligence deep learning method is used to collect refrigerator panel feature images, the panel features of each model are annotated, and model training is performed on the sample information with a convolutional neural network algorithm to obtain a model that semantically recognizes the panel.
Second, feature detection and search. Feature detection is performed on candidate regions through a branch network of the instance-segmentation deep learning Mask-RCNN framework. Referring to fig. 5, the input image is first preprocessed and then fed into the pre-trained neural network to obtain the corresponding data information; a predetermined number of ROIs is then set for each point in this image data information, yielding multiple candidate ROIs; the candidate ROIs are fed into the region network for foreground/background binary classification, and part of the candidates are filtered out; finally, the remaining ROIs are classified, and the refrigerator panel position area is located.
Finally, target object extraction. The target features are detected by semantic segmentation, and the general outline of the panel is extracted from the complex background using target detection and threshold segmentation. A mask containing only black and white is extracted, and the Canny edge detection algorithm is applied to the mask to find an optimal edge, identifying the actual edges in the image as far as possible. A threshold segmentation method then determines the gray value of each pixel of the refrigerator panel image within the gray range; the obtained gray value of each pixel is compared with the previously determined threshold, segmentation and binarization are performed, and a suitable threshold is selected to reduce the influence of ambient illumination on the image features. Finally, the characteristic refrigerator panel image is extracted automatically.
In this embodiment, the target detection algorithm may also be Faster R-CNN, YOLO, SSD or the like.
In this method, the refrigerator panel area is detected using an artificial intelligence deep learning algorithm, semantic segmentation and image edge detection, pixel-level classification is performed, and the refrigerator panel information is extracted automatically, quickly and accurately, improving the working efficiency of quality inspectors.
The present application automatically extracts the features of refrigerator panels in manufacturing using an artificial intelligence image segmentation algorithm. Compared with traditional manual or semi-automatic methods, the method is based on a deep learning algorithm: by training on refrigerator panel data, objective, standardized and unified automatic feature extraction is achieved, with far higher accuracy than manual selection of feature regions. This solves the problem that manually selected feature regions vary from person to person and are insufficiently objective, and improves the efficiency and detection accuracy of the detection system.
The method provides automatic refrigerator panel feature extraction based on artificial intelligence. Using a deep learning algorithm and semantic segmentation, it achieves automatic and accurate extraction of refrigerator panels in manufacturing, replaces traditional manual annotation or semi-automatic photographing in specific environments, and is fast and convenient, effectively saving time and labor.
In one embodiment, a product detection method extracts a target object from a product image according to the image target object extraction method of any of the above embodiments, obtains image information of the target object, and judges whether the product meets a preset requirement according to the image information of the target object.
In this embodiment, the product detection method may also be called a defect detection method for the target object of a product image, and it is used to detect defects in a certain part of a product, for example defects in a refrigerator panel. The target object of the product is first extracted by the target object extraction method; for example, the refrigerator panel is first extracted from the refrigerator image so that the panel is accurately located, then whether the product meets the preset requirement is judged from the image information of the target object, and defect detection is performed on the refrigerator panel. In addition to the steps of the image target object extraction method, the product detection method includes a step of detecting defects of the target object. It is worth mentioning that this defect detection step may be implemented with existing techniques or with conventional defect detection methods in the art; such means are known to those skilled in the art and are not detailed in this embodiment.
In one embodiment, as shown in fig. 2, there is provided an image target object extracting apparatus including:
an image acquisition module 210 for acquiring an image;
a feature information obtaining module 220, configured to input the image into a pre-trained training model for feature detection, so as to obtain feature information of the image;
a region-of-interest obtaining module 230, configured to select a preset number of regions of interest for each piece of feature information;
a target position information obtaining module 240, configured to perform foreground/background binary classification on the image containing the regions of interest to obtain foreground and background pixel values, classify based on a preset pixel threshold, and obtain the coverage area of a target object in the image from the regions of interest;
a target contour obtaining module 250, configured to obtain a target contour of the target object in the image based on a coverage area of the target object in the image;
and the target object extraction module 260 is used for extracting the target object from the image according to the target contour in the image.
In one embodiment, the image target object extracting apparatus further includes:
the training image acquisition module is used for acquiring a training image;
the training image annotation module is used for annotating the contour of the target object of the training image to generate a training sample, the training sample being an image file in which the contour of the target object has been annotated;
and the training model generation module is used for inputting the training samples into a convolutional neural network for learning to obtain the training model containing the feature information of the target object of each training image.
In one embodiment, the region-of-interest obtaining module comprises:
a candidate region-of-interest obtaining unit, configured to select a preset number of candidate regions of interest for each piece of feature information;
and a region-of-interest screening unit, configured to perform foreground/background binary classification on the candidate regions of interest and screen the regions of interest of the image out of the preset number of candidate regions of interest.
In one embodiment, the target contour obtaining module is further configured to extract a target contour of the target object in the image by using a target detection method and a threshold segmentation method based on a coverage area of the target object in the image.
In one embodiment, the target object extraction module comprises:
an edge position information obtaining unit, configured to perform edge processing on the extracted target contour to obtain edge position information of the target contour;
and the target object extracting unit is used for extracting the target object from the image based on the edge position information of the target contour.
In one embodiment, the edge position information obtaining unit includes:
a mask image obtaining subunit, configured to obtain a black-and-white mask image of the image based on the target contour;
and an edge position information obtaining subunit, configured to perform edge processing on the black-and-white mask image with the Canny detection algorithm to obtain the edge position information of the target contour.
In one embodiment, the target object extraction unit includes:
a gray value obtaining subunit, configured to obtain a gray value of each pixel point in the image by using a threshold segmentation method;
a binarization processing subunit, configured to compare the gray value of each pixel in the image with a preset gray threshold and binarize the image according to the comparison results to obtain a binarized image;
and a target object extraction subunit, configured to extract the target object from the binarized image based on the edge position information of the target contour.
For specific limitations of the image target object extraction device, reference may be made to the above limitations of the image target object extraction method, which are not repeated here. The modules in the image target object extraction device may be implemented wholly or partly in software, hardware or a combination thereof. The modules may be embedded in, or independent of, a processor in the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided; its internal structure may be as shown in fig. 3. The computer device includes a processor, a memory, a network interface, a display screen and an input device connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system and a computer program; the internal memory provides an environment for running the operating system and the computer program in the nonvolatile storage medium. The network interface of the computer device is used to communicate with external computers through a network connection. The computer program, when executed by the processor, implements an image target object extraction method. The display screen of the computer device may be a liquid crystal display or an electronic ink display, and the input device may be a touch layer covering the display screen, a key, a trackball or a touchpad provided on the housing of the computer device, or an external keyboard, touchpad or mouse.
Those skilled in the art will appreciate that the architecture shown in fig. 3 is merely a block diagram of some of the structures associated with the disclosed aspects and does not limit the computing devices to which the disclosed aspects apply; a particular computing device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, there is provided a computer device comprising a memory storing a computer program and a processor implementing the following steps when the processor executes the computer program:
acquiring an image;
inputting the image into a training model trained in advance for feature detection to obtain feature information of the image;
selecting a preset number of regions of interest for each piece of feature information;
performing foreground/background binary classification on the image containing the regions of interest to obtain foreground and background pixel values, classifying based on a preset pixel threshold, and obtaining the coverage area of a target object in the image from the regions of interest;
obtaining a target contour of the target object in the image based on the coverage area of the target object in the image;
and extracting the target object from the image according to the target contour in the image.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring a training image;
annotating the contour of the target object of the training image to generate a training sample, wherein the training sample is an image file in which the contour of the target object has been annotated;
and inputting the training samples into a convolutional neural network for learning to obtain the training model containing the feature information of the target object of each training image.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
selecting a preset number of candidate regions of interest for each piece of feature information;
and performing foreground/background binary classification on the candidate regions of interest, and screening the regions of interest of the image out of the preset number of candidate regions of interest.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
and extracting a target contour of the target object in the image by adopting a target detection method and a threshold segmentation method based on the coverage area of the target object in the image.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
performing edge processing on the extracted target contour to obtain edge position information of the target contour;
and extracting the target object from the image based on the edge position information of the target contour.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
obtaining a black-and-white mask image of the image based on the target contour;
and performing edge processing on the black-and-white mask image with a Canny detection algorithm to obtain edge position information of the target contour.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring the gray value of each pixel in the image by a threshold segmentation method;
comparing the gray value of each pixel in the image with a preset gray threshold, and binarizing the image according to the comparison results to obtain a binarized image;
and extracting the target object from the binarized image based on the edge position information of the target contour.
In one embodiment, a computer device is provided that includes a memory storing a computer program and a processor implementing a product detection method when executing the computer program.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring an image;
inputting the image into a training model trained in advance for feature detection to obtain feature information of the image;
selecting a preset number of regions of interest for each piece of feature information;
performing foreground/background binary classification on the image containing the regions of interest to obtain foreground and background pixel values, classifying based on a preset pixel threshold, and obtaining the coverage area of a target object in the image from the regions of interest;
obtaining a target contour of the target object in the image based on the coverage area of the target object in the image;
and extracting the target object from the image according to the target contour in the image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring a training image;
annotating the contour of the target object of the training image to generate a training sample, wherein the training sample is an image file in which the contour of the target object has been annotated;
and inputting the training samples into a convolutional neural network for learning to obtain the training model containing the feature information of the target object of each training image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
selecting a preset number of candidate regions of interest for each piece of feature information;
and performing foreground/background binary classification on the candidate regions of interest, and screening the regions of interest of the image out of the preset number of candidate regions of interest.
In one embodiment, the computer program when executed by the processor further performs the steps of:
and extracting a target contour of the target object in the image by adopting a target detection method and a threshold segmentation method based on the coverage area of the target object in the image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
performing edge processing on the extracted target contour to obtain edge position information of the target contour;
and extracting the target object from the image based on the edge position information of the target contour.
In one embodiment, the computer program when executed by the processor further performs the steps of:
obtaining a black-and-white mask image of the image based on the target contour;
and performing edge processing on the black-and-white mask image with a Canny detection algorithm to obtain edge position information of the target contour.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring the gray value of each pixel in the image by a threshold segmentation method;
comparing the gray value of each pixel in the image with a preset gray threshold, and binarizing the image according to the comparison results to obtain a binarized image;
and extracting the target object from the binarized image based on the edge position information of the target contour.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which, when executed by a processor, implements a product detection method.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features are described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (11)

1. An image target object extraction method, comprising:
acquiring an image;
inputting the image into a training model trained in advance for feature detection to obtain feature information of the image;
selecting a preset number of regions of interest for each piece of feature information;
performing foreground/background binary classification on the image containing the regions of interest to obtain foreground and background pixel values, classifying based on a preset pixel threshold, and obtaining the coverage area of a target object in the image from the regions of interest;
obtaining a target contour of the target object in the image based on the coverage area of the target object in the image;
and extracting the target object from the image according to the target contour in the image.
2. The method of claim 1, further comprising, prior to the step of acquiring an image:
acquiring a training image;
annotating the contour of the target object of the training image to generate a training sample, wherein the training sample is an image file in which the contour of the target object has been annotated;
and inputting the training samples into a convolutional neural network for learning to obtain the training model containing the feature information of the target object of each training image.
3. The method of claim 1, wherein the step of selecting a preset number of regions of interest for each piece of feature information comprises:
selecting a preset number of candidate regions of interest for each piece of feature information;
and performing foreground/background binary classification on the candidate regions of interest, and screening the regions of interest of the image out of the preset number of candidate regions of interest.
4. The method of claim 1, wherein the step of obtaining a target contour of the target object in the image based on a coverage area of the target object in the image comprises:
and extracting a target contour of the target object in the image by adopting a target detection method and a threshold segmentation method based on the coverage area of the target object in the image.
5. The method according to any one of claims 1-4, wherein said step of extracting said target object from said image based on a target contour in said image comprises:
performing edge processing on the extracted target contour to obtain edge position information of the target contour;
and extracting the target object from the image based on the edge position information of the target contour.
6. The method according to claim 5, wherein the step of performing edge processing on the extracted target contour to obtain edge position information of the target contour comprises:
obtaining a black-and-white mask image of the image based on the target contour;
and performing edge processing on the black-and-white mask image with a Canny detection algorithm to obtain edge position information of the target contour.
7. The method of claim 5, wherein the step of extracting the target object from the image based on the edge position information of the target contour comprises:
acquiring the gray value of each pixel point in the image by adopting a threshold segmentation method;
comparing the gray value of each pixel in the image with a preset gray threshold, and binarizing the image according to the comparison results to obtain a binarized image;
and extracting the target object from the binarized image based on the edge position information of the target contour.
8. A method of product inspection, comprising:
extracting a target object from a product image using the image target object extraction method according to any one of claims 1 to 7, obtaining image information of the target object, and judging whether the product meets a preset requirement according to the image information of the target object.
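The acceptance criterion in claim 8 is left open ("a preset requirement"), so the following check is purely hypothetical: it measures the extracted contour's area and concavity and compares them with invented specification values.

```python
import cv2

def product_ok(target_contour, min_area=5000.0, max_concavity=0.02):
    # Image information of the target object: area and a simple shape measure.
    area = cv2.contourArea(target_contour)
    hull = cv2.convexHull(target_contour)
    hull_area = cv2.contourArea(hull)
    # Concavity (hull area not covered by the object) as a crude defect proxy.
    concavity = 1.0 - area / hull_area if hull_area > 0 else 1.0
    # Judge against the preset requirement (values here are invented).
    return area >= min_area and concavity <= max_concavity
```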
9. An image target object extraction device characterized by comprising:
the image acquisition module is used for acquiring an image;
the feature information acquisition module is used for inputting the image into a pre-trained training model for feature detection to acquire feature information of the image;
the region-of-interest obtaining module is used for selecting a preset number of regions of interest for each piece of feature information;
the target position information obtaining module is used for performing foreground-background binary classification on the image containing the regions of interest to obtain foreground and background pixel values, classifying the pixels against a preset pixel threshold, and obtaining a coverage area of the target object in the image from the regions of interest;
a target contour obtaining module, configured to obtain a target contour of the target object in the image based on a coverage area of the target object in the image;
and the target object extraction module is used for extracting the target object from the image according to the target contour in the image.
10. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 8 when executing the computer program.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 8.
CN202010542166.1A 2020-06-15 2020-06-15 Target object extraction method, product detection method, device, computer and medium Active CN111753692B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010542166.1A CN111753692B (en) 2020-06-15 2020-06-15 Target object extraction method, product detection method, device, computer and medium

Publications (2)

Publication Number Publication Date
CN111753692A 2020-10-09
CN111753692B (en) 2024-05-28

Family

ID=72675169

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010542166.1A Active CN111753692B (en) 2020-06-15 2020-06-15 Target object extraction method, product detection method, device, computer and medium

Country Status (1)

Country Link
CN (1) CN111753692B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105139415A (en) * 2015-09-29 2015-12-09 小米科技有限责任公司 Foreground and background segmentation method and apparatus of image, and terminal
CN109635733A * 2018-12-12 2019-04-16 哈尔滨工业大学 Parking lot and vehicle target detection method based on visual saliency and queue correction
CN109741346A * 2018-12-30 2019-05-10 上海联影智能医疗科技有限公司 Region-of-interest extraction method, device, equipment and storage medium
CN110490212A (en) * 2019-02-26 2019-11-22 腾讯科技(深圳)有限公司 Molybdenum target image processing arrangement, method and apparatus
CN109993734A (en) * 2019-03-29 2019-07-09 北京百度网讯科技有限公司 Method and apparatus for output information

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112330598A (en) * 2020-10-14 2021-02-05 浙江华睿科技有限公司 Method and device for detecting stiff silk defects on chemical fiber surface and storage medium
CN112330598B (en) * 2020-10-14 2023-07-25 浙江华睿科技股份有限公司 Method, device and storage medium for detecting stiff yarn defects on chemical fiber surface
CN112712077B (en) * 2020-12-30 2023-04-07 中冶赛迪信息技术(重庆)有限公司 Steel flow contour determination method, system, terminal and medium
CN112712077A (en) * 2020-12-30 2021-04-27 中冶赛迪重庆信息技术有限公司 Steel flow contour determination method, system, terminal and medium
CN113076907A (en) * 2021-04-16 2021-07-06 青岛海尔电冰箱有限公司 Method for identifying information of articles in refrigerator, refrigerator and computer storage medium
CN113505781A (en) * 2021-06-01 2021-10-15 北京旷视科技有限公司 Target detection method and device, electronic equipment and readable storage medium
CN113223041A (en) * 2021-06-25 2021-08-06 上海添音生物科技有限公司 Method, system and storage medium for automatically extracting target area in image
CN113223041B (en) * 2021-06-25 2024-01-12 上海添音生物科技有限公司 Method, system and storage medium for automatically extracting target area in image
CN113361487A (en) * 2021-07-09 2021-09-07 无锡时代天使医疗器械科技有限公司 Foreign matter detection method, device, equipment and computer readable storage medium
CN113902910A (en) * 2021-12-10 2022-01-07 中国科学院自动化研究所 Vision measurement method and system
CN114789452B (en) * 2022-06-21 2022-09-16 季华实验室 Robot grabbing method and system based on machine vision
CN114789452A (en) * 2022-06-21 2022-07-26 季华实验室 Robot grabbing method and system based on machine vision
CN115170792A (en) * 2022-09-07 2022-10-11 烟台艾睿光电科技有限公司 Infrared image processing method, device and equipment and storage medium
CN115170792B (en) * 2022-09-07 2023-01-10 烟台艾睿光电科技有限公司 Infrared image processing method, device and equipment and storage medium
WO2024051067A1 (en) * 2022-09-07 2024-03-14 烟台艾睿光电科技有限公司 Infrared image processing method, apparatus, and device, and storage medium
WO2024065976A1 (en) * 2022-09-28 2024-04-04 广东利元亨智能装备股份有限公司 Cell alignment degree measurement method, controller, detection system, and storage medium

Similar Documents

Publication Publication Date Title
CN111753692B (en) Target object extraction method, product detection method, device, computer and medium
CN110110799B (en) Cell sorting method, cell sorting device, computer equipment and storage medium
CN113160192B Vision-based snow groomer appearance defect detection method and device in complex backgrounds
CN110060237B (en) Fault detection method, device, equipment and system
TWI744283B (en) Method and device for word segmentation
CN110148130B (en) Method and device for detecting part defects
CN110838126B (en) Cell image segmentation method, cell image segmentation device, computer equipment and storage medium
CN111160301B (en) Tunnel disease target intelligent identification and extraction method based on machine vision
CN105913093A Template matching method for character recognition and processing
CN111274926B (en) Image data screening method, device, computer equipment and storage medium
CN110598687A (en) Vehicle identification code detection method and device and computer equipment
CN112365497A (en) High-speed target detection method and system based on Trident Net and Cascade-RCNN structures
CN111126393A (en) Vehicle appearance refitting judgment method and device, computer equipment and storage medium
CN113205511B (en) Electronic component batch information detection method and system based on deep neural network
CN112861861B (en) Method and device for recognizing nixie tube text and electronic equipment
CN111340796A (en) Defect detection method and device, electronic equipment and storage medium
CN110751619A (en) Insulator defect detection method
CN114998192B (en) Defect detection method, device, equipment and storage medium based on deep learning
CN108460344A (en) Dynamic area intelligent identifying system in screen and intelligent identification Method
CN115731220A (en) Grey cloth defect positioning and classifying method, system, equipment and storage medium
CN114723677A (en) Image defect detection method, image defect detection device, image defect detection equipment and storage medium
CN110751013B (en) Scene recognition method, apparatus and computer readable storage medium
CN115995023A (en) Flaw detection method, flaw detection device, electronic device, computer-readable storage medium, and product detection method
CN113269236B (en) Assembly body change detection method, device and medium based on multi-model integration
CN111027399A Remote sensing image water-surface ship identification method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant