WO2023201924A1 - Object defect detection method and apparatus, computer device, and storage medium - Google Patents

Object defect detection method and apparatus, computer device, and storage medium

Info

Publication number
WO2023201924A1
Authority
WO
WIPO (PCT)
Prior art keywords
defect type
image
probability
detected
target
Prior art date
Application number
PCT/CN2022/108429
Other languages
English (en)
French (fr)
Inventor
田倬韬
王远
易振彧
刘枢
吕江波
沈小勇
Original Assignee
深圳思谋信息科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳思谋信息科技有限公司
Publication of WO2023201924A1 publication Critical patent/WO2023201924A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/254Fusion techniques of classification results, e.g. of results related to same input data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Definitions

  • the present application relates to the field of machine vision technology, and in particular to an object defect detection method, device, computer equipment and storage medium.
  • Machine vision technology is an interdisciplinary subject involving many fields such as artificial intelligence, computer science, image processing, and pattern recognition.
  • Machine vision uses machines instead of human eyes to make measurements and judgments: images of physical objects are acquired, processed and analyzed, and the results are ultimately applied to inspection, control and other practical applications. It is widely used for product defect detection in the manufacturing industry.
  • Defects that appear in products during the production process often have a certain degree of randomness, that is, defect types, shapes and sizes vary.
  • the defect detection models used in traditional machine vision inspection are built for a specific product or a specific defect category, and therefore cannot accurately identify different product defect types.
  • an object defect detection method is provided.
  • embodiments of the present application provide an object defect detection method, including:
  • acquiring a feature map corresponding to an image to be detected; the image to be detected is an image obtained by photographing a target object;
  • the target object is an object on which defect detection needs to be performed;
  • dividing the feature map into regions to obtain a first sub-feature map corresponding to each pixel in the image to be detected;
  • inputting each of the first sub-feature maps into a one-to-one corresponding first target area classifier to obtain a first defect type probability corresponding to the image to be detected;
  • the first defect type probability includes the attribution probability of each pixel in the image to be detected for each defect type;
  • inputting the feature map into a target global classifier to obtain a second defect type probability corresponding to the image to be detected;
  • the second defect type probability includes the attribution probability of each pixel in the image to be detected for each of the defect types;
  • the defect type corresponding to the target object is determined according to the first defect type probability and the second defect type probability.
  • an object defect detection device including:
  • the acquisition module is used to obtain the feature map corresponding to the image to be detected;
  • the image to be detected is an image obtained by photographing the target object;
  • the target object is an object that needs to be defect detected;
  • a dividing module, used to divide the feature map into regions to obtain the first sub-feature map corresponding to each pixel in the image to be detected;
  • a first input module, used to input each of the first sub-feature maps into a one-to-one corresponding first target area classifier to obtain the first defect type probability corresponding to the image to be detected; the first defect type probability includes the attribution probability of each pixel in the image to be detected for each defect type;
  • a second input module, used to input the feature map into the target global classifier to obtain the second defect type probability corresponding to the image to be detected; the second defect type probability includes the attribution probability of each pixel in the image to be detected for each of the defect types;
  • a determining module configured to determine the defect type corresponding to the target object according to the first defect type probability and the second defect type probability.
  • embodiments of the present application provide a computer device.
  • the computer device includes a memory and a processor.
  • the memory stores a computer program.
  • when the processor executes the computer program, the following steps are implemented:
  • acquiring a feature map corresponding to an image to be detected; the image to be detected is an image obtained by photographing a target object;
  • the target object is an object on which defect detection needs to be performed;
  • dividing the feature map into regions to obtain a first sub-feature map corresponding to each pixel in the image to be detected;
  • inputting each of the first sub-feature maps into a one-to-one corresponding first target area classifier to obtain a first defect type probability corresponding to the image to be detected;
  • the first defect type probability includes the attribution probability of each pixel in the image to be detected for each defect type;
  • inputting the feature map into a target global classifier to obtain a second defect type probability corresponding to the image to be detected;
  • the second defect type probability includes the attribution probability of each pixel in the image to be detected for each of the defect types;
  • the defect type corresponding to the target object is determined according to the first defect type probability and the second defect type probability.
  • embodiments of the present application provide a computer-readable storage medium.
  • the computer-readable storage medium stores a computer program.
  • when the computer program is executed by a processor, the following steps are implemented:
  • acquiring a feature map corresponding to an image to be detected; the image to be detected is an image obtained by photographing a target object;
  • the target object is an object on which defect detection needs to be performed;
  • dividing the feature map into regions to obtain a first sub-feature map corresponding to each pixel in the image to be detected;
  • inputting each of the first sub-feature maps into a one-to-one corresponding first target area classifier to obtain a first defect type probability corresponding to the image to be detected;
  • the first defect type probability includes the attribution probability of each pixel in the image to be detected for each defect type;
  • inputting the feature map into a target global classifier to obtain a second defect type probability corresponding to the image to be detected;
  • the second defect type probability includes the attribution probability of each pixel in the image to be detected for each of the defect types;
  • the defect type corresponding to the target object is determined according to the first defect type probability and the second defect type probability.
  • embodiments of the present application provide a computer program product.
  • the computer program product includes a computer program.
  • when the computer program is executed by a processor, the following steps are implemented:
  • acquiring a feature map corresponding to an image to be detected; the image to be detected is an image obtained by photographing a target object;
  • the target object is an object on which defect detection needs to be performed;
  • dividing the feature map into regions to obtain a first sub-feature map corresponding to each pixel in the image to be detected;
  • inputting each of the first sub-feature maps into a one-to-one corresponding first target area classifier to obtain a first defect type probability corresponding to the image to be detected;
  • the first defect type probability includes the attribution probability of each pixel in the image to be detected for each defect type;
  • inputting the feature map into a target global classifier to obtain a second defect type probability corresponding to the image to be detected;
  • the second defect type probability includes the attribution probability of each pixel in the image to be detected for each of the defect types;
  • the defect type corresponding to the target object is determined according to the first defect type probability and the second defect type probability.
  • The above object defect detection methods, devices, computer equipment, storage media and computer program products can be applied to any deep semantic segmentation model. By inputting the first sub-feature map corresponding to each pixel in the image to be detected into the one-to-one corresponding first target area classifier, each pixel in the image to be detected has its own first target area classifier for defect type classification, so that the local detail information of the image to be detected can be fully and effectively used to identify the defect type of the target object in the image, yielding the first defect type probability corresponding to the image to be detected. By inputting the feature map directly into the target global classifier, the global information of the image to be detected can be fully used to identify the defect type of the target object, yielding the second defect type probability corresponding to the image to be detected. Finally, the defect type corresponding to the target object is determined from the first defect type probability and the second defect type probability corresponding to the image to be detected. This prevents the situation in which, when only the first defect type probability is used for defect type identification, excessive attention to the local information of the image to be detected means that a large surface defect of the target object cannot be effectively and completely located, so the defect type cannot be accurately identified; it also prevents the situation in which, when only the second defect type probability is used, the local information of the image to be detected cannot be fully and effectively exploited, so the defect type of the target object cannot be accurately identified. Combining the advantages of both cases improves the accuracy of defect type identification for the target object.
  • Figure 1 is a schematic flow chart of an object defect detection method in some embodiments.
  • Figure 2 is a schematic flow chart of the step of obtaining the first defect type probability in some embodiments.
  • Figure 3 is a schematic flow chart of an object defect detection method in other embodiments.
  • Figure 4 is a structural block diagram of an object defect detection device in some embodiments.
  • Figure 5 is an internal structure diagram of a computer device in some embodiments.
  • an object defect detection method is provided, which can be applied to any deep semantic segmentation model.
  • This embodiment illustrates the application of this method to a server. It can be understood that this method can also be applied to a terminal or a system including a terminal and a server, and is implemented through the interaction between the terminal and the server.
  • the method includes the following steps:
  • Step S110 Obtain the feature map corresponding to the image to be detected.
  • the image to be detected is an image obtained by photographing the target object.
  • the target object is an object that needs to be defect detected.
  • In a specific implementation, the server can obtain the image to be detected, captured of the target object that requires defect detection, and input the image to be detected into a trained deep semantic segmentation model.
  • The feature extraction module in the deep semantic segmentation model extracts features from the image to be detected through a feature extraction function and outputs a feature map.
  • In practice, the feature extraction function can be G. If the input image to be detected is I, the feature map output by the feature extraction module is X, and the calculation formula is as follows: X = G(I).
  • The size of the feature map X is [h, w, d], where h represents the height of the feature map, w represents the width of the feature map, and d represents the number of feature channels of the feature map.
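As a rough, non-authoritative sketch of this step, the feature extraction function G can be any network that maps the image to be detected I to a feature map X of size [h, w, d]. The backbone architecture, channel count d = 64 and image size below are illustrative assumptions, not the patent's implementation:

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Minimal stand-in for the feature extraction function G: maps an image I
    of size [3, h, w] to a feature map X of size [h, w, d]."""
    def __init__(self, d: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, d, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(d, d, kernel_size=3, padding=1), nn.ReLU(),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: [1, 3, h, w] -> features: [1, d, h, w]
        features = self.backbone(image)
        # rearrange to [h, w, d] to match the notation in the text
        return features.squeeze(0).permute(1, 2, 0)

G = FeatureExtractor(d=64)
I = torch.randn(1, 3, 128, 128)      # image to be detected
X = G(I)                             # feature map, shape [h, w, d] = [128, 128, 64]
```

Any segmentation backbone whose output preserves (or is upsampled back to) the input width and height would play the role of G here.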
  • Step S120 Divide the feature map into regions to obtain the first sub-feature map corresponding to each pixel in the image to be detected.
  • the server can divide the feature map into regions to obtain the first sub-feature map corresponding to each pixel in the image to be detected.
  • Step S130 Input each first sub-feature map into a one-to-one corresponding first target area classifier to obtain the first defect type probability corresponding to the image to be detected.
  • the first defect type probability includes the attribution probability of each pixel in the image to be detected for each defect type.
  • the first target area classifier can also be named area-aware prototype.
  • the target classifier P used by the deep semantic segmentation model of this solution has a size of [H*W, n, d], where n represents the number of defect types to be predicted, that is, there are n defect types.
  • After the feature map is divided into H*W independent regions, the target region classifier Pi (i ∈ {1, 2, ..., H*W}) that is unique to each region in the target classifier P performs defect type prediction on a region of the feature map of size [kh, kw, d]; the size of each Pi is [1, n, d].
  • the server inputs each first sub-feature map into the one-to-one corresponding first target area classifier.
  • From the defect type probability that each first target area classifier outputs for its corresponding first sub-feature map, the first defect type probability corresponding to the image to be detected can be obtained.
  • the first defect type probability includes the attribution probability of each pixel in the image to be detected for each defect type.
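For intuition, a minimal sketch of this region-aware classification at the per-pixel scale (kh = kw = 1) follows. The dot-product similarity and the randomly initialized prototypes are assumptions for illustration; in practice the prototypes are learned:

```python
import torch
import torch.nn.functional as F

# Hypothetical shapes: the feature map X is [h, w, d] and is divided into H*W
# per-pixel regions (kh = kw = 1), each with its own region classifier
# (prototype set) P_i of size [n, d].
h, w, d, n = 128, 128, 64, 5
X = torch.randn(h, w, d)
P = torch.randn(h * w, n, d)           # one prototype set per region/pixel

X_regions = X.reshape(h * w, 1, d)     # first sub-feature maps, [H*W, 1, d]
# similarity f(X_i, P_i): here a plain dot product between each pixel feature
# and each defect-type prototype of its own region classifier
similarity = (X_regions * P).sum(dim=-1)                # [H*W, n]
y_p = F.softmax(similarity, dim=-1).reshape(h, w, n)    # first defect type probability
```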
  • Step S140 Input the feature map to the target global classifier to obtain the second defect type probability corresponding to the image to be detected.
  • the second defect type probability includes the attribution probability of each pixel in the image to be detected for each defect type.
  • In a specific implementation, the server can input the feature map directly into the globally shared target global classifier to obtain the defect type probability output by the target global classifier for the feature map, and the second defect type probability corresponding to the image to be detected can then be obtained from this defect type probability.
  • The second defect type probability includes the attribution probability of each pixel in the image to be detected for each defect type.
  • Step S150 Determine the defect type corresponding to the target object based on the first defect type probability and the second defect type probability.
  • In a specific implementation, the server can determine the target defect type probability corresponding to the image to be detected based on the first defect type probability and the second defect type probability.
  • The target defect type probability includes the target attribution probability of each pixel in the image to be detected for each defect type, so that the server can determine the defect type corresponding to the target object based on the target attribution probability of each pixel in the image to be detected for each defect type.
  • The defect type corresponding to the target object may be a surface defect type of the target object, such as cracking, silver streaks (crazing), flow lines, ripples, ripple marks, embrittlement and other defect types.
  • The above object defect detection method can be applied to any deep semantic segmentation model: the feature map corresponding to the image to be detected is obtained, where the image to be detected is an image obtained by photographing a target object that requires defect detection; the feature map is then divided into regions to obtain the first sub-feature map corresponding to each pixel in the image to be detected; each first sub-feature map is input into the one-to-one corresponding first target area classifier to obtain the first defect type probability corresponding to the image to be detected, where the first defect type probability includes the attribution probability of each pixel in the image to be detected for each defect type; at the same time, the feature map is input into the target global classifier to obtain the second defect type probability corresponding to the image to be detected, where the second defect type probability includes the attribution probability of each pixel in the image to be detected for each defect type; and the defect type corresponding to the target object is determined according to the first defect type probability and the second defect type probability.
  • In this way, because the first sub-feature map corresponding to each pixel in the image to be detected is input into its own first target area classifier, every pixel has a dedicated first target area classifier for defect type classification, so the local detail information of the image to be detected can be fully and effectively used to identify the defect type of the target object, yielding the first defect type probability; by inputting the feature map directly into the target global classifier, the global information of the image to be detected can be fully used to identify the defect type of the target object, yielding the second defect type probability. Finally, determining the defect type of the target object from both probabilities prevents the situation in which, when only the first defect type probability is used, excessive attention to local information means that a large surface defect cannot be effectively and completely located and the defect type cannot be accurately identified, and also prevents the situation in which, when only the second defect type probability is used, the local information of the image cannot be fully and effectively exploited; combining the advantages of both improves the accuracy of defect type identification for the target object.
  • In some embodiments, as shown in Figure 2, step S130 includes:
  • Step S210 Input each first sub-feature map into the one-to-one corresponding first target area classifier to obtain the defect type probability corresponding to each first sub-feature map.
  • In a specific implementation, when the server inputs each first sub-feature map into the one-to-one corresponding first target area classifier to obtain the first defect type probability corresponding to the image to be detected, the server first obtains the defect type probability that each first target area classifier outputs for its corresponding first sub-feature map.
  • In practice, the defect type probability corresponding to each sub-feature map can be written as softmax(f(Xi, Pi)), which represents the attribution probability of each pixel in the sub-feature map (in this embodiment, the first sub-feature map) for each defect type and has size [kh, kw, n]; softmax is the normalized multi-classification function, and f is the similarity calculation function used to determine the similarity between the d-dimensional feature vectors of each sub-feature map obtained from the division (in this embodiment, each first sub-feature map) and the d-dimensional feature vector corresponding to each defect type.
  • the similarity calculation method can be any vector similarity calculation method, such as direct dot multiplication of vectors, cosine similarity between vectors, etc., and there is no limit here.
  • Step S220 Splice the first sub-feature maps according to the spatial order of the first sub-feature maps to obtain the spliced first sub-feature map.
  • In a specific implementation, after obtaining the defect type probability that each first target area classifier outputs for its corresponding first sub-feature map, the server can splice the first sub-feature maps according to their spatial order to obtain the spliced first sub-feature map.
  • Step S230 Determine the defect type probability corresponding to the feature map based on the defect type probability corresponding to each spliced first sub-feature map.
  • the defect type probability corresponding to the feature map includes the attribution probability of each pixel in the feature map for each defect type.
  • In a specific implementation, after obtaining the spliced first sub-feature map, the server can determine the defect type probability corresponding to the feature map according to the defect type probability corresponding to each spliced first sub-feature map.
  • The defect type probability corresponding to the feature map includes the attribution probability of each pixel in the feature map for each defect type.
  • Step S240 Determine the first defect type probability corresponding to the image to be detected based on the defect type probability corresponding to the feature map.
  • In a specific implementation, since the feature map output by the deep semantic segmentation model has the same width and height as the input original image (that is, the image to be detected), after obtaining the defect type probability corresponding to the feature map, the server can determine the attribution probability of each pixel in the image to be detected for each defect type from the attribution probability of each pixel in the feature map for each defect type, and thus obtain the first defect type probability.
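The divide-classify-splice pipeline of steps S210 to S240 can be sketched as follows for a generic region size [kh, kw]; the concrete shapes, the einsum-based dot-product similarity and the softmax placement are illustrative assumptions rather than the patent's implementation:

```python
import torch
import torch.nn.functional as F

# Per-region defect probabilities of shape [kh, kw, n] are computed region by
# region and then placed back in their spatial order to form an [h, w, n] map.
h, w, d, n, kh, kw = 8, 8, 16, 5, 2, 2
H, W = h // kh, w // kw
X = torch.randn(h, w, d)

# divide the feature map into H*W regions of size [kh, kw, d]
regions = X.reshape(H, kh, W, kw, d).permute(0, 2, 1, 3, 4)     # [H, W, kh, kw, d]

# assume each region i has its own prototype set P_i of shape [n, d]
P = torch.randn(H, W, n, d)
sim = torch.einsum('HWhwd,HWnd->HWhwn', regions, P)             # per-region similarities
probs = F.softmax(sim, dim=-1)                                  # [H, W, kh, kw, n]

# splice the per-region probabilities back in spatial order -> [h, w, n]
y_map = probs.permute(0, 2, 1, 3, 4).reshape(h, w, n)
```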
  • In the technical solution of this embodiment, each first sub-feature map is input into the one-to-one corresponding first target area classifier to obtain the defect type probability corresponding to each first sub-feature map; the first sub-feature maps are spliced according to their spatial order to obtain the spliced first sub-feature map; the defect type probability corresponding to the feature map is determined according to the defect type probability corresponding to each spliced first sub-feature map, where this probability includes the attribution probability of each pixel in the feature map for each defect type; and finally, the first defect type probability is determined according to the defect type probability corresponding to the feature map. In this way, because the first sub-feature map corresponding to each pixel in the image to be detected is input into its own first target area classifier, every pixel has a dedicated classifier for defect type classification, so the local detail information of the image to be detected can be fully and effectively used to identify the defect type of the target object, improving the ability to capture local details during defect type identification.
  • determining the defect type corresponding to the target object based on the first defect type probability and the second defect type probability includes:
  • adding the first defect type probability and the second defect type probability to obtain the target defect type probability corresponding to the image to be detected; the target defect type probability includes the target attribution probability of each pixel in the image to be detected for each defect type;
  • determining the defect type corresponding to the target object according to the target attribution probability of each pixel in the image to be detected for each defect type.
  • In a specific implementation, when the server determines the defect type corresponding to the target object based on the first defect type probability and the second defect type probability, the server can add the first defect type probability and the second defect type probability to obtain the target defect type probability corresponding to the image to be detected; the target defect type probability includes the target attribution probability of each pixel in the image to be detected for each defect type. The defect type corresponding to the target object can then be determined based on the target attribution probability of each pixel in the image to be detected for each defect type.
  • In practice, if the first defect type probability is denoted y_p, the second defect type probability is denoted y_c, and the target defect type probability is denoted y, then y = y_p + y_c.
  • In the technical solution of this embodiment, the target defect type probability corresponding to the image to be detected is obtained by adding the first defect type probability and the second defect type probability; the target defect type probability includes the target attribution probability of each pixel in the image to be detected for each defect type, and the defect type corresponding to the target object is determined based on this target attribution probability. In this way, the situation is prevented in which, when only the first defect type probability is used for defect type identification, excessive attention to the local information of the image to be detected means that a large surface defect of the target object cannot be effectively and completely located, so the defect type cannot be accurately identified.
  • In some embodiments, determining the defect type corresponding to the target object based on the target attribution probability of each pixel in the image to be detected for each defect type includes:
  • determining, from the target attribution probabilities of each pixel in the image to be detected for each defect type, the maximum target attribution probability corresponding to each pixel in the image to be detected;
  • using the defect type corresponding to each maximum target attribution probability as the defect type corresponding to each pixel in the image to be detected;
  • determining the defect type corresponding to the target object according to the defect type corresponding to each pixel in the image to be detected.
  • In a specific implementation, in the process of determining the defect type corresponding to the target object based on the target attribution probability of each pixel in the image to be detected for each defect type, the server can determine, from the target attribution probabilities of each pixel for each defect type, the maximum target attribution probability corresponding to each pixel in the image to be detected, take the defect type corresponding to each maximum target attribution probability as the defect type corresponding to that pixel, and then determine, from the defect types of the pixels that constitute the target object in the image to be detected, the defect type of the target object.
  • In the technical solution of this embodiment, the maximum target attribution probability corresponding to each pixel in the image to be detected is determined from the target attribution probabilities of each pixel for each defect type; the defect type to which each maximum target attribution probability belongs is taken as the defect type corresponding to each pixel in the image to be detected; and the defect type corresponding to the target object is determined according to the defect type corresponding to each pixel in the image to be detected. In this way, the defect type corresponding to the target object can be accurately determined from the defect types of the pixels that constitute it, which improves the identification accuracy of the defect type of the target object.
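A minimal sketch of this fusion-and-argmax decision follows. The random probability maps, and the convention that index 0 stands for "no defect" when summarizing the object's defect types, are assumptions made only so the example runs:

```python
import torch
import torch.nn.functional as F

def fuse_and_decide(y_p: torch.Tensor, y_c: torch.Tensor) -> torch.Tensor:
    """Add the per-pixel probabilities from the region classifiers and the global
    classifier, then take the per-pixel argmax (the maximum target attribution
    probability) as each pixel's defect type. Shapes are [h, w, n]."""
    y = y_p + y_c                        # target defect type probability
    return y.argmax(dim=-1)              # [h, w] defect type index per pixel

h, w, n = 128, 128, 5
y_p = F.softmax(torch.randn(h, w, n), dim=-1)    # first defect type probability
y_c = F.softmax(torch.randn(h, w, n), dim=-1)    # second defect type probability
defect_map = fuse_and_decide(y_p, y_c)
# assuming index 0 stands for "no defect", the defect types present on the object
# are the remaining indices that appear in the per-pixel decision map
object_defect_types = torch.unique(defect_map[defect_map != 0])
```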
  • In some embodiments, before adding the first defect type probability and the second defect type probability to obtain the target defect type probability corresponding to the image to be detected, the method further includes:
  • dividing the feature map into regions at least once at different scales to obtain a second sub-feature map corresponding to the image to be detected at at least one scale; the at least one third defect type probability mentioned below includes the attribution probability of each pixel in the image to be detected for each defect type at the corresponding scale;
  • inputting each second sub-feature map into the one-to-one corresponding second target area classifier to obtain at least one third defect type probability corresponding to the image to be detected;
  • the target defect type probability is then obtained by adding the first defect type probability, the second defect type probability and the at least one third defect type probability.
  • The size of the second sub-feature map is larger than the size of the first sub-feature map and smaller than the size of the feature map.
  • In a specific implementation, when determining the target defect type probability corresponding to the image to be detected, the server can also divide the feature map into regions at least once at different scales to obtain the second sub-feature maps corresponding to the image to be detected at at least one scale, where the size of each second sub-feature map is larger than that of the first sub-feature map and smaller than that of the feature map.
  • In practice, the size of each second sub-feature map Xi corresponding to a given scale is [kh, kw, d].
  • The server inputs each second sub-feature map into the one-to-one corresponding second target area classifier to obtain the defect type probability output by each corresponding second target area classifier at each scale, and thereby obtains at least one third defect type probability corresponding to the image to be detected; the at least one third defect type probability includes the attribution probability of each pixel in the image to be detected for each defect type at the corresponding scale. Finally, the server adds the first defect type probability, the second defect type probability and the third defect type probabilities at the different scales to obtain the target defect type probability corresponding to the image to be detected.
  • In the technical solution of this embodiment, the feature map is divided into regions at least once at different scales to obtain second sub-feature maps corresponding to the image to be detected at at least one scale, where the size of a second sub-feature map is larger than that of a first sub-feature map and smaller than that of the feature map; each second sub-feature map is input into the one-to-one corresponding second target area classifier to obtain at least one third defect type probability corresponding to the image to be detected, which includes the attribution probability of each pixel for each defect type at the corresponding scale; and the first defect type probability, the second defect type probability and the at least one third defect type probability are added to obtain the target defect type probability. In this way, by dividing the feature map at multiple scales and inputting the divided sub-feature maps into one-to-one corresponding second target area classifiers, the third defect type probabilities of the feature map at different scales are obtained and added into the target defect type probability. The semantic content of the classifier can therefore change with the content of different regions of the feature map, and feature information from feature maps of different sizes can be fully extracted and fused, enriching the feature details of the image to be detected, so that the defect type of the target object can be accurately identified through the target defect type probability corresponding to the image to be detected, and the classifier's adaptive perception of different regions of each image to be detected is enhanced.
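Extending the previous sketches, the multi-scale fusion described here simply adds the third defect type probabilities from the coarser-scale region classifiers before the per-pixel decision. The number of extra scales and the random stand-in probability maps below are assumptions; in practice each map would come from the corresponding classifier:

```python
import torch
import torch.nn.functional as F

# y_p comes from the per-pixel (first) region classifiers, y_c from the global
# classifier, and each entry of third_probs from second target area classifiers
# at a coarser scale; all are [h, w, n] probability maps over n defect types.
h, w, n = 64, 64, 5
y_p = F.softmax(torch.randn(h, w, n), dim=-1)
y_c = F.softmax(torch.randn(h, w, n), dim=-1)
third_probs = [F.softmax(torch.randn(h, w, n), dim=-1) for _ in range(2)]  # e.g. two extra scales

y = y_p + y_c + sum(third_probs)       # target defect type probability
defect_map = y.argmax(dim=-1)          # per-pixel defect type decision
```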
  • In some embodiments, before obtaining the feature map corresponding to the image to be detected, the method further includes:
  • obtaining a region classifier to be trained; training the region classifier to be trained through a gradient back-propagation algorithm to obtain a trained first region classifier, and/or training the region classifier to be trained through an exponential moving average algorithm to obtain a trained second region classifier; and obtaining a target region classifier based on the first region classifier and/or the second region classifier.
  • the regional classifiers to be trained include local classifiers corresponding to different scales.
  • the target area classifier includes a first target area classifier, a target global classifier and a second target area classifier.
  • the server can also obtain the region classifier to be trained corresponding to the feature map at different scales, and train the region classifier to be trained through the gradient backpropagation algorithm to obtain the first region classifier that has been trained.
  • the first region classifier includes the d-dimensional feature vector representation corresponding to each defect type.
  • In practice, to obtain the first region classifier, the region classifier to be trained is defined as a learnable parameter and the network is allowed to optimize it; in each training iteration, this parameter is updated through gradient back-propagation.
  • the server can also train the region classifier to be trained through the Exponential Moving Average algorithm to obtain the second region classifier that has been trained.
  • the second region classifier includes a d-dimensional feature vector representation corresponding to each defect type.
  • In practice, the second region classifier can be obtained with an update of the form Pi ← α·Pi + (1 − α)·M(Xi, ti), where ti represents the defect type sample label corresponding to the i-th [kh, kw, d] region Xi in the sample feature map.
  • The size of ti is [kh, kw, n], containing the one-hot vectors (a mask) of the n defect types.
  • M represents a feature processing function, which is used to process Xi into a feature vector of size [1, n, d] through mask pooling.
  • α is the weight of the moving average, which can be 0.999.
  • the server may obtain a target area classifier based on the first area classifier and/or the second area classifier; the target area classifier includes a first target area classifier, a target global classifier, and a second target area classifier.
  • In practice, the target area classifier Pi can be obtained from the first region classifier, or it can be obtained based on the second region classifier; preferably, the target area classifier Pi is obtained by combining the first region classifier and the second region classifier.
  • the target area classifier can be the last 1x1 convolutional layer or the fully connected layer in the target deep semantic segmentation model.
  • In the technical solution of this embodiment, the region classifier to be trained corresponding to the feature map at different scales is obtained; the region classifier to be trained is trained through the gradient back-propagation algorithm to obtain the trained first region classifier, and/or trained through the exponential moving average algorithm to obtain the trained second region classifier; and the target region classifier is obtained based on the first region classifier and/or the second region classifier.
  • In this way, the target area classifier can be trained in a variety of ways, and combining the advantages of multiple methods can improve the accuracy of the target area classifier in identifying defect types while making the ways of obtaining the target area classifier more diverse.
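A hedged sketch of the exponential-moving-average update is given below. The mask-pooling implementation of M and the exact update form are assumptions consistent with the definitions above (labels ti of size [kh, kw, n], pooled prototypes of size [n, d], α = 0.999), not the patent's verbatim formula:

```python
import torch

def ema_update_prototype(P_i: torch.Tensor, X_i: torch.Tensor, t_i: torch.Tensor,
                         alpha: float = 0.999) -> torch.Tensor:
    """One exponential-moving-average update of a region prototype set.

    P_i: [n, d] current prototypes for one region
    X_i: [kh, kw, d] sample features of that region
    t_i: [kh, kw, n] one-hot defect labels (the mask used for mask pooling)
    """
    # mask pooling M(X_i, t_i): average the features of the pixels labeled with
    # each defect type, giving one d-dimensional vector per defect type
    counts = t_i.sum(dim=(0, 1)).clamp(min=1.0)                        # [n]
    pooled = torch.einsum('hwn,hwd->nd', t_i, X_i) / counts[:, None]   # [n, d]
    return alpha * P_i + (1.0 - alpha) * pooled

# toy usage with made-up sizes
n, d, kh, kw = 5, 16, 4, 4
P_i = torch.randn(n, d)
X_i = torch.randn(kh, kw, d)
labels = torch.randint(0, n, (kh, kw))
t_i = torch.nn.functional.one_hot(labels, n).float()
P_i = ema_update_prototype(P_i, X_i, t_i)
```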
  • In some embodiments, the target global classifier includes a vector similarity determination module and a defect type probability determination module, and inputting the feature map into the target global classifier to obtain the second defect type probability corresponding to the image to be detected includes:
  • inputting the feature map into the vector similarity determination module to obtain the target similarity between the feature vector corresponding to each pixel in the image to be detected and the feature vector corresponding to each defect type; the dimension of the feature vector corresponding to each pixel in the image to be detected is equal to the dimension of the feature vector corresponding to each defect type;
  • inputting the target similarity into the defect type probability determination module to obtain the second defect type probability; the second defect type probability is obtained by the defect type probability determination module inputting the target similarity into a normalized multi-classification function.
  • In a specific implementation, in the process of inputting the feature map into the target global classifier to obtain the second defect type probability corresponding to the image to be detected, the server can input the feature map into the vector similarity determination module to obtain the target similarity between the feature vector corresponding to each pixel in the image to be detected and the feature vector corresponding to each defect type, where the two feature vectors have equal dimensions, and then input the target similarity into the defect type probability determination module to obtain the second defect type probability, which is obtained by inputting the target similarity into the normalized multi-classification function.
  • In practice, if the target global classifier is C, the feature map is X, and the second defect type probability is y_c, then y_c = softmax(f(X, C)).
  • Here f represents the similarity calculation function, which is used to determine the target similarity between the d-dimensional feature vector corresponding to each pixel in the feature map and the d-dimensional feature vector corresponding to each defect type; the similarity calculation can be any vector similarity measure, such as a direct dot product of vectors or the cosine similarity between vectors, and is not limited here; softmax is the normalized multi-classification function.
  • The size of the target global classifier C is [n, d], representing a d-dimensional feature vector for each of the n defect types; the size of y_c is [h, w, n], representing the attribution probability of each pixel for each defect type.
  • In the technical solution of this embodiment, the feature map is input into the vector similarity determination module to obtain the target similarity between the feature vector corresponding to each pixel in the image to be detected and the feature vector corresponding to each defect type, and the target similarity is input into the defect type probability determination module to obtain the second defect type probability. In this way, when the surface defect of the target object is large, the defect location can be effectively and completely located through the target global classifier, and the defect type corresponding to the target object can be accurately identified from the output second defect type probability.
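As a small illustration of y_c = softmax(f(X, C)) with a dot-product similarity f (the concrete shapes are assumptions):

```python
import torch
import torch.nn.functional as F

def global_classifier_probs(X: torch.Tensor, C: torch.Tensor) -> torch.Tensor:
    """y_c = softmax(f(X, C)) with a dot-product similarity f.

    X: [h, w, d] feature map; C: [n, d] globally shared defect-type vectors.
    Returns the second defect type probability of shape [h, w, n].
    """
    sim = torch.einsum('hwd,nd->hwn', X, C)     # f(X, C): dot-product similarity
    return F.softmax(sim, dim=-1)

h, w, d, n = 128, 128, 64, 5
y_c = global_classifier_probs(torch.randn(h, w, d), torch.randn(n, d))
```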
  • an object defect detection method is provided. Taking the method as applied to a server as an example, the description includes the following steps:
  • Step S302 Obtain the feature map corresponding to the image to be detected.
  • Step S304 Divide the feature map into regions to obtain the first sub-feature map corresponding to each pixel in the image to be detected.
  • Step S306 Input each first sub-feature map into a one-to-one corresponding first target area classifier to obtain the defect type probability corresponding to each first sub-feature map.
  • Step S308 Splice the first sub-feature maps according to the spatial order of the first sub-feature maps to obtain the spliced first sub-feature map.
  • Step S310 Determine the defect type probability corresponding to the feature map based on the defect type probability corresponding to each spliced first sub-feature map.
  • Step S312 Determine the first defect type probability corresponding to the image to be detected based on the defect type probability corresponding to the feature map.
  • Step S314 Input the feature map into the vector similarity determination module to obtain the target similarity between the feature vector corresponding to each pixel point in the image to be detected and the feature vector corresponding to each defect type.
  • Step S316 Input the target similarity to the defect type probability determination module to obtain the second defect type probability corresponding to the image to be detected.
  • Step S318 Add the first defect type probability and the second defect type probability to obtain the target attribution probability of each pixel in the image to be detected for each defect type.
  • Step S320 Determine the defect type corresponding to the target object based on the target attribution probability of each pixel point in the image to be detected for each defect type.
  • embodiments of the present application also provide an object defect detection device for implementing the above-mentioned object defect detection method.
  • The solution to the problem provided by this device is similar to the solution described for the method above; therefore, for the specific limitations in the embodiments of the one or more object defect detection devices provided below, reference may be made to the limitations of the object defect detection method above, and they will not be repeated here.
  • an object defect detection device including:
  • the acquisition module 410 is used to obtain the feature map corresponding to the image to be detected; the image to be detected is an image obtained by photographing a target object; the target object is an object that requires defect detection;
  • the dividing module 420 is used to divide the feature map into regions to obtain the first sub-feature map corresponding to each pixel in the image to be detected;
  • the first input module 430 is used to input each of the first sub-feature maps into a one-to-one corresponding first target area classifier to obtain the first defect type probability corresponding to the image to be detected; the first defect The type probability includes the attribution probability of each pixel in the image to be detected for each defect type;
  • the second input module 440 is used to input the feature map into the target global classifier to obtain the second defect type probability corresponding to the image to be detected; the second defect type probability includes the attribution probability of each pixel in the image to be detected for each of the defect types;
  • the determination module 450 is configured to determine the defect type corresponding to the target object according to the first defect type probability and the second defect type probability.
  • In some embodiments, the first input module 430 is specifically configured to: input each of the first sub-feature maps into the one-to-one corresponding first target area classifier to obtain the defect type probability corresponding to each first sub-feature map; splice the first sub-feature maps according to their spatial order to obtain the spliced first sub-feature map; determine the defect type probability corresponding to the feature map according to the defect type probability corresponding to each spliced first sub-feature map, where the defect type probability corresponding to the feature map includes the attribution probability of each pixel in the feature map for each defect type; and determine the first defect type probability corresponding to the image to be detected according to the defect type probability corresponding to the feature map.
  • In some embodiments, the determination module 450 is specifically configured to: add the first defect type probability and the second defect type probability to obtain the target defect type probability corresponding to the image to be detected, where the target defect type probability includes the target attribution probability of each pixel in the image to be detected for each defect type; and determine the defect type corresponding to the target object according to the target attribution probability of each pixel in the image to be detected for each defect type.
  • In some embodiments, the determination module 450 is specifically configured to: determine, from the target attribution probabilities of each pixel in the image to be detected for each defect type, the maximum target attribution probability corresponding to each pixel in the image to be detected; use the defect type corresponding to each maximum target attribution probability as the defect type corresponding to each pixel in the image to be detected; and determine the defect type corresponding to the target object according to the defect type corresponding to each pixel in the image to be detected.
  • In some embodiments, the device further includes: a second sub-feature map acquisition module, configured to divide the feature map into regions at least once at different scales to obtain the second sub-feature maps corresponding to the image to be detected at at least one scale, where the size of a second sub-feature map is larger than the size of the first sub-feature map and smaller than the size of the feature map; a third input module, used to input each of the second sub-feature maps into the one-to-one corresponding second target area classifier to obtain at least one third defect type probability corresponding to the image to be detected, where the at least one third defect type probability includes the attribution probability of each pixel in the image to be detected for each defect type at the corresponding scale; and an addition module, used to add the first defect type probability, the second defect type probability and the at least one third defect type probability to obtain the target defect type probability corresponding to the image to be detected.
  • In some embodiments, the device further includes: a classifier acquisition module, used to obtain a region classifier to be trained, where the region classifier to be trained includes local classifiers corresponding to different scales; a first training module, used to train the region classifier to be trained through a gradient back-propagation algorithm to obtain a trained first region classifier; and/or a second training module, used to train the region classifier to be trained through an exponential moving average algorithm to obtain a trained second region classifier; and a target area classifier determination module, used to obtain the target area classifier according to the first region classifier and/or the second region classifier, where the target area classifier includes the first target area classifier, the target global classifier and the second target area classifier.
  • In some embodiments, the target global classifier includes a vector similarity determination module and a defect type probability determination module; the second input module 440 is specifically used to: input the feature map into the vector similarity determination module to obtain the target similarity between the feature vector corresponding to each pixel in the image to be detected and the feature vector corresponding to each defect type, where the dimension of the feature vector corresponding to each pixel in the image to be detected is equal to the dimension of the feature vector corresponding to each defect type; and input the target similarity into the defect type probability determination module to obtain the second defect type probability corresponding to the image to be detected, where the second defect type probability is obtained by the defect type probability determination module inputting the target similarity into a normalized multi-classification function.
  • Each module in the above-mentioned object defect detection device can be implemented in whole or in part by software, hardware and combinations thereof.
  • Each of the above modules may be embedded in or independent of the processor of the computer device in the form of hardware, or may be stored in the memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
  • a computer device is provided.
  • the computer device may be a server, and its internal structure diagram may be as shown in Figure 5 .
  • the computer device includes a processor, a memory and a network interface connected through a system bus, where the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system, a computer program and a database, and the internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium.
  • the database of the computer device is used to store data processed during defect detection of the images to be detected.
  • the network interface of the computer device is used to communicate with external terminals through a network connection.
  • the computer program implements an object defect detection method when executed by a processor.
  • FIG. 5 is only a block diagram of a partial structure related to the solution of the present application, and does not constitute a limitation on the computer equipment to which the solution of the present application is applied.
  • a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
  • a computer device including a memory and a processor.
  • a computer program is stored in the memory.
  • when the processor executes the computer program, the steps in the above method embodiments are implemented.
  • a computer-readable storage medium is provided, with a computer program stored thereon.
  • when the computer program is executed by a processor, the steps in the above method embodiments are implemented.
  • a computer program product including a computer program that implements the steps in each of the above method embodiments when executed by a processor.
  • the user information involved includes, but is not limited to, user equipment information, user personal information, etc.; the data involved includes, but is not limited to, data used for analysis, stored data, displayed data, etc.
  • the computer program can be stored in a non-volatile computer-readable storage medium, and when executed, the computer program may include the processes of the above method embodiments.
  • Any reference to memory, database or other media used in the embodiments provided in this application may include at least one of non-volatile and volatile memory.
  • Non-volatile memory can include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, etc.
  • Volatile memory may include random access memory (Random Access Memory, RAM) or external cache memory, etc.
  • the databases involved in the various embodiments provided in this application may include at least one of a relational database and a non-relational database.
  • Non-relational databases may include blockchain-based distributed databases, etc., but are not limited thereto.
  • the processors involved in the various embodiments provided in this application may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, quantum computing-based data processing logic devices, etc., and are not limited to this.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

This application relates to an object defect detection method, comprising: acquiring a feature map corresponding to an image to be detected, where the image to be detected is an image obtained by photographing a target object and the target object is an object on which defect detection needs to be performed; dividing the feature map into regions to obtain a first sub-feature map corresponding to each pixel in the image to be detected; inputting each first sub-feature map into a one-to-one corresponding first target area classifier to obtain a first defect type probability corresponding to the image to be detected; inputting the feature map into a target global classifier to obtain a second defect type probability corresponding to the image to be detected, where the second defect type probability includes the attribution probability of each pixel in the image to be detected for each defect type; and determining the defect type corresponding to the target object according to the first defect type probability and the second defect type probability.

Description

Object defect detection method, apparatus, computer device and storage medium
Cross-reference to related applications
This application claims priority to the Chinese patent application No. 2022104018282, entitled "Object defect detection method, apparatus, computer device and storage medium", filed with the China Patent Office on April 18, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of machine vision technology, and in particular to an object defect detection method, apparatus, computer device and storage medium.
Background
Machine vision technology is an interdisciplinary field involving artificial intelligence, computer science, image processing, pattern recognition and many other areas. Machine vision uses machines instead of human eyes to make measurements and judgments: images of physical objects are acquired, processed and analyzed, and the results are ultimately applied to inspection, control and other practical applications. It is widely used for product defect detection in the manufacturing industry.
Defects that appear in products during production are often random to some extent, that is, they vary in type, shape and size, whereas the defect detection models used in traditional machine vision inspection only detect a specific product or a specific category of defects and cannot accurately identify different product defect types.
Therefore, the traditional technology suffers from poor accuracy in identifying product defects.
Summary
According to various embodiments of this application, an object defect detection method, apparatus, computer device, computer-readable storage medium and computer program product are provided.
In a first aspect, an embodiment of this application provides an object defect detection method, including:
acquiring a feature map corresponding to an image to be detected, where the image to be detected is an image obtained by photographing a target object, and the target object is an object on which defect detection needs to be performed;
dividing the feature map into regions to obtain a first sub-feature map corresponding to each pixel in the image to be detected;
inputting each of the first sub-feature maps into a one-to-one corresponding first target area classifier to obtain a first defect type probability corresponding to the image to be detected, where the first defect type probability includes the attribution probability of each pixel in the image to be detected for each defect type;
inputting the feature map into a target global classifier to obtain a second defect type probability corresponding to the image to be detected, where the second defect type probability includes the attribution probability of each pixel in the image to be detected for each of the defect types; and
determining the defect type corresponding to the target object according to the first defect type probability and the second defect type probability.
In a second aspect, an embodiment of this application provides an object defect detection apparatus, including:
an acquisition module, used to acquire a feature map corresponding to an image to be detected, where the image to be detected is an image obtained by photographing a target object, and the target object is an object on which defect detection needs to be performed;
a dividing module, used to divide the feature map into regions to obtain a first sub-feature map corresponding to each pixel in the image to be detected;
a first input module, used to input each of the first sub-feature maps into a one-to-one corresponding first target area classifier to obtain a first defect type probability corresponding to the image to be detected, where the first defect type probability includes the attribution probability of each pixel in the image to be detected for each defect type;
a second input module, used to input the feature map into a target global classifier to obtain a second defect type probability corresponding to the image to be detected, where the second defect type probability includes the attribution probability of each pixel in the image to be detected for each of the defect types; and
a determining module, used to determine the defect type corresponding to the target object according to the first defect type probability and the second defect type probability.
In a third aspect, an embodiment of this application provides a computer device, the computer device including a memory and a processor, where the memory stores a computer program and the processor, when executing the computer program, implements the following steps:
acquiring a feature map corresponding to an image to be detected, where the image to be detected is an image obtained by photographing a target object, and the target object is an object on which defect detection needs to be performed;
dividing the feature map into regions to obtain a first sub-feature map corresponding to each pixel in the image to be detected;
inputting each of the first sub-feature maps into a one-to-one corresponding first target area classifier to obtain a first defect type probability corresponding to the image to be detected, where the first defect type probability includes the attribution probability of each pixel in the image to be detected for each defect type;
inputting the feature map into a target global classifier to obtain a second defect type probability corresponding to the image to be detected, where the second defect type probability includes the attribution probability of each pixel in the image to be detected for each of the defect types; and
determining the defect type corresponding to the target object according to the first defect type probability and the second defect type probability.
In a fourth aspect, an embodiment of this application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the following steps:
acquiring a feature map corresponding to an image to be detected, where the image to be detected is an image obtained by photographing a target object, and the target object is an object on which defect detection needs to be performed;
dividing the feature map into regions to obtain a first sub-feature map corresponding to each pixel in the image to be detected;
inputting each of the first sub-feature maps into a one-to-one corresponding first target area classifier to obtain a first defect type probability corresponding to the image to be detected, where the first defect type probability includes the attribution probability of each pixel in the image to be detected for each defect type;
inputting the feature map into a target global classifier to obtain a second defect type probability corresponding to the image to be detected, where the second defect type probability includes the attribution probability of each pixel in the image to be detected for each of the defect types; and
determining the defect type corresponding to the target object according to the first defect type probability and the second defect type probability.
In a fifth aspect, an embodiment of this application provides a computer program product, the computer program product including a computer program which, when executed by a processor, implements the following steps:
acquiring a feature map corresponding to an image to be detected, where the image to be detected is an image obtained by photographing a target object, and the target object is an object on which defect detection needs to be performed;
dividing the feature map into regions to obtain a first sub-feature map corresponding to each pixel in the image to be detected;
inputting each of the first sub-feature maps into a one-to-one corresponding first target area classifier to obtain a first defect type probability corresponding to the image to be detected, where the first defect type probability includes the attribution probability of each pixel in the image to be detected for each defect type;
inputting the feature map into a target global classifier to obtain a second defect type probability corresponding to the image to be detected, where the second defect type probability includes the attribution probability of each pixel in the image to be detected for each of the defect types; and
determining the defect type corresponding to the target object according to the first defect type probability and the second defect type probability.
The above object defect detection method, apparatus, computer device, storage medium and computer program product can be applied to any deep semantic segmentation model. By inputting the first sub-feature map corresponding to each pixel in the image to be detected into the one-to-one corresponding first target area classifier, each pixel in the image to be detected has its own first target area classifier for defect type classification, so that the local detail information of the image to be detected can be fully and effectively used to identify the defect type of the target object in the image to be detected, and the first defect type probability corresponding to the image to be detected is obtained. By inputting the feature map directly into the target global classifier, the global information of the image to be detected can be fully used to identify the defect type of the target object, and the second defect type probability corresponding to the image to be detected is obtained. Finally, the defect type corresponding to the target object is determined from the first defect type probability and the second defect type probability corresponding to the image to be detected. This prevents the situation in which, when only the first defect type probability is used for defect type identification, excessive attention to the local information of the image to be detected means that a large surface defect of the target object cannot be effectively and completely located, so the defect type cannot be accurately identified; it also prevents the situation in which, when only the second defect type probability is used for defect type identification, the local information of the image to be detected cannot be fully and effectively exploited, so the defect type of the target object cannot be accurately identified. By fully combining the advantages of the two cases, the accuracy of defect type identification for the target object is improved.
The details of one or more embodiments of the invention are set forth in the accompanying drawings and the description below. Other features, objects and advantages of the invention will become apparent from the description, the drawings and the claims.
Brief Description of the Drawings
In order to explain the technical solutions of the embodiments of this application more clearly, the drawings required in the embodiments are briefly introduced below. It should be understood that the following drawings show only some embodiments of this application and should therefore not be regarded as limiting the scope; for a person of ordinary skill in the art, other related drawings can be obtained from these drawings without creative effort.
Figure 1 is a schematic flow chart of an object defect detection method in some embodiments;
Figure 2 is a schematic flow chart of the step of obtaining the first defect type probability in some embodiments;
Figure 3 is a schematic flow chart of an object defect detection method in other embodiments;
Figure 4 is a structural block diagram of an object defect detection apparatus in some embodiments;
Figure 5 is an internal structure diagram of a computer device in some embodiments.
Detailed Description
In order to make the purpose, technical solutions and advantages of this application clearer, this application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain this application and are not intended to limit it.
It should be noted that the terms "first", "second" and the like in the description and claims of this disclosure and in the above drawings are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data used in this way are interchangeable where appropriate, so that the embodiments of this disclosure described here can be implemented in an order other than those illustrated or described here. The implementations described in the following exemplary embodiments do not represent all implementations consistent with this disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of this disclosure as detailed in the appended claims.
在一些实施例中,如图1所示,提供了一种对象缺陷检测方法,该方法可以应用于任意深度语义分割模型。本实施例以该方法应用于服务器进行举例说明,可以理解的是,该方法也可以应用于终端,还可以应用于包括终端和服务器的系统,并通过终端和服务器的交互实现。本实施例中,该方法包括以下步骤:
步骤S110,获取待检测图像对应的特征图。
其中,待检测图像为对目标对象进行拍摄得到的图像。
其中,目标对象为需要进行缺陷检测的对象。
具体实现中,服务器可以获取到对需要进行缺陷检测的目标对象进行拍摄得到的待检测图像,将待检测图像输入至训练完成的深度语义分割模型中,该深度语义分割模型中的特征提取模块通过特征提取函数对待检测图像进行特征提取,输出特征图。
实际应用中，特征提取函数可以为G，若输入的待检测图像为I，特征提取模块输出的特征图为X，计算公式如下：
X=G(I)
其中，特征图X的尺寸为[h,w,d]，h表示特征图的高，w表示特征图的宽，d表示特征图的特征通道数。
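为便于理解上述特征提取过程，下面给出一段仅作示意的代码（基于PyTorch的假设性实现，其中的网络结构SimpleExtractor仅用于说明，并非本申请限定的特征提取模块）：

```python
import torch
import torch.nn as nn

class SimpleExtractor(nn.Module):
    """示意性的特征提取模块G：输入待检测图像I，输出特征图X（批量形式为[B, d, h, w]）。"""
    def __init__(self, d: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, d, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.net(image)

I = torch.randn(1, 3, 256, 256)        # 待检测图像I（示意数据）
X = SimpleExtractor(d=64)(I)           # 特征图X，形状为[1, 64, 64, 64]
print(X.shape)
```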
步骤S120,对特征图进行区域划分,得到待检测图像中各像素点对应的第一子特征图。
具体实现中，服务器可以对特征图进行区域划分，得到待检测图像中各像素点对应的第一子特征图。
实际应用中，若特征图X被划分为H*W个独立的区域，每一个区域对应的子特征图为Xi，i∈{1,2,…,H*W}，每一个Xi的尺寸为[kh,kw,d]，kh表示子特征图的高，kw表示子特征图的宽，d表示子特征图的特征通道数；则H=h/kh，W=w/kw。待检测图像中各像素点对应的第一子特征图Xi中，kh=1，kw=1。
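上述区域划分过程可用如下示意代码表示（numpy实现，函数名与参数仅为本示例的假设）；当kh=kw=1时，每个像素点即对应一个第一子特征图：

```python
import numpy as np

def split_regions(X: np.ndarray, kh: int, kw: int) -> np.ndarray:
    """把尺寸为[h, w, d]的特征图X划分为H*W个互不重叠的区域，
    每个区域Xi的尺寸为[kh, kw, d]，其中H = h // kh，W = w // kw。"""
    h, w, d = X.shape
    H, W = h // kh, w // kw
    regions = (X[:H * kh, :W * kw, :]
               .reshape(H, kh, W, kw, d)
               .swapaxes(1, 2))                 # [H, W, kh, kw, d]
    return regions.reshape(H * W, kh, kw, d)

X = np.random.rand(64, 64, 16)                   # 特征图，h=w=64，d=16
first_subs = split_regions(X, kh=1, kw=1)        # 第一子特征图：每个像素一个区域
print(first_subs.shape)                          # (4096, 1, 1, 16)
```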
步骤S130,将各第一子特征图输入至一一对应的第一目标区域分类器中,得到待检测图像对应的第一缺陷类型概率。
其中,第一缺陷类型概率包括待检测图像中各像素点针对各缺陷类型的归属概率。
其中,实际应用中,第一目标区域分类器也可以命名为区域感知原型。
具体实现中，本方案的深度语义分割模型所使用的目标分类器P的尺寸为[H*W,n,d]，n表示待预测的缺陷类型数量，即有n种缺陷类型。特征图被划分为H*W个独立区域后，目标分类器P中每一个区域独有的目标区域分类器Pi（i∈{1,2,…,H*W}）在特征图中进行缺陷类型预测的区域是一个尺寸为[kh,kw,d]的区域；其中，Pi的尺寸为[1,n,d]。
服务器将各第一子特征图输入至一一对应的第一目标区域分类器中,通过各第一目标区域分类器输出的各第一子特征图对应的缺陷类型概率,可以得到待检测图像对应的第一缺陷类型概率,该第一缺陷类型概率包括待检测图像中各像素点针对各缺陷类型的归属概率。
步骤S140,将特征图输入至目标全局分类器,得到待检测图像对应的第二缺陷类型概率。
其中,第二缺陷类型概率包括待检测图像中各像素点针对各缺陷类型的归属概率。
具体实现中,服务器可以将特征图直接输入至全局共享的目标全局分类器,得到目标全局分类器针对特征图输出的缺陷类型概率,从而可以基于该缺陷类型概率得到待检测图像对应的第二缺陷类型概率,该第二缺陷类型概率包括待检测图像中各像素点针对各缺陷类型的归属概率。
步骤S150,根据第一缺陷类型概率以及第二缺陷类型概率确定目标对象对应的缺陷类型。
具体实现中,服务器可以根据第一缺陷类型概率以及第二缺陷类型概率确定待检测图像对应的目标缺陷类型概率,该目标缺陷类型概率包括了待检测图像中各像素点针对各缺陷类型的目标归属概率,从而服务器可以基于待检测图像中各像素点针对各缺陷类型的目标归属概率,确定目标对象对应的缺陷类型。
具体的,目标对象对应的缺陷类型可以是目标对象的表面缺陷类型,如开裂、银纹、纹道、波纹、波痕和脆化等缺陷类型。
上述对象缺陷检测方法中,可以应用于任意深度语义分割模型中,通过获取待检测图像对应的特征图;其中,待检测图像为对需要进行缺陷检测的目标对象进行拍摄得到的图像;然后,对特征图进行区域划分,得到待检测图像中各像素点对应的第一子特征图;将各第一子特征图输入至一一对应的第一目标区域分类器中,得到待检测图像对应的第一缺陷类型概率;其中,第一缺陷类型概率包括待检测图像中各像素点针对各缺陷类型的归属概率;同时,将特征图输入至目标全局分类器,得到待检测图像对应的第二缺陷类型概率;其中,第二缺陷类型概率包括待检测图像中各像素点针对各缺陷类型的归属概率;根据第一缺陷类型概率以及第二缺陷类型概率确定目标对象对应的缺陷类型;如此,通过将待检测图像中各像素点对应的第一子特征图输入至一一对应的第一目标区域分类器中,使得待检测图像中每个像素点都有对应的一个第一目标区域分类器进行缺陷类型分类,从而可以充分且有效地利用待检测图像的局部细节信息对待检测图像中的目标对象进行缺陷类型识别,得到待检测图像对应的第一缺陷类型概率;通过将特征图直接输入至目标全局分类器,可以充分利用待检测图像的全局信息对目标对象进行缺陷类型识别,得到待检测图像对应的第二缺陷类型概率;最终,通过待检测图像对应的第一缺陷类型概率和第二缺陷类型概率确定目标对象对应的缺陷类型,可以防止仅使用第一缺陷类型概率进行缺陷类型识别时,由于过度关注待检测图像的局部信息,当目标对象表面缺陷较大的时候,无法有效完整地定位到缺陷位置,导致无法准确识别缺陷类型的情况发生;也可以防止仅使用第二缺陷类型概率进行缺陷类型识别时,由于无法充分且有效地利用待检测图像的局部信息,导致无法准确识别目标对象缺陷类型的情况发生;进而充分结合上述两种情况的优点,提高了对目标对象进行缺陷类型识别的精度。
在一些实施例中,如图2所示,步骤S130包括:
步骤S210，将各第一子特征图输入至一一对应的第一目标区域分类器中，得到各第一子特征图对应的缺陷类型概率。
具体实现中,服务器在将各第一子特征图输入至一一对应的第一目标区域分类器中,得到待检测图像对应的第一缺陷类型概率的过程中,服务器可以将各第一子特征图输入至一一对应的第一目标区域分类器中,首先得到各第一目标区域分类器输出的各第一子特征图对应的缺陷类型概率。
实际应用中，各子特征图对应的缺陷类型概率可以记为y_i（i为子特征图的序号），计算公式如下：
y_i=softmax(f(Pi*Xi))
其中，y_i表示子特征图（在本实施例中为第一子特征图）中各像素点针对各缺陷类型的归属概率，尺寸为[kh,kw,n]；softmax为归一化多分类函数；f表示相似度计算函数，用于确定特征图划分后的各子特征图（本实施例中为各第一子特征图）对应的d维度特征向量与各缺陷类型对应的d维度特征向量间的相似度，相似度计算方式可以为任意向量相似度计算方式，如向量直接点乘、向量间的余弦相似度等，在此不做限制。
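以下为上式的一种示意性实现（numpy示例；相似度函数f以向量点乘为例，分类器Pi按[n,d]的形式处理，均为本示例的假设）：

```python
import numpy as np

def softmax(z: np.ndarray, axis: int = -1) -> np.ndarray:
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def region_probability(Xi: np.ndarray, Pi: np.ndarray) -> np.ndarray:
    """Xi：第i个子特征图，尺寸[kh, kw, d]；Pi：对应的区域分类器，尺寸[n, d]。
    以点乘作为相似度f，返回该区域各像素点针对n种缺陷类型的归属概率，尺寸[kh, kw, n]。"""
    sim = np.einsum('hwd,nd->hwn', Xi, Pi)   # f(Pi, Xi)：逐像素与各缺陷类型向量的相似度
    return softmax(sim, axis=-1)

Xi = np.random.rand(1, 1, 16)                # 第一子特征图（kh=kw=1）
Pi = np.random.rand(5, 16)                   # 假设有5种缺陷类型，各用一个d=16维向量表示
print(region_probability(Xi, Pi).shape)      # (1, 1, 5)
```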
步骤S220,按照各第一子特征图的空间顺序,对各第一子特征图进行拼接,得到拼接后的第一子特征图。
具体实现中,服务器得到各第一目标区域分类器输出的各第一子特征图对应的缺陷类型概率后,服务器可以按照各第一子特征图的空间顺序,对各第一子特征图进行拼接,得到拼接后的第一子特征图。
步骤S230,根据各拼接后的第一子特征图对应的缺陷类型概率,确定特征图对应的缺陷类型概率。
其中,特征图对应的缺陷类型概率包括特征图中各像素点针对各缺陷类型的归属概率。
具体实现中,服务器在得到拼接后的第一子特征图后,可以根据各拼接后的第一子特征图对应的缺陷类型概率,确定特征图对应的缺陷类型概率,该特征图对应的缺陷类型概率包括了特征图中各像素点针对各缺陷类型的归属概率。
步骤S240,根据特征图对应的缺陷类型概率,确定待检测图像对应的第一缺陷类型概率。
具体实现中,由于深度语义分割模型输出的特征图与输入的原始图像(即待检测图像)宽高尺寸一致,因此,服务器在得到特征图对应的缺陷类型概率后,可以根据特征图中各像素点针对各缺陷类型的归属概率,确定待检测图像中各像素点针对各缺陷类型的归属概率,得到第一缺陷类型概率。
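按空间顺序拼接各区域的缺陷类型概率并映射回特征图（及待检测图像）尺寸的过程，可参考如下示意代码（numpy示例，函数名仅为说明而设）：

```python
import numpy as np

def stitch_regions(region_probs: np.ndarray, H: int, W: int) -> np.ndarray:
    """region_probs：各第一子特征图对应的缺陷类型概率，尺寸[H*W, kh, kw, n]。
    按空间顺序拼接回特征图尺度，得到尺寸[H*kh, W*kw, n]的缺陷类型概率。"""
    HW, kh, kw, n = region_probs.shape
    assert HW == H * W
    grid = region_probs.reshape(H, W, kh, kw, n).swapaxes(1, 2)   # [H, kh, W, kw, n]
    return grid.reshape(H * kh, W * kw, n)

probs = np.random.rand(64 * 64, 1, 1, 5)       # 每个像素区域的概率（kh=kw=1，n=5）
y_p = stitch_regions(probs, H=64, W=64)        # 第一缺陷类型概率，尺寸(64, 64, 5)
print(y_p.shape)
```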
本实施例的技术方案，通过将各第一子特征图输入至一一对应的第一目标区域分类器中，得到各第一子特征图对应的缺陷类型概率；按照各第一子特征图的空间顺序，对各第一子特征图进行拼接，得到拼接后的第一子特征图；根据各拼接后的第一子特征图对应的缺陷类型概率，确定特征图对应的缺陷类型概率；其中，特征图对应的缺陷类型概率包括特征图中各像素点针对各缺陷类型的归属概率；最后，根据特征图对应的缺陷类型概率，确定第一缺陷类型概率；如此，通过将待检测图像中各像素点对应的第一子特征图输入至一一对应的第一目标区域分类器中，使得待检测图像中每个像素点都有对应的一个第一目标区域分类器进行缺陷类型分类，从而可以充分且有效地利用待检测图像的局部细节信息对待检测图像中的目标对象进行缺陷类型识别，提高目标对象缺陷类型识别过程中的局部细节捕捉能力。
在一些实施例中,根据第一缺陷类型概率以及第二缺陷类型概率确定目标对象对应的缺陷类型,包括:
将第一缺陷类型概率与第二缺陷类型概率相加,得到待检测图像对应的目标缺陷类型概率;
根据待检测图像中各像素点针对各缺陷类型的目标归属概率,确定目标对象对应的缺陷类型。
其中,目标缺陷类型概率包括待检测图像中各像素点针对各缺陷类型的目标归属概率。
具体实现中,服务器在根据第一缺陷类型概率以及第二缺陷类型概率确定目标对象对应的缺陷类型的过程中,服务器可以将第一缺陷类型概率与第二缺陷类型概率相加,得到待检测图像对应的目标缺陷类型概率,该目标缺陷类型概率包括待检测图像中各像素点针对各缺陷类型的目标归属概率;然后,根据待检测图像中各像素点针对各缺陷类型的目标归属概率,可以确定目标对象对应的缺陷类型。
实际应用中，第一缺陷类型概率可以为y_p，第二缺陷类型概率可以为y_c，目标缺陷类型概率可以为y，则：
y=y_c+y_p
本实施例的技术方案,通过将第一缺陷类型概率与第二缺陷类型概率相加,得到待检测图像对应的目标缺陷类型概率;其中,目标缺陷类型概率包括待检测图像中各像素点针对各缺陷类型的目标归属概率;根据待检测图像中各像素点针对各缺陷类型的目标归属概率,确定目标对象对应的缺陷类型;如此,可以防止仅使用第一缺陷类型概率进行缺陷类型识别时,由于过度关注待检测图像的局部信息,当目标对象表面缺陷较大的时候,无法有效完整地定位到缺陷位置,导致无法准确识别缺陷类型的情况发生;也可以防止仅使用第二缺陷类型概率进行缺陷类型识别时,由于无法充分且有效地利用待检测图像的局部信息,导致无法准确识别目标对象缺陷类型的情况发生;进而充分结合上述两种情况的优点,提高了对目标对象进行缺陷类型识别的精度。
在一些实施例中,根据待检测图像中各像素点针对各缺陷类型的目标归属概率,确定目标对象对应的缺陷类型,包括:
在待检测图像中各像素点针对各缺陷类型的目标归属概率中,确定待检测图像中各像素点对应的最大目标归属概率;
将各最大目标归属概率对应的缺陷类型,作为待检测图像中各像素点对应的缺陷类型;
根据待检测图像中各像素点对应的缺陷类型,确定目标对象对应的缺陷类型。
具体实现中,服务器在根据待检测图像中各像素点针对各缺陷类型的目标归属概率,确定目标对象对应的缺陷类型的过程中,服务器可以在待检测图像中各像素点针对各缺陷类型的目标归属概率中,确定待检测图像中各像素点对应的最大目标归属概率,从而可以将各最大目标归属概率对应的缺陷类型,作为待检测图像中各像素点对应的缺陷类型,并根据待检测图像中各像素点对应的缺陷类型,可以确定待检测图像中组成目标对象的各像素点对应的缺陷类型,进而可以确定目标对象的缺陷类型。
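将两类概率相加并逐像素取最大目标归属概率的过程，可用如下示意代码表示（numpy示例，缺陷类型名称与数量均为假设）：

```python
import numpy as np

defect_types = ['开裂', '银纹', '纹道', '波纹', '波痕']       # 示例缺陷类型，n=5
h, w = 64, 64
y_p = np.random.rand(h, w, len(defect_types))    # 第一缺陷类型概率（区域分类器输出）
y_c = np.random.rand(h, w, len(defect_types))    # 第二缺陷类型概率（全局分类器输出）
y = y_c + y_p                                    # 目标缺陷类型概率：逐像素、逐缺陷类型相加
pixel_labels = y.argmax(axis=-1)                 # 每个像素点最大目标归属概率对应的缺陷类型索引
print(defect_types[pixel_labels[0, 0]])          # 某个像素点对应的缺陷类型
```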
本实施例的技术方案,通过在待检测图像中各像素点针对各缺陷类型的目标归属概率中,确定待检测图像中各像素点对应的最大目标归属概率;将各最大目标归属概率对应的缺陷类型,作为待检测图像中各像素点对应的缺陷类型;根据待检测图像中各像素点对应的缺陷类型,确定目标对象对应的缺陷类型;如此,通过将待检测图像中各像素点对应的最大目标归属概率所归属的缺陷类型,作为待检测图像中各像素点对应的缺陷类型;从而可以根据组成目标对象的各像素点对应的缺陷类型,准确确定目标对象对应的缺陷类型,提高了目标对象缺陷类型的识别准确度。
在一些实施例中,将第一缺陷类型概率与第二缺陷类型概率相加,得到待检测图像对应的目标缺陷类型概率之前,方法还包括:
对特征图进行至少一次不同尺度下的区域划分,得到待检测图像在至少一个尺度下对应的第二子特征图;
将各第二子特征图输入至一一对应的第二目标区域分类器中,得到待检测图像对应的至少一个第三缺陷类型概率;至少一个第三缺陷类型概率包括待检测图像中各像素点在对应的尺度下,针对各缺陷类型的归属概率;
将第一缺陷类型概率、第二缺陷类型概率与至少一个第三缺陷类型概率相加,得到目标缺陷类型概率。
其中,第二子特征图的尺寸大于第一子特征图的尺寸,且小于特征图的尺寸。
具体实现中,服务器在确定待检测图像对应的目标缺陷类型概率过程中,服务器还可以对特征图进行至少一次不同尺度下的区域划分,得到待检测图像在至少一个尺度下对应的第二子特征图,且该第二子特征图的尺寸大于第一子特征图的尺寸,且小于特征图的尺寸。
实际应用中,至少一个尺度下对应的每一个第二子特征图Xi的尺寸为[kh,kw,d],kh,kw取值可以为2,4,8等数值,该取值需与项目实际情况相关联,需大于第一子特征图的尺寸(kh=1,kw=1)且小于特征图的尺寸(kh=h,kw=w),本方案对此不做具体限制。
然后，将各第二子特征图输入至一一对应的第二目标区域分类器中，得到不同尺度下对应的各第二目标区域分类器输出的缺陷类型概率（其计算方式与第一目标区域分类器一致，即y_i=softmax(f(Pi*Xi))，其中Pi、Xi分别为对应尺度下的第二目标区域分类器及第二子特征图），
从而得到待检测图像对应的至少一个第三缺陷类型概率,上述至少一个第三缺陷类型概率包括待检测图像中各像素点在对应的尺度下,针对各缺陷类型的归属概率;最后,服务器将第一缺陷类型概率、第二缺陷类型概率与不同尺度下对应的第三缺陷类型概率相加,可以得到待检测图像对应的目标缺陷类型概率。
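多尺度概率的计算与求和可示意如下（numpy示例，复用前文示意代码中的split_regions、region_probability与stitch_regions函数，尺度取值仅为假设）：

```python
import numpy as np

def multi_scale_probability(X, classifiers, scales):
    """X：特征图[h, w, d]；classifiers[k]：尺度k下各区域的第二目标区域分类器，尺寸[H*W, n, d]；
    scales：各尺度的(kh, kw)。返回各尺度缺陷类型概率之和，尺寸[h, w, n]。"""
    h, w, _ = X.shape
    total = 0.0
    for (kh, kw), P in zip(scales, classifiers):
        regions = split_regions(X, kh, kw)                    # [H*W, kh, kw, d]
        probs = np.stack([region_probability(Xi, Pi)
                          for Xi, Pi in zip(regions, P)])     # [H*W, kh, kw, n]
        total = total + stitch_regions(probs, h // kh, w // kw)
    return total

X = np.random.rand(64, 64, 16)
P2 = np.random.rand((64 // 2) * (64 // 2), 5, 16)             # kh=kw=2尺度下的第二目标区域分类器
y3 = multi_scale_probability(X, [P2], [(2, 2)])               # 该尺度下的第三缺陷类型概率，尺寸(64, 64, 5)
# 将第一缺陷类型概率、第二缺陷类型概率与各第三缺陷类型概率相加，即可得到目标缺陷类型概率
```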
本实施例的技术方案,通过对特征图进行至少一次不同尺度下的区域划分,得到待检测图像在至少一个尺度下对应的第二子特征图;其中,第二子特征图的尺寸大于第一子特征图的尺寸,且小于特征图的尺寸;将各第二子特征图输入至一一对应的第二目标区域分类器中,得到待检测图像对应的至少一个第三缺陷类型概率;其中,至少一个第三缺陷类型概率包括待检测图像中各像素点在对应的尺度下,针对各缺陷类型的归属概率;将第一缺陷类型概率、第二缺陷类型概率与至少一个第三缺陷类型概率相加,得到目标缺陷类型概率;如此,通过对特征图进行多尺度划分,并将划分后的子特征图输入至一一对应的第二目标区域分类器中,通过各第二目标区域分类器的输出结果得到特征图在不同尺度下对应的第三缺陷类型概率,并将第三缺陷类型概率与充分利用了特征图局部信息的第一缺陷类型概率和充分利用特征图全局信息的第二缺陷类型概率相加,得到目标缺陷类型概率;从而实现了分类器的语义内容可以根据不同区域的特征图内容的变化而变化,可以充分提取和融合不同尺寸特征图上的特征信息,丰富了待检测图像中目标对象的特征细节,进而可以通过待检测图像对应的目标缺陷类型概率,准确识别目标对象的缺陷类型,增强了分类器对每张待检测图像不同区域的自适应感知能力。
在一些实施例中,获取待检测图像对应的特征图之前,方法还包括:
获取待训练的区域分类器;
通过梯度反向传播算法对待训练的区域分类器进行训练,得到训练完成的第一区域分类器;
和/或,
通过指数滑动平均算法对待训练的区域分类器进行训练,得到训练完成的第二区域分类器;根据第一区域分类器和/或第二区域分类器,得到目标区域分类器。
其中,待训练的区域分类器包括不同尺度对应的局部分类器。
其中,目标区域分类器包括第一目标区域分类器、目标全局分类器以及第二目标区域分类器。
具体实现中,服务器还可以获取特征图在不同尺度下对应的待训练的区域分类器,通过梯度反向传播算法对所述待训练的区域分类器进行训练,得到训练完成的第一区域分类器,该第一区域分类器包括了各缺陷类型对应的d维特征向量表示。
实际应用中，第一区域分类器可以记为P_i^l。将待训练的区域分类器定义为可学习的参数，让网络自行优化即可得到训练完成的第一区域分类器P_i^l；在每次训练迭代中，P_i^l通过梯度反向传播（back-propagation）进行更新。
服务器还可以通过指数滑动平均（Exponential Moving Average）算法对待训练的区域分类器进行训练，得到训练完成的第二区域分类器P_i^e，该第二区域分类器包括了各缺陷类型对应的d维特征向量表示。
实际应用中，P_i^e的更新过程如下：
P_i^e=γ*P_i^e+(1-γ)*M(Xi,ti)
其中，ti表示样本特征图中第i个[kh,kw,d]区域Xi对应的缺陷类型样本标签，ti尺寸为[kh,kw,n]，包括n个缺陷类型的独热向量（one-hot）；M表示一个特征处理函数，它的作用为通过掩膜池化（mask pooling）将Xi处理成尺寸为[1,n,d]的特征向量；M(Xi,ti)表示本次训练输出的预测值，P_i^e表示样本特征图中第i个区域对应的通过指数滑动平均得到的区域分类器；γ为滑动平均的权重，可以为0.999。
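上述指数滑动平均更新可示意如下（numpy示例；掩膜池化M以按类别求均值的方式实现，属于本示例的假设，γ取0.999）：

```python
import numpy as np

def mask_pooling(Xi: np.ndarray, ti: np.ndarray) -> np.ndarray:
    """M：将区域特征Xi（[kh, kw, d]）按缺陷类型样本标签ti（[kh, kw, n]的独热向量）
    池化为[n, d]的特征向量；某缺陷类型在该区域无像素时保持为0。"""
    kh, kw, d = Xi.shape
    n = ti.shape[-1]
    feats = Xi.reshape(-1, d)                  # [kh*kw, d]
    masks = ti.reshape(-1, n)                  # [kh*kw, n]
    counts = masks.sum(axis=0, keepdims=True)  # 每种缺陷类型的像素数，[1, n]
    pooled = masks.T @ feats                   # [n, d]
    return pooled / np.maximum(counts.T, 1)

def ema_update(Pe_i: np.ndarray, Xi: np.ndarray, ti: np.ndarray, gamma: float = 0.999):
    """按指数滑动平均更新第i个区域的第二区域分类器Pe_i（[n, d]）。"""
    return gamma * Pe_i + (1.0 - gamma) * mask_pooling(Xi, ti)

Xi = np.random.rand(4, 4, 16)                          # 样本特征图中第i个区域
ti = np.eye(5)[np.random.randint(0, 5, (4, 4))]        # 该区域各像素的缺陷类型独热标签，[4, 4, 5]
Pe_i = ema_update(np.zeros((5, 16)), Xi, ti)           # 更新后的第二区域分类器
```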
最后,服务器可以根据第一区域分类器和/或第二区域分类器,得到目标区域分类器;该目标区域分类器包括第一目标区域分类器、目标全局分类器以及第二目标区域分类器。
实际应用中，可以根据第一区域分类器P_i^l得到目标区域分类器Pi，即Pi=P_i^l；也可以根据第二区域分类器得到目标区域分类器Pi，即Pi=P_i^e；优选地，可以结合第一区域分类器和第二区域分类器得到目标区域分类器Pi，例如Pi=P_i^l+P_i^e。
实际应用中,目标区域分类器可以为目标深度语义分割模型中最后一层1x1卷积层或者是全连接层。
本实施例的技术方案,通过获取特征图在不同尺度下对应的待训练的区域分类器;通过梯度反向传播算法对待训练的区域分类器进行训练,得到训练完成的第一区域分类器;和/或,通过指数滑动平均算法对待训练的区域分类器进行训练,得到训练完成的第二区域分类器;根据第一区域分类器和/或第二区域分类器,得到目标区域分类器;如此,可以通过多种方法训练得到目标区域分类器,结合了多种方法的优点,可以提升目标区域分类器对缺陷类型识别的准确性,同时使得获取目标区域分类器的方式更加多样化。
在一些实施例中,目标全局分类器包括向量相似度确定模块和缺陷类型概率确定模块;将特征图输入至目标全局分类器,得到待检测图像对应的第二缺陷类型概率,包括:
将特征图输入至向量相似度确定模块,得到待检测图像中各像素点对应的特征向量与各缺陷类型对应的特征向量间的目标相似度;
将目标相似度输入至缺陷类型概率确定模块,得到待检测图像对应的第二缺陷类型概率。
其中,待检测图像中各像素点对应的特征向量的维度与各缺陷类型对应的特征向量的维度相等。
其中,第二缺陷类型概率为通过缺陷类型概率确定模块将目标相似度输入至归一化多分类函数得到的。
具体实现中，目标全局分类器包括向量相似度确定模块和缺陷类型概率确定模块；服务器在将特征图输入至目标全局分类器，得到待检测图像对应的第二缺陷类型概率的过程中，服务器可以将特征图输入至向量相似度确定模块，得到待检测图像中各像素点对应的特征向量与各缺陷类型对应的特征向量间的目标相似度；其中，待检测图像中各像素点对应的特征向量的维度与各缺陷类型对应的特征向量的维度相等；并通过将目标相似度输入至缺陷类型概率确定模块，得到第二缺陷类型概率；其中，第二缺陷类型概率为通过缺陷类型概率确定模块将目标相似度输入至归一化多分类函数得到的。
实际应用中，目标全局分类器可以为C，特征图可以为X，第二缺陷类型概率为y_c，则：
y_c=Softmax(f(C*X))
其中,f表示相似度计算函数,用于确定特征图中每个像素点对应的d维特征向量与各缺陷类型对应的d维特征向量间的目标相似度,相似度计算方式可以为任意向量相似度计算方式,如向量直接点乘、向量间的余弦相似度等,在此不做限制;softmax为归一化多分类函数。
其中，当将目标深度语义分割模型的最后一层1x1卷积层视为分类器时，目标全局分类器C的尺寸为[n,d]，代表n种缺陷类型中每种缺陷类型分别用一个d维的特征向量表示；y_c的尺寸为[h,w,n]，表示每个像素点针对各缺陷类型的归属概率。
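目标全局分类器的计算过程可示意如下（numpy示例；相似度函数f以余弦相似度为例，仅为一种可选实现）：

```python
import numpy as np

def global_probability(X: np.ndarray, C: np.ndarray) -> np.ndarray:
    """X：特征图[h, w, d]；C：目标全局分类器[n, d]，每种缺陷类型用一个d维向量表示。
    以余弦相似度作为f，返回第二缺陷类型概率y_c，尺寸[h, w, n]。"""
    Xn = X / (np.linalg.norm(X, axis=-1, keepdims=True) + 1e-8)
    Cn = C / (np.linalg.norm(C, axis=-1, keepdims=True) + 1e-8)
    sim = np.einsum('hwd,nd->hwn', Xn, Cn)          # 目标相似度 f(C*X)
    sim = sim - sim.max(axis=-1, keepdims=True)
    e = np.exp(sim)
    return e / e.sum(axis=-1, keepdims=True)        # softmax归一化

X = np.random.rand(64, 64, 16)
C = np.random.rand(5, 16)
y_c = global_probability(X, C)                      # (64, 64, 5)
```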
本实施例的技术方案,通过将特征图输入至向量相似度确定模块,得到待检测图像中各像素点对应的特征向量与各缺陷类型对应的特征向量间的目标相似度;将目标相似度输入至缺陷类型概率确定模块,得到第二缺陷类型概率;如此,当目标对象的表面缺陷较大时,通过目标全局分类器可以有效完整地定位缺陷位置,并通过输出的第二缺陷类型概率,准确识别出目标对象对应的缺陷类型。
在另一些实施例中,如图3所示,提供了一种对象缺陷检测方法,以该方法应用于服务器为例进行说明,包括以下步骤:
步骤S302,获取待检测图像对应的特征图。
步骤S304,对特征图进行区域划分,得到待检测图像中各像素点对应的第一子特征图。
步骤S306,将各第一子特征图输入至一一对应的第一目标区域分类器中,得到各第一子特征图对应的缺陷类型概率。
步骤S308,按照各第一子特征图的空间顺序,对各第一子特征图进行拼接,得到拼接后的第一子特征图。
步骤S310,根据各拼接后的第一子特征图对应的缺陷类型概率,确定特征图对应的缺陷类型概率。
步骤S312,根据特征图对应的缺陷类型概率,确定待检测图像对应的第一缺陷类型概率。
步骤S314,将特征图输入至向量相似度确定模块,得到待检测图像中各像素点对应的特征向量与各缺陷类型对应的特征向量间的目标相似度。
步骤S316,将目标相似度输入至缺陷类型概率确定模块,得到待检测图像对应的第二缺陷类型概率。
步骤S318,将第一缺陷类型概率与第二缺陷类型概率相加,得到待检测图像中各像素点针对各缺陷类型的目标归属概率。
步骤S320,根据待检测图像中各像素点针对各缺陷类型的目标归属概率,确定目标对象对应的缺陷类型。
需要说明的是,上述步骤的具体限定可以参见上文对一种对象缺陷检测方法的具体限定。
应该理解的是,虽然如上所述的各实施例所涉及的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,如上所述的各实施例所涉及的流程图中的至少一部分步骤可以包括多个步骤或者多个阶段,这些步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤中的步骤或者阶段 的至少一部分轮流或者交替地执行。
基于同样的发明构思,本申请实施例还提供了一种用于实现上述所涉及的对象缺陷检测方法的对象缺陷检测装置。该装置所提供的解决问题的实现方案与上述方法中所记载的实现方案相似,故下面所提供的一个或多个对象缺陷检测装置实施例中的具体限定可以参见上文中对于一种对象缺陷检测方法的限定,在此不再赘述。
在一些实施例中,如图4所示,提供了一种对象缺陷检测装置,包括:
获取模块410,用于获取待检测图像对应的特征图;所述待检测图像为对目标对象进行拍摄得到的图像;所述目标对象为需要进行缺陷检测的对象;
划分模块420,用于对所述特征图进行区域划分,得到所述待检测图像中各像素点对应的第一子特征图;
第一输入模块430,用于将各所述第一子特征图输入至一一对应的第一目标区域分类器中,得到所述待检测图像对应的第一缺陷类型概率;所述第一缺陷类型概率包括所述待检测图像中各像素点针对各缺陷类型的归属概率;
第二输入模块440,用于将所述特征图输入至目标全局分类器,得到所述待检测图像对应的第二缺陷类型概率;所述第二缺陷类型概率包括所述待检测图像中各像素点针对各所述缺陷类型的归属概率;
确定模块450,用于根据所述第一缺陷类型概率以及所述第二缺陷类型概率确定所述目标对象对应的缺陷类型。
在其中一些实施例中,所述第一输入模块430,具体用于将各所述第一子特征图输入至一一对应的所述第一目标区域分类器中,得到各所述第一子特征图对应的缺陷类型概率;按照各所述第一子特征图的空间顺序,对各所述第一子特征图进行拼接,得到拼接后的第一子特征图;根据各所述拼接后的第一子特征图对应的缺陷类型概率,确定所述特征图对应的缺陷类型概率;所述特征图对应的缺陷类型概率包括所述特征图中各像素点针对各所述缺陷类型的归属概率;根据所述特征图对应的缺陷类型概率,确定所述待检测图像对应的第一缺陷类型概率。
在其中一些实施例中,所述确定模块450,具体用于将所述第一缺陷类型概率与所述第二缺陷类型概率相加,得到所述待检测图像对应的目标缺陷类型概率;所述目标缺陷类型概率包括所述待检测图像中各像素点针对各所述缺陷类型的目标归属概率;根据所述待检测图像中各像素点针对各所述缺陷类型的目标归属概率,确定所述目标对象对应的缺陷类型。
在其中一些实施例中,所述确定模块450,具体用于在所述待检测图像中各像素点针对各所述缺陷类型的目标归属概率中,确定所述待检测图像中各像素点对应的最大目标归属概率;将各所述最大目标归属概率对应的缺陷类型,作为所述待检测图像中各像素点对应的缺陷类型;根据所述待检测图像中各像素点对应的缺陷类型,确定所述目标对象对应的缺陷类型。
在其中一些实施例中,所述装置还包括:第二子特征图获取模块,用于对所述特征图进行至少一次不同尺度下的区域划分,得到所述待检测图像在至少一个尺度下对应的第二子特征图;所述第二子特征图的尺寸大于所述第一子特征图的尺寸,且小于所述特征图的尺寸;第三输入模块,用于将各所述第二子特征图输入至一一对应的第二目标区域分类器中,得到所述待检测图像对应的至少一个第三缺陷类型概率;所述至少一个第三缺陷类型概率包括所述待检测图像中各像素点在对应的尺度下,针对各所述缺陷类型的归属概率;相加模块,用于将所述第一缺陷类型概率、所述第二缺陷类型概率与所述至少一个第三缺陷类型概率相加,得到所述待检测图像对应的目标缺陷类型概率。
在其中一些实施例中,所述装置还包括:分类器获取模块,用于获取待训练的区域分类器;所述待训练的区域分类器包括不同尺度对应的局部分类器;第一训练模块,用于通过梯度反向传播算法对所述待训练的区域分类器进行训练,得到训练完成的第一区域分类器;和/或,第二训练模块,用于通过指数滑动平均算法对所述待训练的区域分类器进行训练,得到 训练完成的第二区域分类器;目标区域分类器确定模块,用于根据所述第一区域分类器和/或所述第二区域分类器,得到目标区域分类器;所述目标区域分类器包括所述第一目标区域分类器、所述目标全局分类器以及所述第二目标区域分类器。
在其中一些实施例中,所述目标全局分类器包括向量相似度确定模块和缺陷类型概率确定模块;所述第二输入模块440,具体用于将所述特征图输入至所述向量相似度确定模块,得到所述待检测图像中各像素点对应的特征向量与各所述缺陷类型对应的特征向量间的目标相似度;其中,所述待检测图像中各像素点对应的特征向量的维度与各所述缺陷类型对应的特征向量的维度相等;将所述目标相似度输入至所述缺陷类型概率确定模块,得到所述待检测图像对应的第二缺陷类型概率;所述第二缺陷类型概率为通过所述缺陷类型概率确定模块将所述目标相似度输入至归一化多分类函数得到的。
上述一种对象缺陷检测装置中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备中的处理器中,也可以以软件形式存储于计算机设备中的存储器中,以便于处理器调用执行以上各个模块对应的操作。
在一些实施例中,提供了一种计算机设备,该计算机设备可以是服务器,其内部结构图可以如图5所示。该计算机设备包括通过系统总线连接的处理器、存储器和网络接口。其中,该计算机设备的处理器用于提供计算和控制能力。该计算机设备的存储器包括非易失性存储介质和内存储器。该非易失性存储介质存储有操作系统、计算机程序和数据库。该内存储器为非易失性存储介质中的操作系统和计算机程序的运行提供环境。该计算机设备的数据库用于存储待检测图像缺陷检测处理数据。该计算机设备的网络接口用于与外部的终端通过网络连接通信。该计算机程序被处理器执行时以实现一种对象缺陷检测方法。
本领域技术人员可以理解,图5中示出的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的计算机设备的限定,具体的计算机设备可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。
在一些实施例中,还提供了一种计算机设备,包括存储器和处理器,存储器中存储有计算机程序,该处理器执行计算机程序时实现上述各方法实施例中的步骤。
在一些实施例中,提供了一种计算机可读存储介质,其上存储有计算机程序,该计算机程序被处理器执行时实现上述各方法实施例中的步骤。
在一些实施例中,提供了一种计算机程序产品,包括计算机程序,该计算机程序被处理器执行时实现上述各方法实施例中的步骤。
需要说明的是,本申请所涉及的用户信息(包括但不限于用户设备信息、用户个人信息等)和数据(包括但不限于用于分析的数据、存储的数据、展示的数据等),均为经用户授权或者经过各方充分授权的信息和数据。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,所述的计算机程序可存储于一非易失性计算机可读取存储介质中,该计算机程序在执行时,可包括如上述各方法的实施例的流程。其中,本申请所提供的各实施例中所使用的对存储器、数据库或其它介质的任何引用,均可包括非易失性和易失性存储器中的至少一种。非易失性存储器可包括只读存储器(Read-Only Memory,ROM)、磁带、软盘、闪存、光存储器、高密度嵌入式非易失性存储器、阻变存储器(ReRAM)、磁变存储器(Magnetoresistive Random Access Memory,MRAM)、铁电存储器(Ferroelectric Random Access Memory,FRAM)、相变存储器(Phase Change Memory,PCM)、石墨烯存储器等。易失性存储器可包括随机存取存储器(Random Access Memory,RAM)或外部高速缓冲存储器等。作为说明而非局限,RAM可以是多种形式,比如静态随机存取存储器(Static Random Access Memory,SRAM)或动态随机存取存储器(Dynamic Random Access Memory,DRAM)等。本申请所提供的各实施例中所涉及的数据库可包括关系型数据库和非关系型数据库中至少一种。非关系型数据库可包括基于区块链的分布式数据库等,不限于此。本申请所提供的各实施例中所涉及的处理器可为通用处理器、中央处理器、图形处理器、数字信号 处理器、可编程逻辑器、基于量子计算的数据处理逻辑器等,不限于此。
以上实施例的各技术特征可以进行任意的组合,为使描述简洁,未对上述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾,都应当认为是本说明书记载的范围。
以上所述实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对本申请专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请的保护范围应以所附权利要求为准。

Claims (20)

  1. 一种对象缺陷检测方法,其特征在于,包括:
    获取待检测图像对应的特征图;所述待检测图像为对目标对象进行拍摄得到的图像;所述目标对象为需要进行缺陷检测的对象;
    对所述特征图进行区域划分,得到所述待检测图像中各像素点对应的第一子特征图;
    将各所述第一子特征图输入至一一对应的第一目标区域分类器中,得到所述待检测图像对应的第一缺陷类型概率;所述第一缺陷类型概率包括所述待检测图像中各像素点针对各缺陷类型的归属概率;
    将所述特征图输入至目标全局分类器,得到所述待检测图像对应的第二缺陷类型概率;所述第二缺陷类型概率包括所述待检测图像中各像素点针对各所述缺陷类型的归属概率;
    根据所述第一缺陷类型概率以及所述第二缺陷类型概率确定所述目标对象对应的缺陷类型。
  2. 根据权利要求1所述的方法,其特征在于,所述将各所述第一子特征图输入至一一对应的第一目标区域分类器中,得到所述待检测图像对应的第一缺陷类型概率,包括:
    将各所述第一子特征图输入至一一对应的第一目标区域分类器中,得到各所述第一子特征图对应的缺陷类型概率;
    按照各所述第一子特征图的空间顺序,对各所述第一子特征图进行拼接,得到拼接后的第一子特征图;
    根据各所述拼接后的第一子特征图对应的缺陷类型概率,确定所述特征图对应的缺陷类型概率;所述特征图对应的缺陷类型概率包括所述特征图中各像素点针对各所述缺陷类型的归属概率;
    根据所述特征图对应的缺陷类型概率,确定所述待检测图像对应的第一缺陷类型概率。
  3. 根据权利要求1所述的方法,其特征在于,所述根据所述第一缺陷类型概率以及所述第二缺陷类型概率确定所述目标对象对应的缺陷类型,包括:
    将所述第一缺陷类型概率与所述第二缺陷类型概率相加,得到所述待检测图像对应的目标缺陷类型概率;所述目标缺陷类型概率包括所述待检测图像中各像素点针对各所述缺陷类型的目标归属概率;
    根据所述待检测图像中各像素点针对各所述缺陷类型的目标归属概率,确定所述目标对象对应的缺陷类型。
  4. 根据权利要求3所述的方法,其特征在于,所述根据所述待检测图像中各像素点针对各所述缺陷类型的目标归属概率,确定所述目标对象对应的缺陷类型,包括:
    在所述待检测图像中各像素点针对各所述缺陷类型的目标归属概率中,确定所述待检测图像中各像素点对应的最大目标归属概率;
    将各所述最大目标归属概率对应的缺陷类型,作为所述待检测图像中各像素点对应的缺陷类型;
    根据所述待检测图像中各像素点对应的缺陷类型,确定所述目标对象对应的缺陷类型。
  5. 根据权利要求3所述的方法,其特征在于,所述将所述第一缺陷类型概率与所述第二缺陷类型概率相加,得到所述待检测图像对应的目标缺陷类型概率之前,所述方法还包括:
    对所述特征图进行至少一次不同尺度下的区域划分,得到所述待检测图像在至少一个尺度下对应的第二子特征图;所述第二子特征图的尺寸大于所述第一子特征图的尺寸,且小于所述特征图的尺寸;
    将各所述第二子特征图输入至一一对应的第二目标区域分类器中,得到所述待检测图像对应的至少一个第三缺陷类型概率;所述至少一个第三缺陷类型概率包括所述待检测图像中各像素点在对应的尺度下,针对各所述缺陷类型的归属概率;
    所述将所述第一缺陷类型概率与所述第二缺陷类型概率相加,得到所述待检测图像对应的目标缺陷类型概率,包括:
    将所述第一缺陷类型概率、所述第二缺陷类型概率与所述至少一个第三缺陷类型概率相加,得到所述待检测图像对应的目标缺陷类型概率。
  6. 根据权利要求5所述的方法,其特征在于,所述获取待检测图像对应的特征图之前,所述方法还包括:
    获取待训练的区域分类器;所述待训练的区域分类器包括不同尺度对应的局部分类器;
    通过梯度反向传播算法对所述待训练的区域分类器进行训练,得到训练完成的第一区域分类器;
    和/或,
    通过指数滑动平均算法对所述待训练的区域分类器进行训练,得到训练完成的第二区域分类器;
    根据所述第一区域分类器和/或所述第二区域分类器,得到目标区域分类器;所述目标区域分类器包括所述第一目标区域分类器、所述目标全局分类器以及所述第二目标区域分类器。
  7. 根据权利要求1所述的方法,其特征在于,所述目标全局分类器包括向量相似度确定模块和缺陷类型概率确定模块;所述将所述特征图输入至目标全局分类器,得到所述待检测图像对应的第二缺陷类型概率,包括:
    将所述特征图输入至所述向量相似度确定模块,得到所述待检测图像中各像素点对应的特征向量与各所述缺陷类型对应的特征向量间的目标相似度;其中,所述待检测图像中各像素点对应的特征向量的维度与各所述缺陷类型对应的特征向量的维度相等;
    将所述目标相似度输入至所述缺陷类型概率确定模块,得到所述待检测图像对应的第二缺陷类型概率;所述第二缺陷类型概率为通过所述缺陷类型概率确定模块将所述目标相似度输入至归一化多分类函数得到的。
  8. 根据权利要求1所述的方法,其特征在于,所述获取待检测图像对应的特征图之前,所述方法还包括:
    获取所述待检测图像;
    将所述待检测图像输入至训练完成的深度语义分割模型中的特征提取模块,得到所述待检测图像对应的特征图;其中,所述特征图为所述特征提取模块通过特征提取函数对所述待检测图像进行特征提取得到的。
  9. 根据权利要求2所述的方法,其特征在于,所述特征图与所述待检测图像的宽高尺寸相同;所述根据所述特征图对应的缺陷类型概率,确定所述待检测图像对应的第一缺陷类型概率,包括:
    根据所述特征图中各像素点针对各所述缺陷类型的归属概率,确定所述待检测图像中各像素点针对各所述缺陷类型的归属概率;
    根据所述待检测图像中各像素点针对各所述缺陷类型的归属概率,确定所述待检测图像对应的第一缺陷类型概率。
  10. 一种对象缺陷检测装置,其特征在于,包括:
    获取模块,用于获取待检测图像对应的特征图;所述待检测图像为对目标对象进行拍摄得到的图像;所述目标对象为需要进行缺陷检测的对象;
    划分模块,用于对所述特征图进行区域划分,得到所述待检测图像中各像素点对应的第一子特征图;
    第一输入模块,用于将各所述第一子特征图输入至一一对应的第一目标区域分类器中,得到所述待检测图像对应的第一缺陷类型概率;所述第一缺陷类型概率包括所述待检测图像中各像素点针对各缺陷类型的归属概率;
    第二输入模块,用于将所述特征图输入至目标全局分类器,得到所述待检测图像对应的第二缺陷类型概率;所述第二缺陷类型概率包括所述待检测图像中各像素点针对各所述缺陷类型的归属概率;
    确定模块，用于根据所述第一缺陷类型概率以及所述第二缺陷类型概率确定所述目标对象对应的缺陷类型。
  11. 根据权利要求10所述的装置,其特征在于,所述第一输入模块具体用于:
    将各所述第一子特征图输入至一一对应的第一目标区域分类器中,得到各所述第一子特征图对应的缺陷类型概率;
    按照各所述第一子特征图的空间顺序,对各所述第一子特征图进行拼接,得到拼接后的第一子特征图;
    根据各所述拼接后的第一子特征图对应的缺陷类型概率,确定所述特征图对应的缺陷类型概率;所述特征图对应的缺陷类型概率包括所述特征图中各像素点针对各所述缺陷类型的归属概率;
    根据所述特征图对应的缺陷类型概率,确定所述待检测图像对应的第一缺陷类型概率。
  12. 根据权利要求10所述的装置,其特征在于,所述确定模块具体用于:
    将所述第一缺陷类型概率与所述第二缺陷类型概率相加,得到所述待检测图像对应的目标缺陷类型概率;所述目标缺陷类型概率包括所述待检测图像中各像素点针对各所述缺陷类型的目标归属概率;
    根据所述待检测图像中各像素点针对各所述缺陷类型的目标归属概率,确定所述目标对象对应的缺陷类型。
  13. 根据权利要求12所述的装置,其特征在于,所述确定模块具体用于:
    在所述待检测图像中各像素点针对各所述缺陷类型的目标归属概率中,确定所述待检测图像中各像素点对应的最大目标归属概率;
    将各所述最大目标归属概率对应的缺陷类型,作为所述待检测图像中各像素点对应的缺陷类型;
    根据所述待检测图像中各像素点对应的缺陷类型,确定所述目标对象对应的缺陷类型。
  14. 根据权利要求12所述的装置,其特征在于,所述装置还包括:
    第二子特征图获取模块,用于对所述特征图进行至少一次不同尺度下的区域划分,得到所述待检测图像在至少一个尺度下对应的第二子特征图;所述第二子特征图的尺寸大于所述第一子特征图的尺寸,且小于所述特征图的尺寸;
    第三输入模块,用于将各所述第二子特征图输入至一一对应的第二目标区域分类器中,得到所述待检测图像对应的至少一个第三缺陷类型概率;所述至少一个第三缺陷类型概率包括所述待检测图像中各像素点在对应的尺度下,针对各所述缺陷类型的归属概率;
    所述确定模块具体用于:将所述第一缺陷类型概率、所述第二缺陷类型概率与所述至少一个第三缺陷类型概率相加,得到所述待检测图像对应的目标缺陷类型概率。
  15. 根据权利要求14所述的装置,其特征在于,所述装置还包括:
    分类器获取模块,用于获取待训练的区域分类器;所述待训练的区域分类器包括不同尺度对应的局部分类器;
    第一训练模块,用于通过梯度反向传播算法对所述待训练的区域分类器进行训练,得到训练完成的第一区域分类器;
    和/或,
    第二训练模块,用于通过指数滑动平均算法对所述待训练的区域分类器进行训练,得到训练完成的第二区域分类器;
    目标区域分类器确定模块,用于根据所述第一区域分类器和/或所述第二区域分类器,得到目标区域分类器;所述目标区域分类器包括所述第一目标区域分类器、所述目标全局分类器以及所述第二目标区域分类器。
  16. 根据权利要求10所述的装置,其特征在于,所述目标全局分类器包括向量相似度确定模块和缺陷类型概率确定模块;所述第二输入模块具体用于:
    将所述特征图输入至所述向量相似度确定模块，得到所述待检测图像中各像素点对应的特征向量与各所述缺陷类型对应的特征向量间的目标相似度；其中，所述待检测图像中各像素点对应的特征向量的维度与各所述缺陷类型对应的特征向量的维度相等；
    将所述目标相似度输入至所述缺陷类型概率确定模块,得到所述待检测图像对应的第二缺陷类型概率;所述第二缺陷类型概率为通过所述缺陷类型概率确定模块将所述目标相似度输入至归一化多分类函数得到的。
  17. 根据权利要求10所述的装置,其特征在于,所述装置还包括:
    图像获取模块,用于获取所述待检测图像;
    第四输入模块,用于将所述待检测图像输入至训练完成的深度语义分割模型中的特征提取模块,得到所述待检测图像对应的特征图;其中,所述特征图为所述特征提取模块通过特征提取函数对所述待检测图像进行特征提取得到的。
  18. 一种计算机设备,包括存储器和处理器,其特征在于,所述存储器存储有计算机程序,所述处理器执行所述计算机程序时实现权利要求1至9中任一项所述的方法的步骤。
  19. 一种计算机可读存储介质,其上存储有计算机程序,其特征在于,所述计算机程序被处理器执行时实现权利要求1至9中任一项所述的方法的步骤。
  20. 一种计算机程序产品,包括计算机程序,其特征在于,所述计算机程序被处理器执行时实现权利要求1至9中任一项所述的方法的步骤。
PCT/CN2022/108429 2022-04-18 2022-07-28 对象缺陷检测方法、装置、计算机设备和存储介质 WO2023201924A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210401828.2 2022-04-18
CN202210401828.2A CN114494260B (zh) 2022-04-18 2022-04-18 对象缺陷检测方法、装置、计算机设备和存储介质

Publications (1)

Publication Number Publication Date
WO2023201924A1 true WO2023201924A1 (zh) 2023-10-26

Family

ID=81489343

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/108429 WO2023201924A1 (zh) 2022-04-18 2022-07-28 对象缺陷检测方法、装置、计算机设备和存储介质

Country Status (2)

Country Link
CN (1) CN114494260B (zh)
WO (1) WO2023201924A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114494260B (zh) * 2022-04-18 2022-07-19 深圳思谋信息科技有限公司 对象缺陷检测方法、装置、计算机设备和存储介质
CN114782445B (zh) * 2022-06-22 2022-10-11 深圳思谋信息科技有限公司 对象缺陷检测方法、装置、计算机设备和存储介质
CN114842008B (zh) * 2022-07-04 2022-10-21 南通三信塑胶装备科技股份有限公司 基于计算机视觉的注塑件色差检测方法
CN115496976B (zh) * 2022-08-29 2023-08-11 锋睿领创(珠海)科技有限公司 多源异构数据融合的视觉处理方法、装置、设备及介质
CN115965856B (zh) * 2023-02-23 2023-05-30 深圳思谋信息科技有限公司 图像检测模型构建方法、装置、计算机设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102842131A (zh) * 2012-07-10 2012-12-26 中联重科股份有限公司 一种监测目标物体缺陷的方法及设备
CN110111297A (zh) * 2019-03-15 2019-08-09 浙江大学 一种基于迁移学习的注塑制品表面图像缺陷识别方法
US20200234420A1 (en) * 2019-01-17 2020-07-23 Beijing Boe Optoelectronics Technology Co., Ltd. Method and apparatus for detecting image defects, computing device, and computer readable storage medium
CN111612763A (zh) * 2020-05-20 2020-09-01 重庆邮电大学 手机屏幕缺陷检测方法、装置及系统、计算机设备及介质
CN114494260A (zh) * 2022-04-18 2022-05-13 深圳思谋信息科技有限公司 对象缺陷检测方法、装置、计算机设备和存储介质

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102103853B1 (ko) * 2016-09-27 2020-04-24 주식회사 히타치하이테크 결함 검사 장치 및 결함 검사 방법
CN112384946A (zh) * 2018-07-13 2021-02-19 华为技术有限公司 一种图像坏点检测方法及装置
US10825149B2 (en) * 2018-08-23 2020-11-03 Siemens Healthcare Gmbh Defective pixel correction using adversarial networks
CN111582294B (zh) * 2019-03-05 2024-02-27 慧泉智能科技(苏州)有限公司 一种构建用于表面缺陷检测的卷积神经网络模型的方法及其利用
SE1930421A1 (en) * 2019-12-30 2021-07-01 Unibap Ab Method and means for detection of imperfections in products
CN113344857B (zh) * 2021-05-13 2022-05-03 深圳市华汉伟业科技有限公司 缺陷检测网络的训练方法、缺陷检测方法和存储介质
CN113657383B (zh) * 2021-08-24 2024-05-24 凌云光技术股份有限公司 一种基于轻量化分割模型的缺陷区域检测方法及装置

Also Published As

Publication number Publication date
CN114494260B (zh) 2022-07-19
CN114494260A (zh) 2022-05-13
