CN116958113A - Product detection method, device, equipment and storage medium - Google Patents

Product detection method, device, equipment and storage medium

Info

Publication number
CN116958113A
Authority
CN
China
Prior art keywords
image
product
detection result
category
classification
Prior art date
Legal status
Pending
Application number
CN202310955758.XA
Other languages
Chinese (zh)
Inventor
唐勇 (Tang Yong)
Current Assignee
Shanghai Mingsheng Pinzhi Artificial Intelligence Technology Co., Ltd.
Original Assignee
Shanghai Mingsheng Pinzhi Artificial Intelligence Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Shanghai Mingsheng Pinzhi Artificial Intelligence Technology Co., Ltd.
Priority to CN202310955758.XA
Publication of CN116958113A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements using pattern recognition or machine learning
    • G06V 10/764: Classification, e.g. of video objects
    • G06V 10/765: Classification using rules for classification or partitioning the feature space
    • G06V 10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA], independent component analysis [ICA] or self-organising maps [SOM]; blind source separation
    • G06V 10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/809: Fusion of classification results, e.g. where the classifiers operate on the same input data
    • G06V 10/82: Arrangements using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a product detection method, a device, equipment and a storage medium, wherein the product detection method comprises the following steps: inputting a product image of a target product into a pre-trained image classification model, performing classification prediction on the detection result category to which the product image belongs through the image classification model, and outputting a coarse-granularity classification prediction result for the target product; acquiring an intermediate feature map generated by the image classification model in an intermediate stage of classifying and predicting the product image; calculating a fine-granularity classification prediction result of the target product based on the intermediate feature map and the standard image corresponding to each detection result category; and determining a final product detection result of the target product based on the coarse-granularity classification prediction result and the fine-granularity classification prediction result. In this way, fine-granularity detection of the target product is achieved without introducing an additional image segmentation model, which effectively reduces the image labeling cost of the early model training stage.

Description

Product detection method, device, equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a product detection method, apparatus, device, and storage medium.
Background
At present, when products are quality-inspected, they can be automatically identified and detected from product image data acquired by a camera. However, for products requiring fine-granularity detection (i.e., products whose final detection result depends on inspecting detail regions of the product image), existing product detection approaches generally segment the product image with an image segmentation model to obtain multiple product detail images for recognition. Compared with an image classification model, an image segmentation model requires pixel-level labeling of the sample images in the model training stage, so the existing approaches incur a large amount of image labeling work in the early training stage; the labeling cost is therefore too high to meet actual product detection requirements.
Disclosure of Invention
In view of the above, the present application aims to provide a product detection method, device, equipment and storage medium that obtain the product detection result of a target product by means of an image classification model alone. Fine-granularity detection of the target product is achieved without introducing an additional image segmentation model, which effectively reduces the image labeling cost of the early model training stage and helps to better meet real product detection requirements.
In order to make the above objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
In a first aspect, an embodiment of the present application provides a product detection method, where the product detection method includes:
inputting a product image of a target product into a pre-trained image classification model, performing classification prediction on the detection result category to which the product image belongs through the image classification model, and outputting a coarse-granularity classification prediction result for the target product;
acquiring an intermediate feature map generated by the image classification model in an intermediate stage of classifying and predicting the product image; wherein the intermediate feature map characterizes the pixel regions the image classification model computes on and attends to;
calculating a fine-granularity classification prediction result of the target product based on the intermediate feature map and the standard image corresponding to each detection result category;
and determining a final product detection result of the target product based on the coarse-granularity classification prediction result and the fine-granularity classification prediction result.
In a second aspect, an embodiment of the present application provides a product detection apparatus, including:
The first prediction module is used for inputting a product image of a target product into a pre-trained image classification model, performing classification prediction on the detection result category to which the product image belongs through the image classification model, and outputting a coarse-granularity classification prediction result for the target product;
the acquisition module is used for acquiring an intermediate feature map generated by the image classification model in an intermediate stage of classifying and predicting the product image; wherein the intermediate feature map characterizes the pixel regions the image classification model computes on and attends to;
the second prediction module is used for calculating a fine-granularity classification prediction result of the target product based on the intermediate feature map and the standard image corresponding to each detection result category;
and the result determining module is used for jointly determining a final product detection result of the target product based on the coarse-granularity classification prediction result and the fine-granularity classification prediction result.
In a third aspect, an embodiment of the present application provides a computer device, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the processor implements the steps of the product detection method described above when executing the computer program.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having a computer program stored thereon, which when executed by a processor performs the steps of the product detection method described above.
The technical scheme provided by the embodiment of the application can comprise the following beneficial effects:
the product image of the target product is input into a pre-trained image classification model, classification prediction is performed on the detection result category to which the product image belongs through the image classification model, and a coarse-granularity classification prediction result for the target product is output; an intermediate feature map generated by the image classification model in an intermediate stage of classifying and predicting the product image is acquired; a fine-granularity classification prediction result of the target product is calculated based on the intermediate feature map and the standard image corresponding to each detection result category; and a final product detection result of the target product is determined based on the coarse-granularity classification prediction result and the fine-granularity classification prediction result. In this way, fine-granularity detection of the target product is achieved without introducing an additional image segmentation model, which effectively reduces the image labeling cost of the early model training stage.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be regarded as limiting the scope; a person skilled in the art may obtain other related drawings from these drawings without inventive effort.
Fig. 1 is a schematic flow chart of a product detection method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an intermediate feature map provided by an embodiment of the present application;
FIG. 3 is a schematic flow chart of a model training method of an image classification model according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an effective thermodynamic diagram provided by an embodiment of the present application;
FIG. 5 is a flowchart of a first method for calculating a fine-grained classification prediction result according to an embodiment of the application;
FIG. 6 is a flow chart of a second method for calculating a fine-grained classification prediction result according to an embodiment of the application;
fig. 7 is a schematic structural diagram of a product detection device according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of an electronic device 800 according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments are described clearly and completely below with reference to the accompanying drawings. It should be understood that the drawings in the present application are for illustration and description only and are not intended to limit the scope of the present application; in addition, the schematic drawings are not drawn to scale. The flowcharts used in this disclosure illustrate operations implemented according to some embodiments of the present application; the operations of the flowcharts may be implemented out of order, and steps without logical dependency may be performed in reverse order or concurrently. Moreover, under the guidance of the present disclosure, those skilled in the art may add one or more other operations to the flowcharts or remove one or more operations from them.
In addition, the described embodiments are only some, but not all, embodiments of the application. The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the application, as presented in the figures, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present application.
It should be noted that the term "comprising" will be used in embodiments of the application to indicate the presence of the features stated hereafter, but not to exclude the addition of other features.
At present, when products are quality-inspected, they can be automatically identified and detected from product image data acquired by a camera. However, for products requiring fine-granularity detection (i.e., products whose final detection result depends on inspecting detail regions of the product image), existing product detection approaches generally segment the product image with an image segmentation model to obtain multiple product detail images for recognition. Compared with an image classification model, an image segmentation model requires pixel-level labeling of the sample images in the model training stage, so the existing approaches incur a large amount of image labeling work in the early training stage; the labeling cost is therefore too high to meet actual product detection requirements.
Based on the above, the embodiment of the application provides a product detection method, device, equipment and storage medium, which can obtain the product detection result of a target product by means of an image classification model without additionally introducing an image segmentation model on the basis of being capable of realizing fine-grained detection of the target product, thereby effectively reducing the labeling cost of an early model training stage for image labeling and being beneficial to better meeting the real requirements of product detection.
The following describes in detail a product detection method, device, equipment and storage medium provided by the embodiment of the application.
Referring to fig. 1, fig. 1 shows a flow chart of a product detection method according to an embodiment of the present application, where the product detection method includes steps S101 to S104; specific:
s101, inputting a product image of a target product into a pre-trained image classification model, carrying out classification prediction on a detection result category to which the product image belongs through the image classification model, and outputting to obtain a coarse-granularity classification prediction result aiming at the target product.
Here, the target product may be an industrially produced part or mechanical product, or a food item produced by food processing (such as dumplings, cakes, etc.); the embodiment of the application places no restriction on the specific product type the target product represents.
Specifically, the application mainly addresses fine-granularity detection of a target product, so in an actual application scenario a product whose detail regions need to be detected or segmented during product detection is preferably selected as the target product; for example, the target product may be a composite part in which a specific sub-area must be inspected, or a printed product in which printing details must be inspected.
It should be noted that, before executing step S101, a product image of the target product may be acquired from an image acquisition device; the image acquisition device is used for acquiring image data of the target product, and the embodiment of the application places no restriction on the specific type of the image acquisition device (for example, different types of cameras or video cameras) or on the image quality of the product image (for example, resolution and definition).
Here, it is considered that, in actual product detection, classifying and predicting the product image on a server that stores the pre-trained image classification model (i.e., the server side on which model training was completed) may introduce a larger time delay than running on the edge device that actually performs product detection (i.e., the electronic device close to the data source of the product image), which is unfavorable for product detection efficiency. Based on this, a model conversion scheme such as TensorRT (a software development kit for high-performance deep learning inference, containing an application runtime environment and a deep learning inference optimizer) or ONNX (an open file format designed for machine learning and used for storing trained models) can be adopted to build the trained image classification model into an inference engine deployed on the edge device used in actual product detection.
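As an illustration of this deployment step, the following is a minimal sketch of exporting a trained PyTorch classifier to ONNX so that an edge-side inference engine can load it; the ResNet backbone, checkpoint path, file names and input size are assumptions for the example, not details fixed by the application.

```python
import torch
import torchvision.models as models

# Hypothetical backbone and checkpoint; 5 output classes match the g1-g5 example.
model = models.resnet18(num_classes=5)
model.load_state_dict(torch.load("classifier.pt", map_location="cpu"))
model.eval()

dummy = torch.randn(1, 3, 224, 224)  # one RGB product image
torch.onnx.export(
    model, dummy, "classifier.onnx",
    input_names=["product_image"],
    output_names=["category_logits"],
    dynamic_axes={"product_image": {0: "batch"}},  # allow variable batch size
)
```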
In the embodiment of the application, the image classification model performs classification prediction on the input product image and predicts the probability that the product image belongs to each detection result category; these per-category probabilities output by the image classification model serve as the coarse-granularity classification prediction result for the target product. The specific types and number of detection result categories can be determined according to the actual product detection requirements for the target product, and the embodiment of the application places no restriction on them.
For example, when the target product is a food product and the product detection requirement is to detect the doneness of the food, the detection result categories may include five categories g1-g5, arranged in order of doneness from raw to cooked (i.e., the g1 category indicates that the food is fully raw, the g5 category indicates that the food is fully cooked, and the g2-g4 categories indicate doneness between fully raw and fully cooked). When the target product is a printed product and the product detection requirement is to detect printing definition, the detection result categories may include three categories h1-h3, where the h1 category indicates that the printing definition is unqualified (i.e., below a preset definition range), the h2 category indicates that the printing definition is qualified (i.e., within the preset definition range), and the h3 category indicates that the printing definition is excellent (i.e., above the preset definition range).
S102, obtaining an intermediate feature map generated by the image classification model in an intermediate stage of classifying and predicting the product image.
Here, since the image classification model contains multiple convolution layers in the intermediate stage of classifying and predicting the input product image, multiple intermediate feature maps corresponding to the product image exist in that stage. In the embodiment of the application, the intermediate feature map characterizes the pixel regions the image classification model computes on and attends to (i.e., the intermediate feature map shows which parts of the input product image influence the final image classification decision).
Here, in the above intermediate feature map, different pixel regions carry different labels, and these labels represent the degree of influence of a pixel region on the classification prediction result output by the image classification model (that is, the coarse-granularity classification prediction result in step S101); equivalently, the specific label carried by each pixel region determines the degree of attention the image classification model paid to that region when classifying and predicting the product image.
Specifically, in an embodiment of the application, as an optional embodiment, the intermediate feature map may be a thermodynamic diagram directly generated by the image classification model in the intermediate stage; in that case, when step S102 is performed, the thermodynamic diagram generated in the intermediate stage may be acquired directly from the image classification model as the intermediate feature map. In the thermodynamic diagram, different colors mark the degree of attention the image classification model paid to different pixel regions of the product image during classification prediction.
For example, taking pizza as the target food product, fig. 2 is a schematic diagram of an intermediate feature map provided in an embodiment of the present application for the case where the intermediate feature map is a thermodynamic diagram that can be acquired directly from the intermediate stage. As shown in fig. 2, the thermodynamic diagram 200 may be used directly as the intermediate feature map; in the thermodynamic diagram 200, the dark pixel region 201 represents a region the image classification model attended to when classifying and predicting the product image of the pizza (i.e., its degree of attention is above a preset threshold), and the light pixel region 202 represents a region the model did not attend to (i.e., its degree of attention is below the preset threshold).
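For illustration only, the sketch below shows one way such a thermodynamic diagram can be computed, assuming a torchvision ResNet classifier and a class activation map (CAM) built from the last convolutional stage; the application does not prescribe a specific heatmap construction, so this is one plausible realization.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(num_classes=5).eval()

# Capture the output of the last convolutional stage with a forward hook.
feature_maps = {}
model.layer4.register_forward_hook(lambda m, i, o: feature_maps.update(last=o))

image = torch.randn(1, 3, 224, 224)  # stand-in for the product image
with torch.no_grad():
    logits = model(image)
category = int(logits.argmax(dim=1))

# CAM: weight each feature channel by the fc weight of the predicted category.
feats = feature_maps["last"][0]                 # (C, H, W)
weights = model.fc.weight[category]             # (C,)
cam = F.relu(torch.einsum("c,chw->hw", weights, feats))
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # attention in [0, 1]
```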
And S103, calculating a fine-granularity classification prediction result of the target product based on the intermediate feature map and the standard image corresponding to each detection result category.
Here, each detection result category corresponds to a standard image (i.e., the standard image) of the product belonging to the detection result category; the definition of "standard" in the standard image may be different based on different service scenarios (i.e. different product detection requirements), and the embodiment of the present application is not limited in any way for the specific image definition of the standard image.
Specifically, as an alternative embodiment, the definition of "standard" in the standard image may be determined based on the image color level, where the standard image corresponding to one detection result category may be a standard color chart corresponding to a product belonging to the detection result category.
Specifically, as another alternative embodiment, the definition of "standard" in the standard image may also be determined based on the integrity of the product in the image, where the standard image corresponding to one detection result type may be the standard product image corresponding to the product of the detection result type (e.g., the product is displayed completely and the definition meets the requirement).
Here, in step S103, according to the actual product detection requirement, a business feature suitable for the currently detected target product may be selected from multiple preset pixel-level business features; then, in the same pixel-level business feature dimension, the probability that the intermediate feature map belongs to each detection result category is calculated from the business feature difference between the intermediate feature map and the standard image corresponding to that category, and the calculation results serve as the fine-granularity classification prediction result for the target product. Because the intermediate feature map compared against the standard images can be obtained directly from the image classification model, the embodiment of the application achieves fine-granularity detection of the target product without introducing an additional image segmentation model to segment the product details out of the product image and then compare the segmented detail images with the standard images.
S104, determining a final product detection result of the target product based on the coarse-granularity classification prediction result and the fine-granularity classification prediction result.
Here, when performing classification prediction, the image classification model attends more to salient feature regions in the product image, so the coarse-granularity classification prediction result output by the model cannot fully account for the global information of the product image. In contrast, the fine-granularity classification prediction result obtained in step S103 compares the intermediate feature map, which shows the details of the target product, with the standard image of each detection result category (equivalent to comparing against a complete standard product image), so it reflects more global information about the target product as a whole. Based on this, in the embodiment of the application, the coarse-granularity classification prediction result and the fine-granularity classification prediction result are used together to determine the final product detection result of a target product.
Specifically, as an alternative implementation, the above step S104 may be performed in the following manner from step a1 to step a 4:
and a1, respectively acquiring the first probability that the product image belongs to each detection result category from the coarse-granularity classification prediction result.
For example, taking the case that the product detection requirement of the food product is to detect the degree of ripeness of the food, the above detection result categories may include: g1-g5, 5 detection result categories; wherein g1-g5 are arranged according to the order of the degree of ripeness of the food from raw to ripeness; at this time, the coarse-granularity classification prediction result output by the image classification model is: the first probability that the product image belongs to the g1 category is 0.1, the first probability that the product image belongs to the g2 category is 0.2, the first probability that the product image belongs to the g3 category is 0.4, the first probability that the product image belongs to the g4 category is 0.2, and the first probability that the product image belongs to the g5 category is 0.1.
In the coarse-granularity classification prediction result, the sum of the first probabilities that the product image belongs to each detection result category is 1.
And a2, respectively obtaining second probabilities of the intermediate feature map belonging to each detection result category from the fine-granularity classification prediction result.
For example, taking the 5 detection result categories of G1-G5 as examples, since the fine-granularity classification prediction result is the probability that the intermediate feature map belongs to each detection result category, according to the intermediate feature map and the standard image G1 of the G1 category, the second probability that the intermediate feature map belongs to the G1 category can be calculated to be 0.2; according to the intermediate feature map and the standard image G2 of the G2 class, the second probability that the intermediate feature map belongs to the G2 class can be calculated to be 0.1; according to the intermediate feature map and the standard image G3 of the G3 class, the second probability that the intermediate feature map belongs to the G3 class can be calculated to be 0.5; according to the intermediate feature map and the standard image G4 of the G4 class, the second probability that the intermediate feature map belongs to the G4 class can be calculated to be 0.1; from the intermediate feature map and the standard image G5 of the G5 class, it can be calculated that the second probability that the intermediate feature map belongs to the G5 class is 0.1.
In the fine-granularity classification prediction result, the sum of the second probabilities that the intermediate feature map belongs to each detection result category is also 1.
And a3, calculating the product of the first probability and the second probability corresponding to the detection result category aiming at the same detection result category to obtain the final probability corresponding to the detection result category.
Taking the g1 category of the detection result categories as an example, the first probability that the product image belongs to the g1 category, obtained from the coarse-granularity classification prediction result, is 0.1, and the second probability obtained from the fine-granularity classification prediction result is 0.2; the final probability that the product image belongs to the g1 category is therefore 0.1 × 0.2 = 0.02.
And a4, determining the highest final probability and the detection result category corresponding to the highest final probability from the final probabilities corresponding to the detection result categories as the final product detection result.
For example, taking the above detection result category g1-g5 as an example, it can be obtained by calculation: the final probability of the product image belonging to the g1 category is 0.02, the final probability of the product image belonging to the g2 category is 0.02, the final probability of the product image belonging to the g3 category is 0.2, the final probability of the product image belonging to the g4 category is 0.02, and the final probability of the product image belonging to the g5 category is 0.01; from this it can be determined that the final product detection result is: the probability that the target product belongs to the g3 class is highest, and the final probability that the target product belongs to the g3 class is 0.2.
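The combination rule of steps a1 to a4 reduces to an elementwise product of the two probability tables followed by an argmax; a plain-Python sketch using the numbers from the example above:

```python
# First probabilities from the image classification model (step a1) and second
# probabilities from the intermediate-feature comparison (step a2).
coarse = {"g1": 0.1, "g2": 0.2, "g3": 0.4, "g4": 0.2, "g5": 0.1}
fine   = {"g1": 0.2, "g2": 0.1, "g3": 0.5, "g4": 0.1, "g5": 0.1}

final = {cat: coarse[cat] * fine[cat] for cat in coarse}  # step a3
best = max(final, key=final.get)                          # step a4
print(best, final[best])  # -> g3 0.2
```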
The following details are respectively given for the specific implementation process of each step in the embodiment of the present application:
for the image classification model used in the step S101, as shown in fig. 3, fig. 3 shows a flow chart of a model training method of the image classification model provided by the embodiment of the application, where the model training method includes steps S301 to S303; specific:
s301, obtaining product images of the same sample products under each detection result category as sample images, and marking the detection result categories of the sample products in each sample image to obtain classification labels of each sample image.
Here, unlike an image segmentation model, which requires pixel-level labeling of each sample image (for example, labeling that pixel region a belongs to the background, pixel region b belongs to the product body, and pixel region b1 belongs to a key detail region of the product requiring fine-granularity detection), training the image classification model only requires one label per sample image: the specific detection result category the image corresponds to. For example, if the detection result category of the sample product in sample image x1 is g1, the classification label of sample image x1 is g1.
It should be noted that, unlike a conventional image classification model, the classification label here is not a label for the entity type shown in the image (for example, a person image labeled person, an object image labeled object). Because the embodiment of the application performs product detection on a single kind of target product, classification in product detection essentially amounts to classifying different product grades of the same kind of product; accordingly, the model training data selected in the embodiment of the application are product images of the same kind of sample product under different detection result categories, rather than product images of different kinds of products.
S302, inputting each sample image into an initial classification model, performing classification prediction on the detection result category to which each sample image belongs through the initial classification model, and outputting to obtain an initial classification prediction result for each sample image.
Here, the initial classification model may be an image classification model built on a ResNet network, or an image classification model built on an EfficientNet network; the embodiment of the application places no restriction on the specific model structure of the initial classification model.
S303, adjusting model parameters of the initial classification model based on the prediction result of the initial classification of each sample image and the prediction loss between the classification labels of each sample image, and obtaining the initial classification model including the adjusted model parameters as the image classification model.
Specifically, when executing steps S302 to S303, each sample image is input into the initial classification model to obtain the model's initial classification prediction result for that image; then, according to the loss value between the classification label of each sample image (corresponding to the true detection result category of the sample product in the image) and the initial classification prediction result, the model parameters of the initial classification model are adjusted until the model converges, and the initial classification model with the adjusted model parameters is taken as the trained image classification model.
When the initial classification model is trained, the loss between the classification label and the initial classification prediction result may be calculated with a cross-entropy loss function, or with other loss functions commonly used in image classification, such as focal loss; the embodiment of the application places no restriction on this.
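A condensed sketch of the training procedure of steps S301 to S303, assuming image-level classification labels and a cross-entropy objective; the ResNet backbone, optimizer choice and data-loader wiring are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torchvision.models as models
from torch.utils.data import DataLoader

model = models.resnet18(num_classes=5)  # the initial classification model
criterion = nn.CrossEntropyLoss()       # loss between prediction and label
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train(loader: DataLoader, epochs: int = 10) -> None:
    model.train()
    for _ in range(epochs):
        for images, labels in loader:   # labels: category index 0..4 for g1..g5
            optimizer.zero_grad()
            loss = criterion(model(images), labels)  # prediction loss (S303)
            loss.backward()
            optimizer.step()            # adjust model parameters
```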
For the specific implementation process of the step S102, besides directly obtaining the thermodynamic diagram generated in the intermediate stage as an intermediate feature diagram according to the alternative embodiment given in the step S102, the thermodynamic diagram may be optimized according to the method described in the following steps b1-b2, so that the thermodynamic diagram after the optimization is used as the intermediate feature diagram, specifically:
and b1, acquiring a thermodynamic diagram generated in the middle stage from the image classification model, and removing a pixel region represented by a first color in the thermodynamic diagram to obtain an effective thermodynamic diagram composed of the rest pixel regions.
Here, the first color is used to mark a pixel area of the image classification model with a focus degree lower than a preset focus threshold; the specific value of the preset attention threshold can be set according to the actual product detection requirement, and the embodiment of the application is not limited in any way.
For example, taking the thermodynamic diagram 200 shown in fig. 2 as an example, when the intermediate feature map is the optimized image processing result obtained after the thermodynamic diagram is optimized, fig. 4 shows a schematic diagram of an effective thermodynamic diagram according to an embodiment of the present application. As shown in fig. 4, the first color corresponds to the light color in the thermodynamic diagram 200, and the pixel region represented by the first color is the light pixel region 202; after the light pixel region 202 is removed from the thermodynamic diagram 200, the remaining dark pixel region 201 constitutes the effective thermodynamic diagram.
And b2, removing outlier pixel points from the effective thermodynamic diagram in a clustering mode to obtain a residual main pixel area serving as the intermediate feature diagram.
Here, the available clustering algorithm is not unique, for example, a distance-based clustering algorithm may be used to cluster the pixels in the effective thermodynamic diagram; a density-based clustering algorithm can also be adopted to cluster the pixel points in the effective thermodynamic diagram; the embodiment of the present application is not limited in any way with respect to the specific type of clustering algorithm used in the above step b 2.
Specifically, the outlier pixel points are pixels in the clustered effective thermodynamic diagram whose distance from the cluster center of every cluster exceeds a preset distance threshold; the specific value of the preset distance threshold can be set according to the actual product detection requirement, and the embodiment of the application places no restriction on it.
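Steps b1 and b2 can be sketched as a threshold on the normalized heatmap followed by a clustering pass; since the clustering algorithm is left open, DBSCAN (whose noise points stand in for the outlier pixels) and all numeric parameters below are assumptions:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def refine_heatmap(cam: np.ndarray, attn_thresh: float = 0.5) -> np.ndarray:
    """cam: heatmap normalized to [0, 1]; returns a boolean main-region mask."""
    mask = cam >= attn_thresh              # b1: drop low-attention (first-color) pixels
    coords = np.argwhere(mask)             # (N, 2) coordinates of remaining pixels
    if len(coords) == 0:
        return mask
    labels = DBSCAN(eps=3, min_samples=5).fit_predict(coords)
    keep = coords[labels != -1]            # b2: DBSCAN labels outlier points as -1
    refined = np.zeros_like(mask)
    refined[keep[:, 0], keep[:, 1]] = True
    return refined                         # main pixel region = intermediate feature map
```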
For the specific implementation of step S103, and in view of the description of the standard image in step S103, in an actual product detection scenario standard images can be briefly divided into two types according to how "standard" is defined (by image color level, or by the integrity of the product in the image): standard color charts and standard product images. For these different types of standard images, different types of business features (all pixel-level business features) can be selected, and the fine-granularity classification prediction result can be calculated in different ways.
Specifically, when the standard image belongs to the standard color chart, as shown in fig. 5, fig. 5 shows a flow chart of a first method for calculating a fine-granularity classification prediction result according to an embodiment of the present application, and when step S103 is executed, the method includes steps S501-S504; specific:
s501, under the first service feature dimension of the pixel level, respectively calculating the first service feature of the intermediate feature map and the first service feature of each standard image.
Here, the first business feature characterizes the mean or variance of an image; that is, when the standard image is a standard color chart defined by image color level, the standard image of each detection result category corresponds to one mean (or variance). After the mean (or variance) of the intermediate feature map (e.g., the thermodynamic diagram) of the current target product is calculated, comparing it against the mean (or variance) of each standard color chart determines the closest detection result category for the current target product, namely the one with the smallest difference.
S502, calculating a characteristic difference value between the first service characteristic of the intermediate characteristic diagram and the first service characteristic of each standard image, and adding each calculated characteristic difference value to obtain a total characteristic difference value.
For example, taking the detection result category including the g1-g5 category as an example, if the first service feature represents the average value of the image, it may be calculated that the feature difference between the average value of the image of the intermediate feature image and the average value of the image of the standard image of the g1 category is c1, the feature difference between the average value of the image of the standard image of the g2 category is c2, the feature difference between the average value of the image of the standard image of the g3 category is c3, the feature difference between the average value of the image of the standard image of the g4 category is c4, and the feature difference between the average value of the image of the standard image of the g5 category is c5; at this time, the total feature difference may be obtained as: c1+c2+c3+c4+c5.
S503, calculating the percentage of the feature difference value between the first business feature of the intermediate feature map and the first business feature of the standard image of the detection result category in the total feature difference value as the probability that the intermediate feature map belongs to the detection result category.
For example, taking the g1 category in the above detection result categories as an example, if the feature difference between the image mean of the intermediate feature map and the image mean of the g1-category standard image is c1, the probability that the intermediate feature map belongs to the g1 category is the proportion of c1 in the total feature difference: c1/(c1+c2+c3+c4+c5).
And S504, taking the calculated probability that the intermediate feature map belongs to each detection result category as a fine-grained classification prediction result of the target product.
Here, the specific embodiment of step S504 is the same as that of step S103, and the repetition is not repeated here.
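A sketch of steps S501 to S504 with the image mean as the first business feature, following the percentage formula above literally; note that this weighting grows with the difference, so a deployment that wants the smallest-difference category to score highest (as the discussion of step S501 suggests) would invert the weights:

```python
import numpy as np

def fine_grained_by_mean(intermediate: np.ndarray,
                         standards: dict[str, np.ndarray]) -> dict[str, float]:
    feat = float(intermediate.mean())                       # S501: first business feature
    diffs = {cat: abs(feat - float(img.mean()))
             for cat, img in standards.items()}
    total = sum(diffs.values())                             # S502: total feature difference
    return {cat: d / total for cat, d in diffs.items()}     # S503-S504: percentages
```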
Specifically, when the standard image belongs to the standard product image, as shown in fig. 6, fig. 6 shows a flowchart of a second method for calculating the fine-granularity classification prediction result according to the embodiment of the present application, and when step S103 is executed, the method includes steps S601-S603; specific:
s601, respectively calculating the second business feature of the intermediate feature map and the second business feature of each standard image under the second business feature dimension of the pixel level.
Here, the second traffic feature characterizes a gray histogram or a color histogram of the image; the color histogram may be an RGB color histogram or an HSV color histogram; the embodiment of the present application is not limited in any way as to the specific type of color histogram.
Specifically, when the standard image is a standard product image defined by the integrity of the product in the image, the standard image of each detection result category corresponds to one histogram (a gray-level histogram or a color histogram). After the histogram of the intermediate feature map (e.g., the thermodynamic diagram) of the current target product is calculated, comparing it against the histogram of each standard image determines the closest detection result category for the current target product, namely the one with the highest histogram similarity.
S602, calculating the similarity between the second business feature of the intermediate feature map and the second business feature of the standard image of the detection result category as the probability that the intermediate feature map belongs to the detection result category.
Specifically, as an alternative embodiment, a cosine similarity calculation manner may be used to calculate the similarity between the second service feature of the intermediate feature map and the second service feature of the standard image of the detection result class.
Specifically, as another alternative embodiment, a calculation manner of the distance similarity (for example, calculating the euclidean distance or the mahalanobis distance) may also be used to calculate the similarity between the second service feature of the intermediate feature map and the second service feature of the standard image of the detection result class.
Based on this, the embodiment of the present application is not limited in any way with respect to the specific similarity calculation method adopted in step S602.
And S603, taking the calculated probability that the intermediate feature map belongs to each detection result category as a fine-grained classification prediction result of the target product.
Here, the specific embodiment of step S603 is the same as that of step S103, and the repetition is not repeated here.
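A sketch of steps S601 to S603 using an RGB color histogram as the second business feature and cosine similarity as the comparison; the similarities are normalized so the second probabilities sum to 1, consistent with the earlier statement, and the bin count is an assumption:

```python
import numpy as np

def color_histogram(img: np.ndarray, bins: int = 16) -> np.ndarray:
    # img: (H, W, 3) uint8; concatenate the per-channel histograms (S601)
    hists = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / (h.sum() + 1e-8)

def fine_grained_by_histogram(intermediate: np.ndarray,
                              standards: dict[str, np.ndarray]) -> dict[str, float]:
    q = color_histogram(intermediate)
    sims = {}
    for cat, img in standards.items():     # S602: cosine similarity per category
        s = color_histogram(img)
        sims[cat] = float(q @ s / (np.linalg.norm(q) * np.linalg.norm(s) + 1e-8))
    total = sum(sims.values())
    return {cat: v / total for cat, v in sims.items()}  # S603: probabilities sum to 1
```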
Based on the product detection method provided by the embodiment of the application, the product image of the target product is input into a pre-trained image classification model, classification prediction is performed on the detection result category to which the product image belongs through the image classification model, and a coarse-granularity classification prediction result for the target product is output; an intermediate feature map generated by the image classification model in an intermediate stage of classifying and predicting the product image is acquired; a fine-granularity classification prediction result of the target product is calculated based on the intermediate feature map and the standard image corresponding to each detection result category; and a final product detection result of the target product is determined based on the coarse-granularity classification prediction result and the fine-granularity classification prediction result. In this way, fine-granularity detection of the target product is achieved without introducing an additional image segmentation model, which effectively reduces the image labeling cost of the early model training stage and helps to better meet real product detection requirements.
Based on the same inventive concept, the application also provides a product detection device corresponding to the product detection method, and because the principle of the product detection device in the embodiment of the application for solving the problem is similar to that of the product detection method in the embodiment of the application, the implementation of the product detection device can refer to the implementation of the product detection method, and the repetition is omitted herein.
Referring to fig. 7, fig. 7 shows a schematic structural diagram of a product detection device according to an embodiment of the present application, where the product detection device includes:
the first prediction module 701 is configured to input a product image of a target product into a pre-trained image classification model, perform classification prediction on a detection result category to which the product image belongs through the image classification model, and output a coarse-granularity classification prediction result for the target product;
an obtaining module 702, configured to obtain an intermediate feature map generated by the image classification model in an intermediate stage of classifying and predicting the product image; wherein the intermediate feature map characterizes the pixel regions the image classification model computes on and attends to;
a second prediction module 703, configured to calculate a fine-granularity classification prediction result of the target product based on the intermediate feature map and the standard image corresponding to each detection result category;
the result determining module 704 is configured to determine a final product detection result of the target product based on the coarse-granularity classification prediction result and the fine-granularity classification prediction result.
In an alternative embodiment, when the acquiring the intermediate feature map generated by the image classification model in the intermediate stage of classifying and predicting the product image, the acquiring module 702 is configured to:
Acquiring a thermodynamic diagram generated in the intermediate stage from the image classification model as the intermediate feature map; wherein different colors in the thermodynamic diagram represent the degree of attention the image classification model paid to different pixel regions.
In an alternative embodiment, when the acquiring the intermediate feature map generated by the image classification model in the intermediate stage of classifying and predicting the product image, the acquiring module 702 is configured to:
obtaining a thermodynamic diagram generated in the middle stage from the image classification model, and removing pixel areas represented by a first color from the thermodynamic diagram to obtain an effective thermodynamic diagram composed of the residual pixel areas; the first color is used for marking a pixel area, the attention of which is lower than a preset attention threshold, of the image classification model;
removing outlier pixel points from the effective thermodynamic diagram in a clustering mode to obtain the remaining main pixel region as the intermediate feature map; wherein the outlier pixel points are pixels in the clustered effective thermodynamic diagram whose distance from the cluster center of every cluster exceeds a preset distance threshold.
In an alternative embodiment, when calculating the fine-granularity classification prediction result of the target product based on the intermediate feature map and the standard image corresponding to each detection result category, the second prediction module 703 is configured to:
Respectively calculating the first business feature of the intermediate feature map and the first business feature of each standard image under the first business feature dimension of the pixel level; wherein the first business feature characterizes the mean or variance of the image;
calculating a characteristic difference value between the first service characteristic of the intermediate characteristic diagram and the first service characteristic of each standard image, and adding each calculated characteristic difference value to obtain a total characteristic difference value;
for each detection result category, calculating the percentage of the feature difference value between the first business feature of the intermediate feature map and the first business feature of the standard image of the detection result category in the total feature difference value as the probability that the intermediate feature map belongs to the detection result category;
and taking the calculated probability that the intermediate feature map belongs to each detection result category as a fine-granularity classification prediction result of the target product.
In an alternative embodiment, when calculating the fine-granularity classification prediction result of the target product based on the intermediate feature map and the standard image corresponding to each detection result category, the second prediction module 703 is configured to:
Respectively calculating the second business feature of the intermediate feature map and the second business feature of each standard image under the second business feature dimension of the pixel level; wherein the second business feature characterizes a gray histogram or a color histogram of the image;
for each detection result category, calculating the similarity between the second business feature of the intermediate feature map and the second business feature of the standard image of the detection result category as the probability that the intermediate feature map belongs to the detection result category;
and taking the calculated probability that the intermediate feature map belongs to each detection result category as a fine-granularity classification prediction result of the target product.
In an alternative embodiment, when determining the final product detection result of the target product based on the coarse-grained classification prediction result and the fine-grained classification prediction result, the result determining module 704 is configured to:
obtain, from the coarse-grained classification prediction result, the first probability that the product image belongs to each detection result category;
obtain, from the fine-grained classification prediction result, the second probability that the intermediate feature map belongs to each detection result category;
for each detection result category, calculate the product of the first probability and the second probability corresponding to the detection result category, to obtain the final probability corresponding to the detection result category;
and determine, from the final probabilities corresponding to the detection result categories, the highest final probability and the detection result category corresponding to the highest final probability as the final product detection result.
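The fusion step reduces to a per-category product of the two probabilities followed by an argmax. A minimal sketch with hypothetical names:

```python
def fuse_predictions(coarse_probs, fine_probs):
    """coarse_probs / fine_probs: dicts mapping detection result category -> probability."""
    # Final probability per category is the product of the coarse and fine probabilities.
    final = {cat: coarse_probs[cat] * fine_probs[cat] for cat in coarse_probs}
    # The category with the highest final probability is the final product detection result.
    best = max(final, key=final.get)
    return best, final[best]
```

For example, coarse probabilities {"ok": 0.6, "defect": 0.4} and fine probabilities {"ok": 0.3, "defect": 0.7} give final scores {"ok": 0.18, "defect": 0.28}, so "defect" is returned.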
In an alternative embodiment, the first prediction module 701 is configured to train the image classification model by:
obtaining product images of the same sample product under each detection result category as sample images, and labeling the detection result category of the sample product in each sample image to obtain a classification label for each sample image;
inputting each sample image into an initial classification model, performing classification prediction of the detection result category to which each sample image belongs through the initial classification model, and outputting an initial classification prediction result for each sample image;
and adjusting the model parameters of the initial classification model based on the prediction loss between the initial classification prediction result and the classification label of each sample image, and taking the initial classification model with the adjusted model parameters as the image classification model.
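The training procedure is standard supervised classification. The following sketch assumes PyTorch and a cross-entropy prediction loss, neither of which the embodiment mandates:

```python
import torch
import torch.nn as nn

def train_image_classifier(model, loader, epochs=10, lr=1e-3):
    """Minimal supervised training sketch; loader yields (sample_images, labels) batches."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()  # prediction loss between prediction and classification label
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            logits = model(images)          # initial classification prediction result
            loss = loss_fn(logits, labels)  # loss against the classification label
            loss.backward()
            optimizer.step()                # adjust the model parameters
    return model  # the adjusted model serves as the image classification model
```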
Based on the same inventive concept, as shown in fig. 8, an embodiment of the present application provides an electronic device 800 for performing the product detection method of the present application. The device includes a memory 801, a processor 802, and a computer program stored in the memory 801 and executable on the processor 802, and the processor 802 implements the steps of the product detection method when executing the computer program.
Specifically, the memory 801 and the processor 802 may be a general-purpose memory and a general-purpose processor, which are not specifically limited here; the product detection method is executed when the processor 802 runs the computer program stored in the memory 801.
Corresponding to the product detection method of the present application, an embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; the computer program, when executed by a processor, performs the steps of the product detection method described above.
Specifically, the storage medium may be a general-purpose storage medium, such as a removable disk or a hard disk; when the computer program on the storage medium is run, the above product detection method is executed.
In the embodiments provided herein, it should be understood that the disclosed systems and methods may be implemented in other ways. The system embodiments described above are merely illustrative; for example, the division of the units is merely a logical functional division, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be implemented through communication interfaces, and the indirect couplings or communication connections between systems or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments provided in the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that like reference numerals and letters denote like items in the figures; once an item is defined in one figure, it need not be further defined or explained in subsequent figures. Furthermore, the terms "first", "second", "third", and the like are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above embodiments are only specific implementations of the present application, intended to illustrate rather than limit its technical solutions, and the protection scope of the present application is not limited thereto. Although the present application is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the art may still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions of some technical features within the technical scope disclosed in the present application; such modifications, changes, or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments, and shall all be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A product detection method, the method comprising:
inputting a product image of a target product into a pre-trained image classification model, performing classification prediction of the detection result category to which the product image belongs through the image classification model, and outputting a coarse-grained classification prediction result for the target product;
acquiring an intermediate feature map generated by the image classification model at an intermediate stage of the classification prediction of the product image, wherein the intermediate feature map characterizes the pixel regions attended to by the image classification model during computation;
calculating a fine-grained classification prediction result of the target product based on the intermediate feature map and the standard image corresponding to each detection result category;
and determining a final product detection result of the target product based on the coarse-grained classification prediction result and the fine-grained classification prediction result.
2. The product detection method of claim 1, wherein the acquiring of the intermediate feature map generated by the image classification model at an intermediate stage of the classification prediction of the product image comprises:
acquiring, from the image classification model, a heat map generated at the intermediate stage as the intermediate feature map, wherein different colors in the heat map represent the image classification model's degrees of attention to different pixel regions.
3. The product detection method of claim 1, wherein the acquiring of the intermediate feature map generated by the image classification model at an intermediate stage of the classification prediction of the product image comprises:
acquiring, from the image classification model, a heat map generated at the intermediate stage, and removing the pixel regions marked by a first color from the heat map to obtain an effective heat map composed of the remaining pixel regions, wherein the first color marks the pixel regions for which the attention of the image classification model is below a preset attention threshold;
removing outlier pixel points from the effective heat map by clustering, and taking the remaining main pixel region as the intermediate feature map, wherein the outlier pixel points are the pixel points in the clustered effective heat map whose distance to the cluster center of their cluster exceeds a preset distance threshold.
4. The product detection method of claim 1, wherein the calculating of the fine-grained classification prediction result of the target product based on the intermediate feature map and the standard image corresponding to each detection result category comprises:
calculating, in a pixel-level first business feature dimension, the first business feature of the intermediate feature map and the first business feature of each standard image, wherein the first business feature characterizes the mean or the variance of an image;
calculating the feature difference between the first business feature of the intermediate feature map and the first business feature of each standard image, and summing the calculated feature differences to obtain a total feature difference;
for each detection result category, calculating the percentage of the total feature difference accounted for by the feature difference between the first business feature of the intermediate feature map and the first business feature of the standard image of the detection result category, as the probability that the intermediate feature map belongs to the detection result category;
and taking the calculated probabilities that the intermediate feature map belongs to the detection result categories as the fine-grained classification prediction result of the target product.
5. The product detection method of claim 1, wherein the calculating of the fine-grained classification prediction result of the target product based on the intermediate feature map and the standard image corresponding to each detection result category comprises:
calculating, in a pixel-level second business feature dimension, the second business feature of the intermediate feature map and the second business feature of each standard image, wherein the second business feature characterizes the gray histogram or the color histogram of an image;
for each detection result category, calculating the similarity between the second business feature of the intermediate feature map and the second business feature of the standard image of the detection result category, as the probability that the intermediate feature map belongs to the detection result category;
and taking the calculated probabilities that the intermediate feature map belongs to the detection result categories as the fine-grained classification prediction result of the target product.
6. The product detection method of claim 1, wherein the determining of the final product detection result of the target product based on the coarse-grained classification prediction result and the fine-grained classification prediction result comprises:
acquiring, from the coarse-grained classification prediction result, the first probability that the product image belongs to each detection result category;
acquiring, from the fine-grained classification prediction result, the second probability that the intermediate feature map belongs to each detection result category;
for each detection result category, calculating the product of the first probability and the second probability corresponding to the detection result category, to obtain the final probability corresponding to the detection result category;
and determining, from the final probabilities corresponding to the detection result categories, the highest final probability and the detection result category corresponding to the highest final probability as the final product detection result.
7. The product detection method of claim 1, wherein the image classification model is trained by:
obtaining product images of the same sample product under each detection result category as sample images, and labeling the detection result category of the sample product in each sample image to obtain a classification label for each sample image;
inputting each sample image into an initial classification model, performing classification prediction of the detection result category to which each sample image belongs through the initial classification model, and outputting an initial classification prediction result for each sample image;
and adjusting the model parameters of the initial classification model based on the prediction loss between the initial classification prediction result and the classification label of each sample image, and taking the initial classification model with the adjusted model parameters as the image classification model.
8. A product detection device, comprising:
a first prediction module, configured to input a product image of a target product into a pre-trained image classification model, perform classification prediction of the detection result category to which the product image belongs through the image classification model, and output a coarse-grained classification prediction result for the target product;
an acquisition module, configured to acquire an intermediate feature map generated by the image classification model at an intermediate stage of the classification prediction of the product image, wherein the intermediate feature map characterizes the pixel regions attended to by the image classification model during computation;
a second prediction module, configured to calculate a fine-grained classification prediction result of the target product based on the intermediate feature map and the standard image corresponding to each detection result category;
and a result determining module, configured to determine a final product detection result of the target product jointly based on the coarse-grained classification prediction result and the fine-grained classification prediction result.
9. An electronic device, comprising a processor, a memory, and a bus, the memory storing machine-readable instructions executable by the processor; when the electronic device runs, the processor communicates with the memory via the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the product detection method according to any one of claims 1 to 7.
10. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, performs the steps of the product detection method according to any one of claims 1 to 7.
CN202310955758.XA 2023-07-31 2023-07-31 Product detection method, device, equipment and storage medium Pending CN116958113A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310955758.XA CN116958113A (en) 2023-07-31 2023-07-31 Product detection method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310955758.XA CN116958113A (en) 2023-07-31 2023-07-31 Product detection method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116958113A true CN116958113A (en) 2023-10-27

Family

ID=88444362

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310955758.XA Pending CN116958113A (en) 2023-07-31 2023-07-31 Product detection method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116958113A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117933828A (en) * 2024-03-20 2024-04-26 上海强华实业股份有限公司 Closed loop quality feedback and process parameter self-adaptive adjustment method for fine burning process


Similar Documents

Publication Publication Date Title
CN109584248B (en) Infrared target instance segmentation method based on feature fusion and dense connection network
Sajid et al. Universal multimode background subtraction
CN109684922B (en) Multi-model finished dish identification method based on convolutional neural network
AU2006252252B2 (en) Image processing method and apparatus
EP1168247A2 (en) Method for varying an image processing path based on image emphasis and appeal
US9305208B2 (en) System and method for recognizing offensive images
CN102576461A (en) Estimating aesthetic quality of digital images
GB2595558A (en) Exposure defects classification of images using a neural network
CN111723693A (en) Crowd counting method based on small sample learning
CN110580499B (en) Deep learning target detection method and system based on crowdsourcing repeated labels
CN116958113A (en) Product detection method, device, equipment and storage medium
CN117011563B (en) Road damage inspection cross-domain detection method and system based on semi-supervised federal learning
CN112149476A (en) Target detection method, device, equipment and storage medium
CN111274964B (en) Detection method for analyzing water surface pollutants based on visual saliency of unmanned aerial vehicle
CN113706523A (en) Method for monitoring belt deviation and abnormal operation state based on artificial intelligence technology
Cheng et al. A hierarchical airlight estimation method for image fog removal
CN116343008A (en) Glaucoma recognition training method and training device based on multiple features
Wang et al. Local defect detection and print quality assessment
CN114882204A (en) Automatic ship name recognition method
CN108769543B (en) Method and device for determining exposure time
Vasamsetti et al. 3D local spatio-temporal ternary patterns for moving object detection in complex scenes
CN110415816B (en) Skin disease clinical image multi-classification method based on transfer learning
CN111754491A (en) Picture definition judging method and device
CN116612355A (en) Training method and device for face fake recognition model, face recognition method and device
CN114677670B (en) Method for automatically identifying and positioning identity card tampering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination