CN111582359B - Image identification method and device, electronic equipment and medium - Google Patents

Info

Publication number
CN111582359B
CN111582359B (application CN202010368651.1A)
Authority
CN
China
Prior art keywords
image
identified
product
recognized
image block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010368651.1A
Other languages
Chinese (zh)
Other versions
CN111582359A (en)
Inventor
钟宇
徐燕
刘德祥
王宏强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xinjiang Uygur Autonomous Region Tobacco Co
Original Assignee
Xinjiang Uygur Autonomous Region Tobacco Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xinjiang Uygur Autonomous Region Tobacco Co filed Critical Xinjiang Uygur Autonomous Region Tobacco Co
Priority claimed from application CN202010368651.1A
Publication of CN111582359A
Application granted
Publication of CN111582359B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/018Certifying business or products
    • G06Q30/0185Product, service or business identity fraud
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour


Abstract

Embodiments of the invention provide an image identification method and device, an electronic apparatus, and a medium, relate to the technical field of computer vision, and can improve the accuracy of identifying the authenticity of a product. The technical scheme of the embodiments comprises the following steps: acquiring an image to be identified, where the image to be identified is an image of a product to be identified; extracting the feature vector of the image to be identified; and inputting the feature vector into a classification model to obtain the authenticity of the product to be identified output by the classification model. The classification model is obtained by training a neural network model on a sample image set, where the sample image set comprises positive sample images and negative sample images, a positive sample image is an image of a genuine instance of the product to be identified, and a negative sample image is an image of a counterfeit of the product to be identified.

Description

Image identification method and device, electronic equipment and medium
Technical Field
The present invention relates to the field of computer vision technologies, and in particular, to an image recognition method, an image recognition device, an electronic apparatus, and a medium.
Background
At present, a large number of imitations of genuine products exist on the market. These imitations have not passed the normal product safety inspections, and their quality is difficult to guarantee. Imitations of food, medicine, cosmetics and the like in particular not only cause property losses for consumers but also endanger their health. Identifying the authenticity of products is therefore especially important.
In the prior art, the authenticity of a product is mainly judged by manually comparing the processing technology, printing technology and the like of a test sample against those of a genuine product. However, this approach depends on the experience of the inspectors, and different inspectors have different sensitivities to the color, size and other attributes of a product, so the judgment is highly subjective. Manual judgment of product authenticity therefore has low accuracy.
Disclosure of Invention
The embodiment of the invention aims to provide an image identification method, an image identification device, electronic equipment and a medium, so as to improve the accuracy of identifying the authenticity of a product. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides an image recognition method, where the method includes:
acquiring an image to be identified, wherein the image to be identified is an image of a product to be identified;
extracting a feature vector of the image to be recognized;
inputting the feature vector into a classification model, and obtaining the authenticity of the product to be recognized output by the classification model, wherein the classification model is obtained by training a neural network model based on a sample image set, the sample image set comprises a positive sample image and a negative sample image, the positive sample image is a genuine product image of the product to be recognized, and the negative sample image is a counterfeit product image of the product to be recognized.
Optionally, the classification model is obtained by training through the following steps:
step one, obtaining the sample image set, wherein the sample images in the sample image set have the same size and the product to be identified occupies the same position in each sample image;
step two, acquiring a plurality of image block training sets, wherein each image block training set comprises the image blocks of the same area in each sample image and the authenticity of the product corresponding to each image block;
step three, aiming at each image block training set, training an initial classifier through the image block training set to obtain a classifier corresponding to the image block training set;
step four, inputting the feature vectors of the image blocks included in the image block training set into a classifier corresponding to the image block training set aiming at each image block training set, and determining the accuracy score of the classifier based on the classification result of each image block output by the classifier, wherein the classification result is used for representing the authenticity of a product to be identified in the image block;
step five, aiming at each image block training set, if the accuracy score of the classifier corresponding to the image block training set is greater than a preset score threshold, determining the classifier corresponding to the image block training set as a qualified classifier;
step six, if the number of the qualified classifiers reaches the preset number, taking the qualified classifiers with the preset number as the classifiers of the classification model; and if the number of the qualified classifiers does not reach the preset number, returning to the step two.
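The six training steps above can be sketched end-to-end as follows. This is a minimal illustration under stated assumptions: the sample images, the thresholded mean-brightness "classifier", and the accuracy score are simplified stand-ins, since the patent does not fix a concrete classifier or feature; all names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the sample image set: 8 "genuine" and 8 "counterfeit"
# 64x64 grayscale images; counterfeits are systematically darker (an assumption
# made only so that the toy classifier has something to separate).
genuine = rng.integers(120, 200, size=(8, 64, 64)).astype(float)
fake = rng.integers(40, 120, size=(8, 64, 64)).astype(float)
images = np.concatenate([genuine, fake])      # the sample image set
labels = np.array([1] * 8 + [0] * 8)          # 1 = genuine, 0 = counterfeit

PATCH, SCORE_THRESHOLD, PRESET_NUMBER = 16, 0.9, 3

def train_threshold_classifier(patches, labels):
    # Toy stand-in for "train an initial classifier": a threshold halfway
    # between the mean brightness of genuine and counterfeit patches.
    means = patches.reshape(len(patches), -1).mean(axis=1)
    return (means[labels == 1].mean() + means[labels == 0].mean()) / 2

def classify(patches, thr):
    means = patches.reshape(len(patches), -1).mean(axis=1)
    return (means > thr).astype(int)

qualified = []  # (region top-left corner, trained threshold)
while len(qualified) < PRESET_NUMBER:            # step six: loop until enough
    # step two: pick a region (random coordinate point) and cut the same
    # PATCH x PATCH block out of every sample image
    y, x = rng.integers(0, 64 - PATCH, size=2)
    patches = images[:, y:y + PATCH, x:x + PATCH]
    thr = train_threshold_classifier(patches, labels)   # step three
    score = (classify(patches, thr) == labels).mean()   # step four
    if score > SCORE_THRESHOLD:                         # step five: qualified
        qualified.append(((y, x), thr))

print(len(qualified))
```

The while loop mirrors the return from step six to step two: new regions keep being sampled until the preset number of qualified classifiers has been collected.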
Optionally, the size of the image to be identified is the same as that of the sample image in the sample image set, and the position of the product to be identified in the image to be identified is the same as that in the sample image; the extracting the feature vector of the image to be identified comprises the following steps:
extracting the feature vectors of the image blocks of the areas to be identified in the image to be identified, wherein the coordinates of each area to be identified are the coordinates, in the sample images, of the area whose image blocks were used to train a classifier comprised by the classification model.
Optionally, the inputting the feature vector into a classification model to obtain the authenticity of the product to be identified output by the classification model includes:
inputting the feature vectors of the image blocks of the areas to be recognized in the image to be recognized into the classification model, so that each classifier included in the classification model recognizes the feature vector of the image block of its corresponding area to be recognized and obtains the classification result of that image block, and the classification model outputs the authenticity of the product to be recognized based on the classification results of the image blocks of the areas to be recognized;
and acquiring the authenticity of the product to be identified output by the classification model.
Optionally, the extracting the feature vectors of the image blocks of each to-be-identified region in the to-be-identified image includes:
aiming at each area to be identified in the image to be identified, obtaining a gray scale image obtained after image gray scale conversion of an image block of the area to be identified;
and extracting a feature vector of the image block according to the gray-scale image, wherein the feature vector comprises a plurality of elements, each element corresponds to a specified brightness, and each element is the number of pixel points of the specified brightness corresponding to the element in the gray-scale image.
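The feature vector described here is a grayscale histogram: one element per specified brightness, holding the count of pixels at that brightness. A minimal numpy sketch follows; the choice of 256 brightness levels is an assumption (the text only says "a plurality of elements").

```python
import numpy as np

def histogram_feature(gray_block: np.ndarray, levels: int = 256) -> np.ndarray:
    """Feature vector of an image block: element i is the number of pixels
    in the grayscale image whose brightness equals i."""
    return np.bincount(gray_block.ravel(), minlength=levels)

# A 4x4 grayscale block: twelve pixels of brightness 10, four of brightness 200.
block = np.full((4, 4), 10, dtype=np.uint8)
block[0, :] = 200
feat = histogram_feature(block)
print(feat[10], feat[200])  # -> 12 4
```

The vector length is fixed by the number of brightness levels, so blocks of the same size from different images yield directly comparable features.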
In a second aspect, an embodiment of the present invention provides an image recognition apparatus, including:
the system comprises an acquisition module, a recognition module and a recognition module, wherein the acquisition module is used for acquiring an image to be recognized, and the image to be recognized is an image of a product to be recognized;
the extraction module is used for extracting the feature vector of the image to be identified acquired by the acquisition module;
the classification module is used for inputting the feature vectors extracted by the extraction module into a classification model to obtain the authenticity of the product to be identified output by the classification model, the classification model is obtained by training a neural network model based on a sample image set, the sample image set comprises a positive sample image and a negative sample image, the positive sample image is the genuine image of the product to be identified, and the negative sample image is the counterfeit image of the product to be identified.
Optionally, the apparatus further includes a training module, where the training module is configured to perform:
step one, obtaining the sample image set, wherein the sample images in the sample image set have the same size, and the products to be identified have the same position in the sample images;
step two, acquiring a plurality of image block training sets, wherein each image block training set comprises the image blocks of the same area in each sample image and the authenticity of the product corresponding to each image block;
step three, aiming at each image block training set, training an initial classifier through the image block training set to obtain a classifier corresponding to the image block training set;
step four, for each image block training set, inputting the feature vectors of the image blocks included in the image block training set into the classifier corresponding to that training set, and determining the accuracy score of the classifier based on the classification result of each image block output by the classifier, wherein the classification result is used for representing the authenticity of the product to be identified in the image block;
step five, aiming at each image block training set, if the accuracy score of the classifier corresponding to the image block training set is greater than a preset score threshold, determining the classifier corresponding to the image block training set as a qualified classifier;
step six, if the number of the qualified classifiers reaches the preset number, taking the qualified classifiers with the preset number as the classifiers of the classification model; and if the number of the qualified classifiers does not reach the preset number, returning to the step two.
Optionally, the size of the image to be identified is the same as the size of the sample image in the sample image set, and the position of the product to be identified in the image to be identified is the same as the position in the sample image; the extraction module is specifically configured to:
extracting the feature vectors of the image blocks of the areas to be identified in the image to be identified, wherein the coordinates of each area to be identified are the coordinates, in the sample images, of the area whose image blocks were used to train a classifier comprised by the classification model.
Optionally, the classification module is specifically configured to:
inputting the feature vectors of the image blocks of the areas to be recognized in the image to be recognized into the classification model, so that each classifier included in the classification model recognizes the feature vector of the image block of its corresponding area to be recognized and obtains the classification result of that image block, and the classification model outputs the authenticity of the product to be recognized based on the classification results of the image blocks of the areas to be recognized;
and acquiring the authenticity of the product to be identified output by the classification model.
Optionally, the extracting module is specifically configured to:
aiming at each to-be-identified area in the to-be-identified image, obtaining a gray scale image obtained after image gray scale conversion is carried out on an image block of the to-be-identified area;
and extracting a feature vector of the image block according to the gray-scale image, wherein the feature vector comprises a plurality of elements, each element corresponds to a specified brightness, and each element is the number of pixel points of the specified brightness corresponding to the element in the gray-scale image.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor and the communication interface complete communication between the memory and the processor through the communication bus;
a memory for storing a computer program;
and the processor is used for realizing the steps of any image identification method when executing the program stored in the memory.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium in which a computer program is stored; when executed by a processor, the computer program implements the steps of any of the image identification methods described above.
In a fifth aspect, embodiments of the present invention also provide a computer program product including instructions, which when run on a computer, cause the computer to perform any of the image recognition methods described above.
The technical scheme of the embodiments of the invention can at least bring the following beneficial effects: the authenticity of the product to be identified can be determined by the classification model from an image of the product. Therefore, compared with manual identification of product authenticity, the embodiments of the invention can identify the authenticity of the product automatically, without relying on subjective human judgment, which improves the accuracy of identifying the authenticity of the product.
Of course, not all of the advantages described above need to be achieved at the same time in the practice of any one product or method of the invention.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of an image recognition method according to an embodiment of the present invention;
FIG. 2 is a flowchart of a method for determining a classification model according to an embodiment of the present invention;
FIG. 3a is an exemplary diagram of a sample image according to an embodiment of the present invention;
FIG. 3b is an exemplary diagram of another sample image provided by an embodiment of the invention;
FIG. 3c is an exemplary diagram of another sample image provided by an embodiment of the present invention;
fig. 4 is a flowchart of a method for determining authenticity of a product to be identified according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an image recognition apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to improve the accuracy of identifying the authenticity of a product, the embodiment of the invention provides an image identification method, which can be applied to electronic equipment, wherein the electronic equipment can be equipment with an image processing function, such as a mobile phone, a computer, a tablet computer and the like. Referring to fig. 1, the method includes the following steps.
Step 101, an image to be identified is obtained. The image to be identified is an image of a product to be identified.
And 102, extracting the characteristic vector of the image to be identified.
And 103, inputting the feature vectors into the classification model, and acquiring the authenticity of the product to be identified output by the classification model.
The classification model is obtained by training a neural network model based on a sample image set, the sample image set comprises a positive sample image and a negative sample image, the positive sample image is a genuine product image of a product to be identified, and the negative sample image is a counterfeit product image of the product to be identified.
The technical scheme of the embodiments of the invention can at least bring the following beneficial effects: the authenticity of the product to be identified can be determined by the classification model from an image of the product. Therefore, compared with manual identification of product authenticity, the embodiments of the invention can identify the authenticity of the product automatically, without relying on subjective human judgment, thereby improving the accuracy of identifying the authenticity of the product.
Optionally, the image to be identified acquired in the embodiment of the present invention may be a scanned image of the product to be identified, or may also be a photographed image of the product to be identified.
Illustratively, the product to be identified may be a cigarette, white spirit, cosmetics, or the like. Taking the product to be identified as a cigarette as an example, the image to be identified may be a scanned image of the unfolded cigarette package.
In an embodiment, before the feature vector of the image to be recognized is extracted in step 102, image preprocessing may be performed on the image to be recognized, and then the feature vector of the image to be recognized after the image preprocessing is extracted.
In the embodiment of the invention, the sample image can also be subjected to image preprocessing and then the feature vector is extracted, the size of the image to be identified after the image preprocessing is the same as that of the sample image after the preprocessing, and the position of the product to be identified in the image to be identified after the preprocessing is the same as that in the sample image after the preprocessing.
The processing steps included in image preprocessing can be determined according to actual needs. For example, image contour approximation, image perspective transformation and target-area extraction can be performed on the image to be recognized in sequence to obtain the preprocessed image to be recognized.
It can be understood that the image of a product may include a foreground region and a background region, where the foreground region corresponds to the product and the background region is blank. Image contour approximation obtains the contour of the foreground region from the image. Image perspective transformation extracts the foreground region based on that contour and adjusts its rotation angle and size. Target-area extraction then crops the target area from the foreground region, so that after preprocessing every image has the same rotation angle and size and contains only the target area. The target area may be an area designated in the image, for example a trademark area.
Further, before the image contour approximation is carried out on the image, any one or more processing steps of image enhancement, image filtering and image binarization can be carried out on the image to be recognized.
The technical scheme of the embodiments of the invention can also bring the following beneficial effect: by preprocessing the image to be recognized, interference factors such as the rotation angle and size of the image, which affect the recognition accuracy of the classification model, are reduced, improving the accuracy of determining product authenticity.
In one embodiment of the present invention, to implement the method flow shown in fig. 1, a classification model is determined, wherein the classification model includes a plurality of classifiers. Referring to fig. 2, the method of training a classification model includes the following steps.
Step 201, obtaining a sample image set.
The sample image set comprises sample images with the same size, and the positions of products to be identified in the sample images are the same.
In the embodiment of the invention, the sample images can be preprocessed, so that the preprocessed sample images have the same size and the products to be recognized have the same position in the sample images. The method for preprocessing the sample image is the same as the method for preprocessing the image to be recognized, and the preprocessing process of the image to be recognized can be referred to, which is not described herein again.
It can be understood that the sample image before preprocessing includes a foreground region and a background region, the foreground region corresponds to the product to be identified, and the background region is a blank region. The pre-processed sample image may comprise only foreground regions, or the pre-processed image may comprise target regions in the foreground regions.
Step 202, a plurality of image block training sets are obtained, wherein each image block training set comprises image blocks in the same area in each sample image.
In the embodiment of the invention, the sample image comprises an image of a product to be identified, and the image block of the sample image is a partial image of the product to be identified.
For example, the sample image set includes 8 sample images and each sample image is divided into 4 areas; fig. 3a shows the 4 areas of one sample image, and all 8 sample images are divided in the manner shown in fig. 3a. Each image block training set then includes the image blocks of the same area from the 8 sample images: image block training set 1 comprises the 8 image blocks of area A, and image block training set 2 comprises the 8 image blocks of area B.
In one embodiment, a plurality of coordinate points may be determined, then, for each coordinate point, image blocks of an area with a specified shape and size and centered on the coordinate point in each sample image are obtained, and then the obtained image blocks are combined into an image block training set. And obtaining a plurality of image block training sets.
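Cutting a fixed-size block centered on each coordinate point out of every sample image can be done with plain array slicing. The sketch below assumes rectangular M x M blocks and points far enough from the image border (the source does not specify border handling); the names are hypothetical.

```python
import numpy as np

def patch_around(image: np.ndarray, center: tuple, m: int) -> np.ndarray:
    """Cut an m x m image block centered on a coordinate point (row, col).
    Assumes the point lies at least m//2 pixels inside the border."""
    r, c = center
    half = m // 2
    return image[r - half:r - half + m, c - half:c - half + m]

image = np.arange(100).reshape(10, 10)  # stand-in for one sample image
points = [(3, 3), (6, 7)]               # hypothetical coordinate points

# One image block training set per coordinate point; with several sample
# images, the list for each point would hold one block per image.
training_sets = {p: [patch_around(image, p, 4)] for p in points}
print(training_sets[(3, 3)][0].shape)  # -> (4, 4)
```

With a full sample image set, the same slice indices are applied to every image, so each training set collects blocks of exactly the same area, as step two requires.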
The designated shape may be a rectangle, a circle, a polygon, and the like, which is not particularly limited in this embodiment of the present invention.
For example, as shown in fig. 3b, the preset coordinate points are a, b, c, d, e, and f, and each of the dashed boxes in fig. 3b represents a rectangular image block with a size of M × M centered on one of the preset coordinate points.
In the embodiment of the present invention, the plurality of coordinate points may be randomly generated coordinate points, or preset coordinate points, or include randomly generated coordinate points and preset coordinate points.
It can be understood that, because of the influence of the manufacturing process, different areas on a product package differ in the difficulty of processing steps such as printing or embossing. For example, a trademark is harder to reproduce, so the trademark on a counterfeit package differs greatly from the trademark on a genuine package; the trademark center can therefore be set as a preset coordinate point, so that the image block centered on that point includes the trademark area.
In addition, since the human ability to distinguish appearance attributes such as color and size is limited, some areas that contribute greatly to distinguishing product authenticity cannot be specified manually. Coordinate points can therefore be generated randomly, letting the classifiers discover how much the area centered on each such point contributes to distinguishing authenticity.
Optionally, the image blocks included in each sample image may be further divided into an image training set and an image test set, where there is no intersection between the two sets. And training by using an image training set to obtain a classification model, and testing the identification accuracy of the classification model by using an image test set.
Step 203, aiming at each image block training set, training the initial classifier through the image block training set to obtain a classifier corresponding to the image block training set.
In one embodiment, for each image block training set, the feature vectors of the image blocks included in the set may be input to the initial classifier to obtain the classification result of each image block output by the initial classifier. A loss function is then calculated from these classification results and the authenticity of the product to be identified in the sample image to which each image block belongs. Whether the initial classifier has converged is judged from the loss function; if it has not converged, its model parameters are adjusted and the next training iteration is performed. When the initial classifier converges, the classifier corresponding to the image block training set is obtained.
Optionally, the classifier in the embodiments of the present invention may be a naive Bayes model, a decision tree, a random forest, a logistic regression model, or the like; the embodiments do not specifically limit the form of the classifier.
And 204, inputting the feature vectors of the image blocks included in the training set of the image blocks into a classifier corresponding to the training set of the image blocks aiming at each training set of the image blocks, and determining the accuracy score of the classifier based on the classification result of each image block output by the classifier.
And the classification result is used for representing the authenticity of the product to be identified in the image block.
In the embodiments of the present invention, the output of the classifier is a matrix of 0s and/or 1s, and each element of the matrix represents the classification result of the image block corresponding to one input feature vector, where 0 indicates that the product to be identified in the image block is recognized as counterfeit and 1 indicates that it is recognized as genuine.
For example, a classification result of [0, 1, 1, 0] output by a classifier indicates that, among the four input feature vectors, the products to be identified in the image blocks corresponding to two feature vectors are genuine, and those corresponding to the other two feature vectors are counterfeit.
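The per-block 0/1 results described above must eventually be combined into a single verdict for the product. The patent does not fix the combination rule; the sketch below assumes a simple majority vote, which is one common choice, not the specified method.

```python
def aggregate(classification_results):
    """Combine per-block results (1 = genuine, 0 = counterfeit) into one
    verdict. Majority vote is an assumption; ties favor 'genuine' here."""
    return int(sum(classification_results) * 2 >= len(classification_results))

print(aggregate([1, 1, 0, 1]))  # -> 1 (three of four blocks look genuine)
print(aggregate([0, 0, 1, 0]))  # -> 0 (three of four blocks look counterfeit)
```

Other rules, such as requiring all blocks to be classified as genuine, would make the model stricter; the choice is a design decision left open by the text.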
In one embodiment, the initial classifier may be cross-validated based on the classification result of each image block output by the classifier and the authenticity of the product to be identified in each sample image, and a cross-validated f1 score (f1_score) may be calculated.
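The f1 score combines precision and recall over the classification results and the ground-truth authenticity labels (1 = genuine, 0 = counterfeit). A self-contained sketch of its computation might look like this (`f1_score` here is a local illustrative function, not a reference to any particular library):

```python
def f1_score(y_true, y_pred):
    """Harmonic mean of precision and recall for binary labels (1 = genuine)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For cross-validation, this score would be computed on each held-out fold and averaged; libraries such as scikit-learn provide this combination directly, though the patent does not name a specific implementation.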
Step 205, for each image block training set, if the accuracy score of the classifier corresponding to the image block training set is greater than a preset score threshold, determining that the classifier corresponding to the image block training set is a qualified classifier.
Alternatively, the accuracy score may be the f1_score, or another score capable of evaluating the accuracy of the classifier's identification results, and the preset score threshold may be determined according to the actual situation. For example, the preset score threshold is 0.9.
And step 206, judging whether the number of the qualified classifiers reaches a preset number. If the number of qualified classifiers reaches the preset number, execute step 207; if the number of qualified classifiers does not reach the preset number, the process returns to step 202.
In one embodiment, the above steps 203-205 may be performed separately for each training set of image blocks in order.
In another embodiment, the above steps 203 to 205 may be executed in parallel for each image block training set. Since the time consumed by the training and prediction of each classifier differs, once a preset number of qualified classifiers has been obtained, the training of the remaining classifiers and the determination of whether they are qualified may be stopped.
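One possible way to run the per-training-set work in parallel and stop early is sketched below. The use of `concurrent.futures` is an assumed choice, and `train_and_score` is a hypothetical callable standing in for steps 203 to 205 (it returns a classifier and its accuracy score for one training set).

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def collect_qualified(training_sets, train_and_score, score_threshold, preset_number):
    """Train/score each image-block training set in parallel; stop once
    preset_number classifiers score above score_threshold."""
    qualified = []
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(train_and_score, ts) for ts in training_sets]
        for fut in as_completed(futures):
            classifier, score = fut.result()
            if score > score_threshold:
                qualified.append(classifier)
            if len(qualified) >= preset_number:
                for other in futures:
                    other.cancel()  # best effort: stop work not yet started
                break
    return qualified
```

Note that `cancel()` only prevents not-yet-started tasks from running; tasks already executing finish normally, which matches "stopping the determination" rather than aborting it mid-training.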
Optionally, in order to improve the accuracy of model identification, the situation in which the image blocks corresponding to the classifiers included in the classification model are concentrated in the same image area may be reduced. To this end, the sample image may be divided into a plurality of image areas, each of which may include at least one image block; from the qualified classifiers corresponding to the image blocks in each image area, at most a preset upper-limit number may be selected for the classification model, and the sum over all image areas of the numbers of selected qualified classifiers is equal to the preset number.
Therefore, when it is determined that image blocks corresponding to a preset upper limit number of qualified classifiers belong to the same image area, if a qualified classifier corresponding to an image block belonging to the image area is determined again, the qualified classifier is not used as a classifier of a classification model.
For example, fig. 3c is a sample image, and each square dotted-line box in fig. 3c represents an image block of size M × M centered on a preset coordinate point. The sample image is divided into 6 image areas, namely an upper left corner area 1, an upper right corner area 2, a middle left area 3, a middle right area 4, a lower left corner area 5 and a lower right corner area 6. The upper left corner region 1 includes an image block with a as a center point, the upper right corner region 2 includes an image block with b as a center point and an image block with c as a center point, the middle left region 3 includes an image block with d as a center point and an image block with e as a center point, the middle right region 4 includes an image block with f as a center point and an image block with g as a center point, the lower left corner region 5 includes an image block with h as a center point, an image block with i as a center point, an image block with j as a center point and an image block with k as a center point, and the lower right corner region 6 includes an image block with l as a center point and an image block with m as a center point. Assuming that the preset upper limit number is 3, namely, qualified classifiers corresponding to at most 3 image blocks in each image region can be determined as the classifiers included in the classification model. Assuming that all the classifiers corresponding to the image block with h as the central point, the image block with i as the central point, and the image block with j as the central point are qualified classifiers, the image blocks corresponding to the 3 qualified classifiers belong to the lower left corner region of the sample image. 
If the classifier corresponding to the image block with k as the center point is determined to be the qualified classifier, the image block with k as the center point still belongs to the lower left corner area of the sample image, and the qualified classifier is not used as the classifier of the classification model.
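The per-area cap described above can be sketched as a simple selection loop. This is an illustrative sketch: `select_model_classifiers`, the `(block_id, classifier)` stream, and the region mapping are assumed names, not terms from the patent.

```python
def select_model_classifiers(qualified_stream, block_region, region_cap, preset_number):
    """Pick classifiers for the classification model from a stream of
    (block_id, classifier) pairs, keeping at most region_cap per image
    area, until preset_number classifiers have been selected.

    block_region maps a block id (e.g. its centre point label) to its area index.
    """
    per_region = {}
    selected = []
    for block_id, clf in qualified_stream:
        region = block_region[block_id]
        if per_region.get(region, 0) >= region_cap:
            continue  # area already at its upper limit: skip this classifier
        per_region[region] = per_region.get(region, 0) + 1
        selected.append((block_id, clf))
        if len(selected) == preset_number:
            break
    return selected
```

With the fig. 3c example (blocks h, i, j, k in the lower left corner area and a cap of 3), the classifier for block k is skipped exactly as described above.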
Optionally, the preset score threshold may be adjusted according to the actual situation. For example, if, during the iterative process of obtaining the classification model, the process repeatedly returns to step 202 because the preset number of qualified classifiers has not been obtained, the preset score threshold may be lowered, that is, the standard for a qualified classifier may be relaxed, so that more classifiers can be determined to be qualified.
And step 207, taking a preset number of qualified classifiers as classifiers of the classification model.
The technical scheme of the embodiment of the invention can also bring the following beneficial effects: the embodiment of the invention can train one classifier for each different area in the sample image and retain the classifiers with higher recognition accuracy, so that the authenticity of the product to be recognized in the image to be recognized can be determined with higher accuracy using these classifiers. When tested on a test set, the image identification method provided by the embodiment of the invention can determine the authenticity of the product to be identified with an accuracy rate of more than 95%.
With reference to fig. 2, the method for extracting the feature vector of the image to be recognized according to the embodiment of the present invention includes: extracting the feature vectors of the image blocks of the areas to be identified in the image to be identified. The coordinates of the areas to be identified are the coordinates, in the sample image, of the areas of the image blocks used to train the classifiers included in the classification model.
In one embodiment, the method for extracting the feature vector of the image block of each to-be-identified area in the to-be-identified image may include the following steps.
Step one, aiming at each area to be identified in the image to be identified, obtaining a gray-scale image obtained after image gray-scale conversion is carried out on an image block of the area to be identified.
In the embodiment of the invention, the image to be recognized may be stored as a three-dimensional matrix, with the three dimensions corresponding to the red, green and blue color channels respectively.
In one embodiment, the gray scale conversion may be performed on the image to be recognized, so as to obtain a gray scale image of the image to be recognized. And then aiming at each area to be identified in the image to be identified, obtaining the image block of the area to be identified.
In the embodiment of the invention, the center points of the areas of the image blocks used to train the classification model may be recorded in the sample image while the classification model is obtained. When extracting the feature vector of the image to be recognized, for each recorded center point, an area of the specified size centered on that point may be determined as an area to be recognized, and the feature vector of the image block of that area is extracted.
And the size of the image block of the area to be recognized is the same as that of the image block in the image training set. The image to be recognized comprises an image of a product to be recognized, and the image block of the area to be recognized is a partial image of the product to be recognized.
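Cutting an image block of the trained size around a recorded center point can be sketched as a single slicing operation. This is an illustrative helper under the assumption that the grayscale image is a 2-D array and that each center point lies far enough from the border for a full block to fit, as in the training setup.

```python
import numpy as np

def extract_patch(gray, center, size):
    """Cut a size x size image block centred on center = (x, y) from a
    2-D grayscale image, matching the block geometry recorded when the
    classifiers were trained."""
    half = size // 2
    cx, cy = center
    # rows are the y axis, columns the x axis
    return gray[cy - half:cy - half + size, cx - half:cx - half + size]
```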
Optionally, before performing image gray scale conversion on the image to be recognized, image size conversion may be performed on the image to be recognized, then image filtering and denoising are performed on the image to be recognized after the image size conversion, and then image gray scale conversion is performed on the image to be recognized after the image filtering and denoising are performed.
The image size conversion means converting the size of an image to be identified into a specified size, and the image filtering and denoising means inhibiting image noise and reducing the influence of the image noise on the authenticity identification result of a product to be identified under the condition of keeping the detail characteristics of the image.
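The preprocessing chain above (size conversion, then filtering/denoising, then grayscale conversion) can be sketched as follows. Everything concrete here is an assumption for illustration: the patent does not fix the resize method (nearest neighbour below), the filter (a 3x3 median filter below), or the grayscale formula (common luminance weights below).

```python
import numpy as np

def median3(channel):
    """3x3 median filter with edge padding: one simple denoising choice that
    suppresses noise while keeping detail edges reasonably intact."""
    h, w = channel.shape
    padded = np.pad(channel, 1, mode='edge')
    shifts = np.stack([padded[i:i + h, j:j + w]
                       for i in range(3) for j in range(3)])
    return np.median(shifts, axis=0)

def preprocess(img, target_h, target_w):
    """Image size conversion, then filtering/denoising, then grayscale."""
    h, w, _ = img.shape
    rows = np.arange(target_h) * h // target_h  # nearest-neighbour resize
    cols = np.arange(target_w) * w // target_w
    resized = img[rows][:, cols]
    filtered = np.stack([median3(resized[..., c]) for c in range(3)], axis=-1)
    # luminance weights for grayscale conversion (an assumed convention)
    return (0.299 * filtered[..., 0] + 0.587 * filtered[..., 1]
            + 0.114 * filtered[..., 2])
```

In practice an image library (e.g. OpenCV or Pillow) would supply these operations; the point of the sketch is only the order of the three stages.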
And step two, extracting the feature vector of the image block according to the gray-scale image.
The feature vector comprises a plurality of elements, each element corresponds to one designated brightness, and each element is the number of pixels with the designated brightness corresponding to the element in the gray-scale image.
It can be understood that, the brightness of different colors after image grayscale conversion is different, the specified brightness in the embodiment of the present invention may be determined according to actual needs, and this is not specifically limited in the embodiment of the present invention.
For example, the specified luminances include 0 to 255, and the feature vector includes 256 elements, one for each specified luminance. The feature vector may be denoted [L0, L1, …, L254, L255], where each element Li represents the total number of pixels with brightness i in the gray-scale image; for example, L0 is the total number of pixels with brightness 0 in the gray-scale image.
As another example, the specified luminances include 100 and 150, and the feature vector includes 2 elements, one for each specified luminance. The feature vector may be denoted [S100, S150], where S100 represents the total number of pixels with brightness 100 in the gray-scale image, and S150 represents the total number of pixels with brightness 150 in the gray-scale image.
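The feature vector described above is simply a luminance histogram restricted to the specified brightness values. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def luminance_histogram(gray, luminances=range(256)):
    """Feature vector of an image block: one element per specified luminance,
    each counting the pixels of that luminance in the grayscale block."""
    flat = np.asarray(gray, dtype=np.int64).ravel()
    counts = np.bincount(flat, minlength=256)  # counts[i] = pixels of brightness i
    return np.array([counts[l] for l in luminances])
```

With the default `luminances`, this yields the full 256-element vector [L0, …, L255]; passing `[100, 150]` yields the 2-element vector [S100, S150] of the second example.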
The technical scheme of the embodiment of the invention can also bring the following beneficial effects: in the related art, because different inspectors have different color sensitivities and are prone to visual fatigue when comparing genuine and counterfeit products, manual identification of genuine and counterfeit products is highly subjective.
In the embodiment of the present invention, because the luminance of different colors after image grayscale conversion is different, and the feature vector of the image block is extracted from the grayscale image of the image block, the feature vector of the image block can represent the color of the image block. The embodiment of the invention can automatically identify the authenticity of the product to be identified based on the color of the image block included in the image to be identified, so that the identification result is more objective and more accurate.
In an embodiment of the present invention, referring to fig. 4, the manner for acquiring the authenticity of the product to be identified in step 103 includes:
step 401, the electronic device inputs the feature vectors of the image blocks of each to-be-identified area in the to-be-identified image into the classification model.
In the embodiment of the present invention, the image blocks of the areas to be recognized correspond to the image blocks used to train the classifiers included in the classification model. For example, when the image blocks used to train the classifiers included in the classification model are the 4 image blocks shown in fig. 3a, the image to be recognized is also divided into 4 areas to be recognized in the manner shown in fig. 3 a. When the image blocks used to train the classifiers included in the classification model are the 6 image blocks shown in fig. 3b, the image to be recognized is also divided into 6 areas to be recognized in the manner shown in fig. 3 b.
And 402, identifying the feature vectors of the image blocks of the corresponding to-be-identified areas by each classifier included in the classification model to obtain the classification result of the image blocks of the to-be-identified areas, and outputting the authenticity of the to-be-identified product by the classification model based on the classification result of the image blocks of the to-be-identified areas.
And the classification model outputs the authenticity of the product to be recognized based on the classification result of the image block of each area to be recognized.
In one embodiment, if the number of image blocks whose classification result indicates that the product to be recognized is genuine is greater than a specified number, or the proportion of such image blocks is greater than a preset proportion, the classification model outputs that the product to be recognized is genuine. If the number of image blocks whose classification result indicates that the product to be recognized is genuine is not greater than the specified number, or the proportion of such image blocks is not greater than the preset proportion, the classification model outputs that the product to be recognized is a counterfeit.
For example, the specified number may be half of the number of classifiers included in the classification model, or the specified number may also be determined according to actual needs, which is not specifically limited in the embodiment of the present invention.
Alternatively, the authenticity of the product to be identified may be determined from the classification results of the image blocks of the areas to be identified in other ways. For example, weights may be assigned to the areas to be identified, and the authenticity of the product to be identified determined according to the respective weights of the areas to be identified and the classification results of their image blocks. The embodiment of the present invention is not particularly limited in this regard.
In one embodiment, the output result of the classifiers may be a one-dimensional matrix including 0 and/or 1 for indicating the authenticity of the product to be recognized in the image blocks of the areas to be recognized. For example, 1 represents that the product to be identified in the image block is identified as a genuine product, and 0 represents that it is identified as a counterfeit product. The sum of the elements included in the one-dimensional matrix can be calculated. When the calculated sum is greater than N/2, where N is the total number of areas to be identified, more than half of the image blocks of the areas to be identified are identified as genuine, and the classification model outputs that the product to be identified is genuine. When the calculated sum is not greater than N/2, no more than half of the image blocks of the areas to be identified are identified as genuine, and the classification model outputs that the product to be identified is a counterfeit.
Optionally, to avoid the case where the calculated sum is exactly equal to N/2, the number of classifiers included in the classification model, that is, the number of areas to be identified (the preset number), may be set to an odd number.
For example, the classification results of the image blocks of the areas to be identified in the image to be identified, output by the classifiers included in the classification model, are: [0, 1, 1, 0, 1, 1, 1]. This indicates that, among the image blocks of the 7 areas to be recognized, the product to be recognized in 2 image blocks is recognized as a counterfeit and the product to be recognized in 5 image blocks is recognized as genuine. Since 5 > 7/2, the product to be identified is determined to be genuine.
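The majority-vote rule above reduces to a one-line comparison. A minimal sketch (the function name is illustrative):

```python
def vote_authenticity(results):
    """Majority vote over per-area classification results (1 = genuine,
    0 = counterfeit): genuine iff more than half of the N areas vote genuine."""
    n = len(results)
    return 1 if sum(results) > n / 2 else 0
```

With an odd N, as recommended above, `sum(results)` can never equal N/2, so the vote is always decisive.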
And step 403, the electronic equipment acquires the authenticity of the product to be identified output by the classification model.
The technical scheme of the embodiment of the invention can also bring the following beneficial effects: when the authenticity of a new product needs to be identified, a classification model can be trained using positive and negative sample images of the new product, and the classification model can then be used to identify the authenticity of the product to be identified in the image to be identified. Therefore, when a new product is to be identified, the flow of the identification method does not change; only the sample images need to be obtained again and the classification model retrained, so the image identification method provided by the embodiment of the application has good model migration capability.
And the embodiment of the invention can automatically determine the authenticity of the product to be identified, thereby reducing the labor cost consumed by identifying the authenticity of the product to be identified.
In addition, the related art can also distinguish the authenticity of a product by a physical detection method. Taking a cigarette as the product to be identified as an example, the tobacco in the cigarette is extracted by a physical detection method, the components of the tobacco are detected, the detected components are compared with the components of genuine tobacco, and the product is determined to be genuine when the detected components are the same as those of the genuine tobacco. This method is complicated to implement, and detecting the tobacco components consumes considerable time and cost.
In the related art, a near infrared spectrum method can also be used when distinguishing the authenticity of a product: a near infrared standard spectrum library of the raw materials is established in advance, and a near infrared analyzer is used to obtain the spectrum of the product, so as to distinguish its authenticity. However, this method of analyzing the authenticity of a product with an infrared analyzer is also complicated.
The embodiment of the invention can automatically detect the authenticity of the product to be identified based on the image of the cigarette outer package, does not need to detect tobacco components, does not need to scan the product by using an infrared analyzer, saves the time and cost consumed by determining the authenticity of the product, and is more suitable for the conditions of large quantity of products to be identified, large variety of products and complex product image.
Based on the same inventive concept, corresponding to the above method embodiment, an embodiment of the present invention provides an image recognition apparatus, as shown in fig. 5, the apparatus including: an acquisition module 501, an extraction module 502 and a classification module 503;
the acquiring module 501 is configured to acquire an image to be identified, where the image to be identified is an image of a product to be identified;
an extracting module 502, configured to extract the feature vector of the image to be identified, acquired by the acquiring module 501;
the classification module 503 is configured to input the feature vector extracted by the extraction module 502 into a classification model, and obtain authenticity of the product to be identified output by the classification model, where the classification model is a model obtained by training a neural network model based on a sample image set, the sample image set includes a positive sample image and a negative sample image, the positive sample image is a genuine image of the product to be identified, and the negative sample image is a counterfeit image of the product to be identified.
Optionally, the apparatus may further include a training module 504, where the training module 504 is configured to perform:
the method comprises the following steps of firstly, obtaining a sample image set, wherein the sample image set comprises sample images with the same size, and products to be identified have the same position in the sample images;
acquiring a plurality of image block training sets, wherein each image block training set comprises image blocks in the same area in each sample image and the authenticity of a product corresponding to each image block;
step three, aiming at each image block training set, training an initial classifier through the image block training set to obtain a classifier corresponding to the image block training set;
step four, inputting the feature vectors of the image blocks included in the image block training set into a classifier corresponding to the image block training set aiming at each image block training set, and determining the accuracy score of the classifier based on the classification result of each image block output by the classifier, wherein the classification result is used for representing the authenticity of a product to be identified in the image block;
step five, aiming at each image block training set, if the accuracy score of the classifier corresponding to the image block training set is greater than a preset score threshold, determining the classifier corresponding to the image block training set as a qualified classifier;
step six, if the number of the qualified classifiers reaches the preset number, taking the qualified classifiers with the preset number as classifiers of the classification model; and if the number of the qualified classifiers does not reach the preset number, returning to the step two.
Optionally, the size of the image to be recognized is the same as that of the sample image in the sample image set, and the position of the product to be recognized in the image to be recognized is the same as that in the sample image; an extraction module specifically configured to:
extracting the characteristic vectors of the image blocks of the areas to be identified in the images to be identified, wherein the coordinates of the areas to be identified are as follows: the image blocks used to train the classifier comprised by the classification model, the coordinates of the regions in the sample image.
Optionally, the classifying module 503 may be specifically configured to:
inputting the feature vectors of the image blocks of the areas to be identified in the image to be identified into a classification model, so that each classifier included in the classification model identifies the feature vectors of the image blocks of the corresponding areas to be identified to obtain the classification result of the image blocks of the areas to be identified, and the classification model outputs the authenticity of the product to be identified based on the classification result of the image blocks of the areas to be identified;
and acquiring the authenticity of the product to be identified output by the classification model.
Optionally, the extracting module 502 may be specifically configured to:
aiming at each to-be-identified area in the to-be-identified image, obtaining a gray scale image obtained after image gray scale conversion is carried out on an image block of the to-be-identified area;
and extracting a feature vector of the image block according to the gray-scale image, wherein the feature vector comprises a plurality of elements, each element corresponds to one designated brightness, and each element is the number of pixels with the designated brightness corresponding to the element in the gray-scale image.
An embodiment of the present invention further provides an electronic device, as shown in fig. 6, including a processor 601, a communication interface 602, a memory 603, and a communication bus 604, where the processor 601, the communication interface 602, and the memory 603 complete mutual communication through the communication bus 604,
a memory 603 for storing a computer program;
the processor 601 is configured to implement the method steps in the above method embodiments when executing the program stored in the memory 603.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Alternatively, the memory may be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In yet another embodiment of the present invention, a computer-readable storage medium is further provided, in which a computer program is stored, and the computer program realizes the steps of any one of the above image recognition methods when executed by a processor.
In yet another embodiment provided by the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform any of the image recognition methods of the above embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the invention to be performed in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website site, computer, server, or data center to another website site, computer, server, or data center via wired (e.g., coaxial cable, fiber optic, digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional like elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on differences from other embodiments. In particular, as for the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. An image recognition method, characterized in that the method comprises:
acquiring an image to be identified, wherein the image to be identified is an image of a product to be identified;
extracting a feature vector of the image to be identified;
inputting the feature vector into a classification model to obtain the authenticity of the product to be recognized output by the classification model, wherein the classification model is obtained by training a neural network model based on a sample image set, the sample image set comprises a positive sample image and a negative sample image, the positive sample image is a genuine product image of the product to be recognized, and the negative sample image is a counterfeit product image of the product to be recognized;
the classification model is obtained by training the following steps:
step one, obtaining a sample image set, wherein the sample images in the sample image set have the same size, and the products to be identified have the same position in the sample images;
acquiring a plurality of image block training sets, wherein each image block training set comprises image blocks in the same area in each sample image and the authenticity of the product corresponding to each image block;
step three, aiming at each image block training set, training an initial classifier through the image block training set to obtain a classifier corresponding to the image block training set;
step four, inputting the feature vectors of the image blocks included in the image block training set into a classifier corresponding to the image block training set aiming at each image block training set, and determining the accuracy score of the classifier based on the classification result of each image block output by the classifier, wherein the classification result is used for representing the authenticity of a product to be identified in the image block;
step five, aiming at each image block training set, if the accuracy score of the classifier corresponding to the image block training set is greater than a preset score threshold, determining the classifier corresponding to the image block training set as a qualified classifier;
step six, if the number of the qualified classifiers reaches the preset number, taking the qualified classifiers with the preset number as the classifiers of the classification model; and if the number of the qualified classifiers does not reach the preset number, returning to the step two.
2. The method according to claim 1, characterized in that the size of the image to be identified is the same as the size of the sample images in the sample image set, and the position of the product to be identified in the image to be identified is the same as its position in the sample images; the extracting the feature vector of the image to be identified comprises:
extracting the feature vectors of the image blocks of the regions to be identified in the image to be identified, wherein the coordinates of the regions to be identified are the coordinates, in the sample images, of the regions of the image blocks used for training the classifiers included in the classification model.
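Claim 2 pins each region to be identified to the coordinates used during training. A small sketch of that cropping step (pure Python; the `(top, left, height, width)` coordinate convention is an assumption — the claim only requires that the coordinates match those recorded from the sample images):

```python
def crop_block(image, region):
    """Crop one image block. `image` is a list of pixel rows; `region` is
    (top, left, height, width) in the shared coordinate frame of the
    equally sized sample images and image to be identified."""
    top, left, height, width = region
    return [row[left:left + width] for row in image[top:top + height]]

def blocks_to_identify(image, training_regions):
    """One block per classifier, at exactly the region coordinates that the
    classifier's image block training set was cropped from."""
    return {rid: crop_block(image, reg) for rid, reg in training_regions.items()}
```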
3. The method according to claim 2, wherein the inputting the feature vector into a classification model to obtain the authenticity of the product to be recognized output by the classification model comprises:
inputting the feature vector of the image block of each region to be recognized in the image to be recognized into the classification model, so that each classifier included in the classification model recognizes the feature vector of the image block of its corresponding region to be recognized and obtains a classification result for that image block, and the classification model outputs the authenticity of the product to be recognized based on the classification results of the image blocks of the regions to be recognized;
and acquiring the authenticity of the product to be identified output by the classification model.
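Claim 3 leaves the aggregation rule open: each classifier scores its own region and the model combines the per-block results into one authenticity decision. A sketch assuming a simple majority vote (the voting rule and all names are illustrative, not mandated by the claim):

```python
def classify_product(block_features, classifiers):
    """`classifiers` is a list of (region_id, classifier) pairs as produced
    during training; `block_features` maps region_id to the feature vector
    of the image block of that region to be recognized."""
    votes = [clf(block_features[region_id]) for region_id, clf in classifiers]
    # 1 = genuine, 0 = counterfeit; ties go to genuine in this sketch
    return 1 if 2 * sum(votes) >= len(votes) else 0
```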
4. The method according to claim 2, wherein the extracting the feature vectors of the image blocks of each to-be-identified area in the to-be-identified image comprises:
for each region to be identified in the image to be identified, obtaining a gray-scale image by performing gray-scale conversion on the image block of the region to be identified;
and extracting a feature vector of the image block from the gray-scale image, wherein the feature vector comprises a plurality of elements, each element corresponds to a specified brightness, and the value of each element is the number of pixels with that specified brightness in the gray-scale image.
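The feature of claim 4 is a per-block brightness histogram: element i counts the pixels whose grey level is i. A minimal sketch (pure Python; the Rec. 601 luma weights and the 8-bit brightness range are assumptions — the claims only require a count per specified brightness):

```python
def to_gray(rgb_block):
    """Gray-scale conversion of an image block given as rows of (r, g, b)
    tuples, using the common Rec. 601 luma weights (an assumption)."""
    return [[min(255, round(0.299 * r + 0.587 * g + 0.114 * b))
             for (r, g, b) in row] for row in rgb_block]

def histogram_feature(gray_block, levels=256):
    """Feature vector: element i is the number of pixels of brightness i."""
    feature = [0] * levels
    for row in gray_block:
        for value in row:
            feature[value] += 1
    return feature
```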
5. An image recognition apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to acquire an image to be recognized, wherein the image to be recognized is an image of a product to be recognized;
an extraction module, configured to extract the feature vector of the image to be recognized acquired by the acquisition module;
a classification module, configured to input the feature vector extracted by the extraction module into a classification model to obtain the authenticity of the product to be recognized output by the classification model, wherein the classification model is obtained by training a neural network model based on a sample image set, the sample image set comprises positive sample images and negative sample images, a positive sample image is an image of a genuine product to be recognized, and a negative sample image is an image of a counterfeit of the product to be recognized;
the apparatus further comprises a training module to perform:
step one, obtaining a sample image set, wherein the sample images in the sample image set have the same size, and the product to be identified occupies the same position in each sample image;
step two, acquiring a plurality of image block training sets, wherein each image block training set comprises the image blocks located in the same region of each sample image and the authenticity of the product corresponding to each image block;
step three, for each image block training set, training an initial classifier with the image block training set to obtain a classifier corresponding to the image block training set;
step four, for each image block training set, inputting the feature vectors of the image blocks included in the image block training set into the classifier corresponding to the image block training set, and determining the accuracy score of the classifier based on the classification result output by the classifier for each image block, wherein the classification result represents the authenticity of the product to be identified in the image block;
step five, for each image block training set, if the accuracy score of the classifier corresponding to the image block training set is greater than a preset score threshold, determining the classifier corresponding to the image block training set to be a qualified classifier;
step six, if the number of qualified classifiers reaches a preset number, taking the preset number of qualified classifiers as the classifiers of the classification model; and if the number of qualified classifiers does not reach the preset number, returning to step two.
6. The apparatus of claim 5, wherein the size of the image to be identified is the same as the size of the sample images in the sample image set, and the position of the product to be identified in the image to be identified is the same as its position in the sample images; the extraction module is specifically configured to:
extract the feature vectors of the image blocks of the regions to be identified in the image to be identified, wherein the coordinates of the regions to be identified are the coordinates, in the sample images, of the regions of the image blocks used for training the classifiers included in the classification model.
7. The apparatus according to claim 6, wherein the classification module is specifically configured to:
input the feature vector of the image block of each region to be recognized in the image to be recognized into the classification model, so that each classifier included in the classification model recognizes the feature vector of the image block of its corresponding region to be recognized and obtains a classification result for that image block, and the classification model outputs the authenticity of the product to be recognized based on the classification results of the image blocks of the regions to be recognized;
and acquiring the authenticity of the product to be identified output by the classification model.
8. The apparatus according to claim 6, wherein the extraction module is specifically configured to:
for each region to be identified in the image to be identified, obtain a gray-scale image by performing gray-scale conversion on the image block of the region to be identified;
and extract a feature vector of the image block from the gray-scale image, wherein the feature vector comprises a plurality of elements, each element corresponds to a specified brightness, and the value of each element is the number of pixels with that specified brightness in the gray-scale image.
9. An electronic device, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any of claims 1 to 4 when executing a program stored in the memory.
10. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method steps of any one of claims 1 to 4.
CN202010368651.1A 2020-04-28 2020-04-28 Image identification method and device, electronic equipment and medium Active CN111582359B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010368651.1A CN111582359B (en) 2020-04-28 2020-04-28 Image identification method and device, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010368651.1A CN111582359B (en) 2020-04-28 2020-04-28 Image identification method and device, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN111582359A CN111582359A (en) 2020-08-25
CN111582359B true CN111582359B (en) 2023-04-07

Family

ID=72111932

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010368651.1A Active CN111582359B (en) 2020-04-28 2020-04-28 Image identification method and device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN111582359B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111598025B (en) * 2020-05-20 2022-10-14 腾讯科技(深圳)有限公司 Training method and device for image recognition model
CN112861979B (en) * 2021-02-20 2024-01-30 数贸科技(北京)有限公司 Trademark identification method, trademark identification device, computing equipment and computer storage medium
CN113076309B (en) * 2021-03-26 2023-05-09 四川中烟工业有限责任公司 System and method for predicting water addition amount of raw tobacco
CN113516486A (en) * 2021-04-07 2021-10-19 阿里巴巴新加坡控股有限公司 Image recognition method, device, equipment and storage medium
CN113920416A (en) * 2021-10-08 2022-01-11 深圳爱莫科技有限公司 Cigar identity identification method and system based on image recognition and storage medium
CN114494765B (en) * 2021-12-21 2023-08-18 北京瑞莱智慧科技有限公司 Method and device for identifying true and false smoke discrimination points, electronic equipment and storage medium

Citations (3)

Publication number Priority date Publication date Assignee Title
CN105825224A (en) * 2016-03-10 2016-08-03 浙江生辉照明有限公司 Obtaining method and apparatus of classifier family
CN106803090A (en) * 2016-12-05 2017-06-06 中国银联股份有限公司 A kind of image-recognizing method and device
CN110008987A (en) * 2019-02-20 2019-07-12 深圳大学 Test method, device, terminal and the storage medium of classifier robustness

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
US9530082B2 (en) * 2015-04-24 2016-12-27 Facebook, Inc. Objectionable content detector
KR101997479B1 (en) * 2015-10-20 2019-10-01 삼성전자주식회사 Detecting method and apparatus of biometrics region for user authentication
CN106951924B (en) * 2017-03-27 2020-01-07 东北石油大学 Seismic coherence body image fault automatic identification method and system based on AdaBoost algorithm
JP6992475B2 (en) * 2017-12-14 2022-01-13 オムロン株式会社 Information processing equipment, identification system, setting method and program
CN108288073A (en) * 2018-01-30 2018-07-17 北京小米移动软件有限公司 Picture authenticity identification method and device, computer readable storage medium
CN108520196B (en) * 2018-02-01 2021-08-31 平安科技(深圳)有限公司 Luxury discrimination method, electronic device, and storage medium

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN105825224A (en) * 2016-03-10 2016-08-03 浙江生辉照明有限公司 Obtaining method and apparatus of classifier family
CN106803090A (en) * 2016-12-05 2017-06-06 中国银联股份有限公司 A kind of image-recognizing method and device
CN110008987A (en) * 2019-02-20 2019-07-12 深圳大学 Test method, device, terminal and the storage medium of classifier robustness

Non-Patent Citations (2)

Title
Yu Su et al. Patch-Based Gabor Fisher Classifier for Face Recognition. 18th International Conference on Pattern Recognition (ICPR'06). 2006, 528-531. *
Miao Ronghui et al. Recognition of overlapped spinach leaves and weeds based on image patching and reconstruction. Transactions of the Chinese Society of Agricultural Engineering. 2020, Vol. 36, No. 04, 178-184. *

Also Published As

Publication number Publication date
CN111582359A (en) 2020-08-25

Similar Documents

Publication Publication Date Title
CN111582359B (en) Image identification method and device, electronic equipment and medium
CN110060237B (en) Fault detection method, device, equipment and system
Agrawal et al. Grape leaf disease detection and classification using multi-class support vector machine
CN105574550A (en) Vehicle identification method and device
TWI765442B (en) Method for defect level determination and computer readable storage medium thereof
CN113221881B (en) Multi-level smart phone screen defect detection method
CN108197636A (en) A kind of paddy detection and sorting technique based on depth multiple views feature
CN111415339B (en) Image defect detection method for complex texture industrial product
CN111161237A (en) Fruit and vegetable surface quality detection method, storage medium and sorting device thereof
CN110570442A (en) Contour detection method under complex background, terminal device and storage medium
CN110619619A (en) Defect detection method and device and electronic equipment
CN107563427A (en) The method and corresponding use that copyright for oil painting is identified
Triantoro et al. Image based water gauge reading developed with ANN Kohonen
CN116559111A (en) Sorghum variety identification method based on hyperspectral imaging technology
CN109376782B (en) Support vector machine cataract classification method and device based on eye image features
CN106997590A (en) A kind of image procossing and detecting system based on detection product performance
CN206897873U (en) A kind of image procossing and detecting system based on detection product performance
CN113920434A (en) Image reproduction detection method, device and medium based on target
CN111523605B (en) Image identification method and device, electronic equipment and medium
CN107024480A (en) A kind of stereoscopic image acquisition device
CN113705587A (en) Image quality scoring method, device, storage medium and electronic equipment
CN113870210A (en) Image quality evaluation method, device, equipment and storage medium
CN207181307U (en) A kind of stereoscopic image acquisition device
US10902584B2 (en) Detection of surface irregularities in coins
Swargiary et al. Classification of basmati rice grains using image processing techniques

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant