CN117173545B - License original identification method based on computer graphics - Google Patents
License original identification method based on computer graphics Download PDFInfo
- Publication number
- CN117173545B CN117173545B CN202311454459.4A CN202311454459A CN117173545B CN 117173545 B CN117173545 B CN 117173545B CN 202311454459 A CN202311454459 A CN 202311454459A CN 117173545 B CN117173545 B CN 117173545B
- Authority
- CN
- China
- Prior art keywords
- image
- edge
- strong
- distance
- acquired
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 29
- 238000003062 neural network model Methods 0.000 claims abstract description 33
- 238000003708 edge detection Methods 0.000 claims abstract description 11
- 238000012549 training Methods 0.000 claims abstract description 10
- 239000011159 matrix material Substances 0.000 claims description 26
- 239000013598 vector Substances 0.000 claims description 15
- 238000004364 calculation method Methods 0.000 claims description 14
- 238000011156 evaluation Methods 0.000 claims description 7
- 238000010606 normalization Methods 0.000 claims description 4
- 238000012545 processing Methods 0.000 abstract description 4
- 230000000694 effects Effects 0.000 abstract description 2
- 238000013473 artificial intelligence Methods 0.000 description 3
- 238000010586 diagram Methods 0.000 description 2
- 238000011161 development Methods 0.000 description 1
- 239000000284 extract Substances 0.000 description 1
- 238000011478 gradient descent method Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
- 238000012795 verification Methods 0.000 description 1
Abstract
The application relates to the field of image processing, in particular to a license original identification method based on computer graphics, which comprises the following steps: different labels are set for the license image and the non-license image in the history acquisition image, and denoising processing is carried out on the history acquisition image, so that a license data set is generated; acquiring an acquired image edge of the denoised historical acquired image according to an edge detection algorithm, wherein the acquired image edge comprises an acquired image strong edge and an acquired image weak edge; calculating the similarity between the image edge and the preset template image edge, and setting a loss function of a preset neural network model; training a neural network model by using the license data set and evaluating to obtain an optimal model; and generating a recognition result in response to the image to be recognized being acquired. According to the method and the device, whether the input image is the license original or not is recognized through the trained neural network model, and the effect of improving recognition accuracy is achieved.
Description
Technical Field
The application relates to the field of image processing, in particular to a license original identification method based on computer graphics.
Background
Along with the rapid development of artificial intelligence technology, more and more fields apply artificial intelligence technology, such as identifying an original license by using an artificial intelligence method, in daily life, a scene that a user needs to shoot the license and upload authentication often exists, when a service end receives a license picture uploaded by the user, the service end needs to judge whether the uploaded picture is the original license, a shooting picture of a non-original cannot handle the service through verification, and the service is handled in real time, so that whether the license is the original license needs to be rapidly identified.
In the related art, patent document CN104573647B discloses a method and a device for identifying an illegal identity document, wherein the method obtains a deviation value of each pixel point by means of mean square error of RGB (Red, green, blue) components of each pixel point of a photo of a person on the identity document to be identified, and judges whether the original is based on the magnitude of the deviation value.
The method of the related art carries out identification of the license original through a threshold value setting method, has stronger subjectivity and lower identification accuracy.
Disclosure of Invention
In order to identify whether an input image is a license original through a trained neural network model and improve the identification accuracy, the application provides a license original identification method based on computer graphics, which adopts the following technical scheme:
a license original identification method based on computer graphics comprises the following steps:
setting a first label for the license image in the history acquisition image, setting a second label for the non-license image in the history image, denoising the history acquisition image, and generating a license data set; acquiring an acquired image edge of the denoised historical acquired image according to an edge detection algorithm, wherein the acquired image edge comprises an acquired image strong edge and an acquired image weak edge; calculating the similarity between an image edge and a preset template image edge, and setting a loss function of a preset neural network model, wherein the template image edge comprises a template image strong edge and a template image weak edge; training a neural network model by using the license data set and evaluating to obtain an optimal model; and generating a recognition result in response to the image to be recognized being acquired.
Optionally, according to an edge detection algorithm, obtaining the license image edge of the denoised historical acquired image includes the steps of: converting the denoised historical acquisition image into a gray image; performing edge detection on the gray level image to obtain a strong edge acquisition image, a weak edge acquisition image, a strong edge of the acquisition image and a weak edge of the acquisition image; calculating a hessian matrix of each edge pixel point in the strong edge and the weak edge of the acquired image; calculating the eigenvalue and eigenvector of the hessian matrix to obtain the maximum eigenvalue sequence of the strong edge of the acquired image; calculating the distance between the strong edge of the acquired image and the strong edge of the template image to obtain a first distance; and calculating the distance between the weak edge of the acquired image and the weak edge of the template image to obtain a second distance.
Optionally, the calculating of the first distance includes the steps of: according to the maximum characteristic value sequence of the strong edges of the acquired image and the maximum characteristic value sequence of the preset strong edges of the template image, the distance between the strong edges of the acquired image and the strong edges of the template image is calculated, and the distance calculation formula is as follows:wherein->Representing the distance between the ith pixel point of the acquired image edge and the jth pixel point of the template image edge,/for the image>Heisen matrix maximum eigenvalue representing the ith pixel point of the sequence of strong edge pixel values of the acquired image,/>Maximum eigenvalue of hessian matrix representing jth pixel point of strong edge pixel value sequence of template image,/->Feature vector representing the largest feature value of the hessian matrix for the ith pixel point of the sequence of pixel values of the strong edge of the acquired image,/>Maximum eigenvalue pair of hessian matrix representing jth pixel point of strong edge pixel value sequence of template imageA corresponding feature vector; selecting each step of distance by using a dynamic normalization method, wherein the selection formula of each step of distance is as follows: />Wherein d is the pixel point distance, takingMinimum value->For collecting the distance between the (i+1) th pixel point of the strong edge of the image and the (j) th pixel point of the strong edge of the template image,/for the (i+1) th pixel point of the strong edge of the image>For collecting the distance between the ith pixel point of the strong edge of the image and the (j+1) th pixel point of the strong edge of the template image,/for the image, the image is obtained by the method>The method comprises the steps of acquiring the distance between the (i+1) th pixel point of the strong image edge and the (j+1) th pixel point of the strong template image edge; and adding the selected distances to obtain a first distance between the strong edge of the acquired image and the strong edge of the template image.
Optionally, calculating the similarity between the collected image edge and the preset template image edge, and setting a loss function of the preset neural network model, including the steps of: counting the number of pixel points with pixel values larger than zero in the strong edge acquisition image to obtain the number of the strong edges of the acquisition image; counting the number of pixel points with pixel values larger than zero in the weak edge acquisition image to obtain the number of the weak edges of the acquisition image; the similarity between the edge of the acquired image and the edge of the template image is calculated, and the calculation formula is as follows:wherein->For the similarity, ∈>For acquiring the number of strong edges of an image, < > the number of strong edges of an image>For acquiring the number of weak edges of an image, +.>For template image strong edge number, +.>For the number of weak edges of the template image, < > for>For acquiring the distance between the strong edge of the image and the strong edge of the template image, < >>The distance between the weak edge of the acquired image and the weak edge of the template image is; the loss function is set, and the calculation formula is as follows:,/>as a loss function, y' represents the probability that the neural network model predicts a certain class, +.>Tag representing the ith image in the license data set,/->A penalty factor representing the i-th sample classification error.
Optionally, training and evaluating the neural network model by using the license data set to obtain an optimal model, wherein an evaluation index calculation formula of the neural network model is as follows:wherein P is the precision of the neural network model, R is the recall ratio of the neural network model, and F is the evaluation index of the neural network model.
The application has the following technical effects:
1. the method comprises the steps of acquiring characteristic values and characteristic vectors of the image strong edge and the template image strong edge hessian matrix, calculating a first distance by using a dynamic normalization method, similarly calculating a second distance between the acquired image weak edge and the template image weak edge hessian matrix, calculating the similarity between the acquired image and the template image according to the first distance and the second distance, constructing a loss function of a neural network model according to the similarity, training the neural network model, and identifying a newly input picture by using the trained model, so that the aim of identifying whether the input image is an original license through the trained neural network model is fulfilled, and the identification accuracy is improved.
2. Calculating the hessian matrix of the edge in the strong edge graph of the acquired image to obtain the hessian matrix of each edge pixel point, and calculating the characteristic value of the hessian matrix corresponding to each edge pixel point and the characteristic vector corresponding to the characteristic value, wherein the maximum characteristic value represents the curvature intensity of the edge point, and the characteristic vector corresponding to the maximum characteristic value represents the curvature direction. Because the curvature and curvature direction of the image edge indicate the characteristics of the image edge, the distance of the image edge characteristics can be used for reflecting the distance of the two image edges, and when the distance of the two image edges is calculated, the distance of the two image edges is calculated by using a dynamic regular method because the number of pixels of the two image edges is different.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present application will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. Several embodiments of the present application are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar or corresponding parts and in which:
fig. 1 is a flowchart of a method for identifying a license original based on computer graphics in the embodiment of the application in steps S1-S5.
Fig. 2 is a flowchart of a method for identifying a license original based on computer graphics in the embodiment of the application, which includes steps S20-S25.
Fig. 3 is a flowchart of a method for identifying a license original based on computer graphics in steps S30-S33 in an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all, of the embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
It should be understood that when the terms "first," "second," and the like are used in the claims, specification, and drawings of this application, they are used merely for distinguishing between different objects and not for describing a particular sequential order. The terms "comprises" and "comprising," when used in the specification and claims of this application, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The embodiment of the application discloses a license original identification method based on computer graphics, which comprises the following specific scenes: identifying the original of the license by image processing, referring to fig. 1, includes the steps of:
s1: setting a first label for the license image in the history acquisition image, setting a second label for the non-license image in the history acquisition image, denoising the history acquisition image, and generating a license data set.
Specifically, a history acquisition image to be identified is acquired. The history collection image comprises a license image and a non-license image, the license image is an original image of the license, and other images are non-license images.
S2: and obtaining the collected image edge of the denoised historical collected image according to an edge detection algorithm, wherein the collected image edge comprises a collected image strong edge and a collected image weak edge. Referring to fig. 2, step S2 includes steps S20 to S25:
s20: and converting the denoised historical acquisition image into a gray image.
S21: and carrying out edge detection on the gray level image to obtain a strong edge acquisition image, a weak edge acquisition image, a strong edge of the acquisition image and a weak edge of the acquisition image.
And extracting image edge characteristics of the gray level image by using a Canny edge detection algorithm, wherein the Canny edge detection algorithm extracts strong edges and weak edges of the gray level image, and in order to ensure better edge continuity, if one point in the eight adjacent areas of the weak edges is a strong edge, the weak edges are considered to be the strong edges.
Setting the pixel value of the pixel point of the strong edge as 1, setting the pixel values of the other pixel points as 0, obtaining a strong edge 0/1 diagram of the gray level image, multiplying the strong edge 0/1 diagram of the gray level image by the gray level image to obtain a strong edge acquisition image, further obtaining the strong edge of the acquisition image, wherein the strong edge of the acquisition image refers to obvious edge characteristics or boundaries in the strong edge acquisition image, and the weak edge acquisition image and the weak edge of the acquisition image can be obtained in the same way.
S22: and calculating the hessian matrix of each edge pixel point in the strong edge and the weak edge of the acquired image.
S23: and calculating the eigenvalues and eigenvectors of the hessian matrix to obtain the maximum eigenvalue sequence of the strong edge of the acquired image.
S24: and calculating the distance between the strong edge of the acquired image and the strong edge of the template image to obtain a first distance.
And calculating a hessian matrix of the strong edge of the acquired image to obtain a hessian matrix of each edge pixel point in the acquired strong edge image, and calculating a characteristic value of each edge pixel point hessian matrix and a characteristic vector corresponding to the maximum characteristic value, wherein the maximum characteristic value represents the curvature intensity of the edge pixel point, and the characteristic vector corresponding to the maximum characteristic value represents the curvature direction. Since the curvature and curvature direction of the edge indicate the characteristics of the edge, the distance between the edge characteristics can reflect the distance between the two edges, so when the distance between the strong edge of the acquired image and the strong edge of the template image is calculated, the distance between the strong edge of the acquired image and the strong edge of the template image cannot be calculated by directly using Euclidean distance and the like because the number of pixels of the strong edge of the acquired image and the strong edge of the template image is different, and therefore, the distance between the strong edge of the acquired image and the strong edge of the template image is calculated by using a dynamic regular method. Referring to fig. 3, the calculation process includes steps S240 to S242, specifically as follows:
s240: and calculating the distance between the strong edge of the acquired image and the strong edge of the template image according to the maximum characteristic value sequence of the strong edge of the acquired image and the maximum characteristic value sequence of the preset strong edge of the template image.
For example: the maximum eigenvalue sequence corresponding to the strong edge of the acquired image is (A, B, C, D, E), and the eigenvector corresponding to (A, B, C, D, E) is (a, B, C, D, E). The maximum eigenvalue sequence corresponding to the strong edge of the template image is (U, V, W, X, Y, Z), and the eigenvector corresponding to (U, V, W, X, Y, Z) is (U, V, W, X, Y, Z), referring to table 1:
table 1:
in the tableRepresenting the distance between the ith pixel point of the acquired image edge and the jth pixel point of the template image edge, and finding out the distance from +.>The shortest distance to EZ is the distance between the strong edge of the acquired image and the strong edge of the template image.
Each distance calculation formula is:
wherein,representing the distance between the ith pixel point of the acquired image edge and the jth pixel point of the template image edge.And representing the maximum eigenvalue of the hessian matrix of the ith pixel point of the pixel value sequence of the strong edge of the acquired image. />And the maximum eigenvalue of the hessian matrix of the j-th pixel point of the strong edge pixel value sequence of the template image is represented. />And the feature vector representing the maximum feature value of the hessian matrix of the ith pixel point of the acquired image strong edge pixel value sequence. />And representing the feature vector corresponding to the maximum feature value of the hessian matrix of the jth pixel point of the template image strong edge pixel value sequence.
Is->Is->The cosine value of the included angle of the directions of the two feature vectors is multiplied by the cosine value of the included angle of the directions of the two feature vectors to represent the projection value of the feature value on the feature vectors, and the calculated distance between the two edges, namely the distance between the ith pixel point of the collected image edge and the jth pixel point of the template image edge, not only considers the distance of the feature value, but also considers the distance of the directions of the feature vectors, and ensures that the more similar the distance between the strong edge of the collected image and the strong edge of the template image is, the closer. Similarly, the distance between the weak edge of the acquired image and the weak edge of the template image can be obtainedAnd (5) separating.
S241: the distance per step is selected using a dynamic normalization method.
The selection formula of each step of distance is as follows:
wherein d is the pixel point distance, takingMinimum value->For collecting the distance between the (i+1) th pixel point of the strong edge of the image and the (j) th pixel point of the strong edge of the template image,/for the (i+1) th pixel point of the strong edge of the image>For collecting the distance between the ith pixel point of the strong edge of the image and the (j+1) th pixel point of the strong edge of the template image,/for the image, the image is obtained by the method>The method is used for acquiring the distance between the (i+1) th pixel point of the strong edge of the image and the (j+1) th pixel point of the strong edge of the template image.
S242: and adding the selected distances to obtain a first distance between the strong edge of the acquired image and the strong edge of the template image.
S25: and calculating the distance between the weak edge of the acquired image and the weak edge of the template image to obtain a second distance.
The calculation method of the second distance is the same as that of the first distance, the collected image strong edge is replaced by the collected image weak edge, the template image strong edge is replaced by the template image weak edge, and the distance between the collected image weak edge and the template image weak edge is obtained without repeated description.
S3: and calculating the similarity between the image edge and a preset template image edge, setting a loss function of a preset neural network model, wherein the template image edge comprises a template image strong edge and a template image weak edge.
S30: counting the number of pixel points with pixel values larger than zero in the strong edge acquisition image to obtain the number of the strong edges of the acquisition image;
s31: and counting the number of pixel points with pixel values larger than zero in the weak edge acquisition image to obtain the number of the weak edges of the acquisition image.
S32: the comprehensive distance between the edge of the acquired image and the edge of the template image, namely the similarity, is calculated, and the calculation formula is as follows:
wherein,for similarity, ->For acquiring the number of strong edges of an image, < > the number of strong edges of an image>For acquiring the number of weak edges of an image, +.>For template image strong edge number, +.>For the number of weak edges of the template image, < > for>For acquiring the distance between the strong edge of the image and the strong edge of the template image, < >>The distance between the weak edge of the acquired image and the weak edge of the template image is;
s33: setting a loss function, wherein the calculation formula of the loss function is as follows:
as a loss function, y' represents the probability that the neural network model predicts a certain class, +.>Tag representing the ith image in the license data set,/->A penalty factor representing the i-th sample classification error. />The smaller the loss function the larger the penalty representing an image classification error that will have less similarity of the acquired image edge to the template image edge.
S4: and training the neural network model by using the license data set and evaluating to obtain an optimal model.
The license dataset was read as per 4:1 randomly dividing a training set and a testing set according to the proportion, optimizing a neural network model by adopting the loss function and adopting a gradient descent method, and stopping training and evaluating the model when the training frequency of the model reaches the set maximum training frequency or the loss is smaller than a preset threshold value.
The evaluation index calculation formula of the neural network model is as follows:
wherein P is the precision of the neural network model, R is the recall ratio of the neural network model, and F is the evaluation index of the neural network model. The precision ratio measures how many of the models are true positive examples in the samples predicted as positive examples, the recall ratio measures the proportion of the models which are correctly predicted as positive examples in all actual positive examples, and the optimal model is selected according to the model evaluation index F.
S5: and generating a recognition result in response to the image to be recognized being acquired.
And shooting an image of the object to be detected, inputting the image into an optimal model, and outputting the identification result of whether the photo is a license original or not.
While various embodiments of the present application have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Many modifications, changes, and substitutions will now occur to those skilled in the art without departing from the spirit and spirit of the application. It should be understood that various alternatives to the embodiments of the present application described herein may be employed in practicing the application.
The foregoing are all preferred embodiments of the present application, and are not intended to limit the scope of the present application in any way, therefore: all equivalent changes in structure, shape and principle of this application should be covered in the protection scope of this application.
Claims (2)
1. The license original identification method based on computer graphics is characterized by comprising the following steps:
setting a first label for the license image in the history acquisition image, setting a second label for the non-license image in the history image, denoising the history acquisition image, and generating a license data set;
acquiring an acquired image edge of a denoised historical acquired image according to an edge detection algorithm, wherein the acquired image edge comprises an acquired image strong edge and an acquired image weak edge;
calculating the similarity between an image edge and a preset template image edge, and setting a loss function of a preset neural network model, wherein the template image edge comprises a template image strong edge and a template image weak edge;
training a neural network model by using the license data set and evaluating to obtain an optimal model;
generating a recognition result in response to the image to be recognized being acquired;
according to an edge detection algorithm, obtaining the license image edge of the denoised historical acquired image, comprising the following steps:
converting the denoised historical acquisition image into a gray image;
performing edge detection on the gray level image to obtain a strong edge acquisition image, a weak edge acquisition image, a strong edge of the acquisition image and a weak edge of the acquisition image;
calculating a hessian matrix of each edge pixel point in the strong edge and the weak edge of the acquired image;
calculating the eigenvalue and eigenvector of the hessian matrix to obtain the maximum eigenvalue sequence of the strong edge of the acquired image;
calculating the distance between the strong edge of the acquired image and the strong edge of the template image to obtain a first distance;
calculating the distance between the weak edge of the acquired image and the weak edge of the template image to obtain a second distance;
the calculating of the first distance comprises the following steps:
according to the maximum characteristic value sequence of the strong edges of the acquired image and the maximum characteristic value sequence of the preset strong edges of the template image, the distance between the strong edges of the acquired image and the strong edges of the template image is calculated, and the distance calculation formula is as follows:
wherein,representing the distance between the ith pixel point of the acquired image edge and the jth pixel point of the template image edge,
representing the maximum eigenvalue of the hessian matrix of the ith pixel point of the strong edge pixel value sequence of the acquired image,
representing the maximum eigenvalue of the hessian matrix of the jth pixel point of the strong edge pixel value sequence of the template image,
a feature vector representing the maximum feature value of the hessian matrix for the ith pixel point of the sequence of pixel values of the strong edge of the acquired image,
representing a feature vector corresponding to the maximum feature value of the hessian matrix of the jth pixel point of the template image strong edge pixel value sequence;
selecting each step of distance by using a dynamic normalization method, wherein the selection formula of each step of distance is as follows:
wherein d is the pixel point distance, takingMinimum value->For collecting the distance between the (i+1) th pixel point of the strong edge of the image and the (j) th pixel point of the strong edge of the template image,/for the (i+1) th pixel point of the strong edge of the image>For collecting the distance between the ith pixel point of the strong edge of the image and the (j+1) th pixel point of the strong edge of the template image,/for the image, the image is obtained by the method>The method comprises the steps of acquiring the distance between the (i+1) th pixel point of the strong image edge and the (j+1) th pixel point of the strong template image edge;
adding the selected distances to obtain a first distance between the strong edge of the acquired image and the strong edge of the template image;
calculating the similarity between the edge of the acquired image and the edge of the preset template image, and setting a loss function of a preset neural network model, wherein the method comprises the following steps:
counting the number of pixel points with pixel values larger than zero in the strong edge acquisition image to obtain the number of the strong edges of the acquisition image;
counting the number of pixel points with pixel values larger than zero in the weak edge acquisition image to obtain the number of the weak edges of the acquisition image;
the similarity between the edge of the acquired image and the edge of the template image is calculated, and the calculation formula is as follows:
wherein,for the similarity, ∈>For acquiring the number of strong edges of an image, < > the number of strong edges of an image>For acquiring the number of weak edges of an image, +.>For template image strong edge number, +.>For the number of weak edges of the template image, < > for>For acquiring the distance between the strong edge of the image and the strong edge of the template image, < >>The distance between the weak edge of the acquired image and the weak edge of the template image is;
the loss function is set, and the calculation formula is as follows:
as a loss function, y' represents the probability that the neural network model predicts a certain class, +.>Tag representing the ith image in the license data set,/->Penalty factor indicating i-th sample classification error, < ->The smaller the loss function the larger the penalty representing an image classification error that will have less similarity of the acquired image edge to the template image edge.
2. The license original recognition method based on computer graphics as claimed in claim 1, wherein the neural network model is trained and evaluated by using the license data set to obtain an optimal model, and an evaluation index calculation formula of the neural network model is:
wherein P is the precision of the neural network model, R is the recall ratio of the neural network model, and F is the evaluation index of the neural network model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311454459.4A CN117173545B (en) | 2023-11-03 | 2023-11-03 | License original identification method based on computer graphics |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311454459.4A CN117173545B (en) | 2023-11-03 | 2023-11-03 | License original identification method based on computer graphics |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117173545A CN117173545A (en) | 2023-12-05 |
CN117173545B true CN117173545B (en) | 2024-01-30 |
Family
ID=88947324
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311454459.4A Active CN117173545B (en) | 2023-11-03 | 2023-11-03 | License original identification method based on computer graphics |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117173545B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2015026117A (en) * | 2013-07-24 | 2015-02-05 | キヤノン株式会社 | Image processing method, image processing apparatus, program, and recording medium |
US10089383B1 (en) * | 2017-09-25 | 2018-10-02 | Maana, Inc. | Machine-assisted exemplar based similarity discovery |
CN109165674A (en) * | 2018-07-19 | 2019-01-08 | 南京富士通南大软件技术有限公司 | A kind of certificate photo classification method based on multi-tag depth convolutional network |
CN111191539A (en) * | 2019-12-20 | 2020-05-22 | 江苏常熟农村商业银行股份有限公司 | Certificate authenticity verification method and device, computer equipment and storage medium |
CN111766244A (en) * | 2019-04-01 | 2020-10-13 | 陈膺任 | Anti-counterfeiting element verification method and system |
WO2021189856A1 (en) * | 2020-09-24 | 2021-09-30 | 平安科技(深圳)有限公司 | Certificate check method and apparatus, and electronic device and medium |
CN113837026A (en) * | 2021-09-03 | 2021-12-24 | 支付宝(杭州)信息技术有限公司 | Method and device for detecting authenticity of certificate |
CN116363122A (en) * | 2022-12-05 | 2023-06-30 | 南通海驹钢结构有限公司 | Steel weld crack detection method and system based on artificial intelligence |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10346723B2 (en) * | 2016-11-01 | 2019-07-09 | Snap Inc. | Neural network for object detection in images |
US11449960B2 (en) * | 2019-03-27 | 2022-09-20 | Uber Technologies, Inc. | Neural network based identification document processing system |
-
2023
- 2023-11-03 CN CN202311454459.4A patent/CN117173545B/en active Active
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2015026117A (en) * | 2013-07-24 | 2015-02-05 | キヤノン株式会社 | Image processing method, image processing apparatus, program, and recording medium |
US10089383B1 (en) * | 2017-09-25 | 2018-10-02 | Maana, Inc. | Machine-assisted exemplar based similarity discovery |
CN109165674A (en) * | 2018-07-19 | 2019-01-08 | 南京富士通南大软件技术有限公司 | A kind of certificate photo classification method based on multi-tag depth convolutional network |
CN111766244A (en) * | 2019-04-01 | 2020-10-13 | 陈膺任 | Anti-counterfeiting element verification method and system |
CN111191539A (en) * | 2019-12-20 | 2020-05-22 | 江苏常熟农村商业银行股份有限公司 | Certificate authenticity verification method and device, computer equipment and storage medium |
WO2021189856A1 (en) * | 2020-09-24 | 2021-09-30 | 平安科技(深圳)有限公司 | Certificate check method and apparatus, and electronic device and medium |
CN113837026A (en) * | 2021-09-03 | 2021-12-24 | 支付宝(杭州)信息技术有限公司 | Method and device for detecting authenticity of certificate |
CN116363122A (en) * | 2022-12-05 | 2023-06-30 | 南通海驹钢结构有限公司 | Steel weld crack detection method and system based on artificial intelligence |
Non-Patent Citations (2)
Title |
---|
A Proposed Framework for Identity Verification in Passport Management Using Model Scaling and Semantic Similarity;Rishita Khurana 等;《http://iieta.org/journals/isi》;全文 * |
基于 DCNN 的证件照人脸验证及应用研究;李硕 等;《计算机与现代化》;全文 * |
Also Published As
Publication number | Publication date |
---|---|
CN117173545A (en) | 2023-12-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110348319B (en) | Face anti-counterfeiting method based on face depth information and edge image fusion | |
CN109299720B (en) | Target identification method based on contour segment spatial relationship | |
EP1233374B1 (en) | Apparatus and method for extracting objects based on feature matching between segmented regions in images | |
CN110546651B (en) | Method, system and computer readable medium for identifying objects | |
CN111709313B (en) | Pedestrian re-identification method based on local and channel combination characteristics | |
KR101237469B1 (en) | Method and apparatus for automatically detecting pornographic image, and computer readable medium thereof | |
AU2011207120B2 (en) | Identifying matching images | |
Yuan et al. | Learning to count buildings in diverse aerial scenes | |
CN102236675A (en) | Method for processing matched pairs of characteristic points of images, image retrieval method and image retrieval equipment | |
CN110197185B (en) | Method and system for monitoring space under bridge based on scale invariant feature transform algorithm | |
CN102346854A (en) | Method and device for carrying out detection on foreground objects | |
CN109902576B (en) | Training method and application of head and shoulder image classifier | |
CN111738349A (en) | Detection effect evaluation method and device of target detection algorithm, storage medium and equipment | |
CN111259756A (en) | Pedestrian re-identification method based on local high-frequency features and mixed metric learning | |
CN112288758B (en) | Infrared and visible light image registration method for power equipment | |
CN116703895B (en) | Small sample 3D visual detection method and system based on generation countermeasure network | |
Jubair et al. | Scale invariant feature transform based method for objects matching | |
Kovacs et al. | Orientation based building outline extraction in aerial images | |
CN109544614B (en) | Method for identifying matched image pair based on image low-frequency information similarity | |
CN117173545B (en) | License original identification method based on computer graphics | |
CN110738229A (en) | fine-grained image classification method and device and electronic equipment | |
CN114627424A (en) | Gait recognition method and system based on visual angle transformation | |
Wu et al. | An accurate feature point matching algorithm for automatic remote sensing image registration | |
CN108154107B (en) | Method for determining scene category to which remote sensing image belongs | |
CN115359346B (en) | Small micro-space identification method and device based on street view picture and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |