CN110647805B - Reticulate pattern image recognition method and device and terminal equipment - Google Patents

Reticulate pattern image recognition method and device and terminal equipment

Info

Publication number
CN110647805B
Authority
CN
China
Prior art keywords
image, reticulate pattern, reticulate, value, processed
Prior art date
Legal status
Active
Application number
CN201910736543.2A
Other languages
Chinese (zh)
Other versions
CN110647805A (en)
Inventor
徐玲玲
Current Assignee
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201910736543.2A priority Critical patent/CN110647805B/en
Priority to PCT/CN2019/118652 priority patent/WO2021027163A1/en
Publication of CN110647805A publication Critical patent/CN110647805A/en
Application granted granted Critical
Publication of CN110647805B publication Critical patent/CN110647805B/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]

Abstract

The invention provides a reticulate pattern image recognition method, a reticulate pattern image recognition device and terminal equipment, applicable to the technical field of data processing. The method comprises the following steps: inputting an image to be processed into a pre-trained reticulate pattern removal model to obtain a reticulate-pattern-free image to be processed; performing gray value difference calculation on the image to be processed and the reticulate-pattern-free image to be processed, and performing reticulate pattern reconstruction based on the calculated gray value differences and a preset difference threshold to obtain the corresponding reticulate pattern; counting the number of pixel points contained in the reticulate pattern, and calculating the ratio of that number to the total number of pixel points in the image to be processed; performing pattern matching on the reticulate pattern based on a preset reticulate pattern library; and if the number of reticulate pattern pixels is larger than a preset number threshold, the ratio is larger than a preset ratio threshold, and the pattern matching succeeds, determining that the image to be processed is a reticulate pattern image. The embodiment of the invention ensures the accuracy and reliability of reticulate pattern image recognition.

Description

Reticulate pattern image recognition method and device and terminal equipment
Technical Field
The invention belongs to the technical field of data processing, and particularly relates to a reticulate pattern image recognition method and terminal equipment.
Background
Credential photos filed with the public security system carry reticulate patterns, and these patterns greatly reduce the accuracy of processing such as face recognition. A credential photo with a reticulate pattern can therefore only be used after the pattern is removed, but before removal it must first be determined whether the photo carries a reticulate pattern at all. Although some reticulate pattern image recognition methods exist in the prior art, their recognition accuracy is not ideal, so a method for accurately recognizing whether an image carries a reticulate pattern is needed.
Disclosure of Invention
In view of the above, the embodiment of the invention provides a reticulate pattern image recognition method and terminal equipment, so as to solve the problem of low reticulate pattern image recognition accuracy in the prior art.
A first aspect of an embodiment of the present invention provides a reticulate pattern image recognition method, including:
inputting an image to be processed into a pre-trained reticulate pattern removal model to obtain a reticulate-pattern-free image to be processed, wherein the reticulate pattern removal model is a model trained in advance on reticulate pattern image samples and reticulate-pattern-free image samples and used for removing reticulate patterns from an image;
performing gray value difference calculation on the image to be processed and the reticulate-pattern-free image to be processed, and performing reticulate pattern reconstruction based on the calculated gray value differences and a preset difference threshold to obtain the corresponding reticulate pattern;
counting the number of pixel points contained in the reticulate pattern, and calculating the ratio of that number to the total number of pixel points in the image to be processed; performing pattern matching on the reticulate pattern based on a preset reticulate pattern library;
and if the number of reticulate pattern pixels is larger than a preset number threshold, the ratio is larger than a preset ratio threshold, and the pattern matching succeeds, determining that the image to be processed is a reticulate pattern image.
A second aspect of an embodiment of the present invention provides a reticulate pattern image recognition apparatus, including:
a reticulate pattern removal module, configured to input an image to be processed into a pre-trained reticulate pattern removal model to obtain a reticulate-pattern-free image to be processed, wherein the reticulate pattern removal model is a model trained in advance on reticulate pattern image samples and reticulate-pattern-free image samples and used for removing reticulate patterns from an image;
a reticulate pattern reconstruction module, configured to perform gray value difference calculation on the image to be processed and the reticulate-pattern-free image to be processed, and perform reticulate pattern reconstruction based on the calculated gray value differences and a preset difference threshold to obtain the corresponding reticulate pattern;
a feature processing module, configured to count the number of pixel points contained in the reticulate pattern, calculate the ratio of that number to the total number of pixel points in the image to be processed, and perform pattern matching on the reticulate pattern based on a preset reticulate pattern library;
and a reticulate pattern judging module, configured to determine that the image to be processed is a reticulate pattern image if the number of reticulate pattern pixels is larger than a preset number threshold, the ratio is larger than a preset ratio threshold, and the pattern matching succeeds.
A third aspect of the embodiments of the present invention provides a terminal device comprising a memory and a processor, the memory having stored thereon a computer program executable on the processor, wherein the processor implements the steps of the reticulate pattern image recognition method described above when executing the computer program.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the reticulate pattern image recognition method described above.
Compared with the prior art, the embodiments of the present invention have the following beneficial effects: the reticulate pattern is removed from the image to be processed by a pre-trained reticulate pattern removal model to obtain a reticulate-pattern-free image to be processed, and the gray value differences between the images before and after removal are compared, so that the part that differs between the two images is determined and drawn as the corresponding reticulate pattern (that is, the differing part is assumed to be the reticulate pattern). In practice, credential photo reticulate patterns have the following characteristics: 1. the size of a credential photo is relatively fixed, so the number of pixels occupied by the reticulate pattern is relatively stable; 2. in credential photos, the reticulate pattern typically covers most or all of the image area, so the proportion of pixel points occupied by the reticulate pattern is relatively stable and high; 3. the variety of reticulate patterns applied to credential photos is limited and known. Based on these practical characteristics, the embodiments of the present invention further check the number of pixels contained in the drawn reticulate pattern and the proportion of that number to the total number of pixels in the image to be processed, and match the drawn pattern against a pattern library, thereby checking the reticulate pattern across multiple dimensions. When all three checks are satisfied, the part that differs between the images before and after removal is indeed reticulate pattern content, so the image to be processed can be determined to be a reticulate pattern image, which ensures the accuracy and reliability of reticulate pattern image recognition.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flowchart of an implementation of a reticulate pattern image recognition method according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of an implementation of a reticulate pattern image recognition method according to a second embodiment of the present invention;
fig. 3 is a schematic flowchart of an implementation of a reticulate pattern image recognition method according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a reticulate pattern image recognition apparatus according to a fourth embodiment of the present invention;
fig. 5 is a schematic diagram of a terminal device according to a fifth embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to illustrate the technical scheme of the invention, the following description is made by specific examples.
Because an image carrying a reticulate pattern greatly reduces the accuracy of processing such as face recognition, a method for recognizing whether an image carries a reticulate pattern is needed, providing a foundation for reticulate pattern removal, credential face recognition and the like.
To realize reticulate pattern recognition, in the embodiments of the present invention the reticulate pattern is removed from the image to be processed by a pre-trained reticulate pattern removal model to obtain a reticulate-pattern-free image to be processed, and the gray value differences between the images before and after removal are compared, so that the part that differs between the two images is determined and drawn as the corresponding reticulate pattern (that is, the differing part is assumed to be the reticulate pattern). In practice, credential photos have the following characteristics: 1. the size of a credential photo is relatively fixed, so the number of pixels occupied by the reticulate pattern is relatively stable; 2. in credential photos, the reticulate pattern typically covers most or all of the image area, so the proportion of pixel points occupied by the reticulate pattern is relatively stable and high; 3. the variety of reticulate patterns applied to credential photos is limited and known. Based on these practical characteristics, the embodiments of the present invention further check the number of pixels contained in the drawn reticulate pattern and the proportion of that number to the total number of pixels in the image to be processed, and match the drawn pattern against a pattern library, thereby checking the reticulate pattern across multiple dimensions. When all three checks are satisfied, the part that differs between the images before and after removal is indeed reticulate pattern content, so the image to be processed can be determined to be a reticulate pattern image, which ensures the accuracy and reliability of reticulate pattern image recognition.
Fig. 1 shows a flowchart of an implementation of a reticulate pattern image recognition method according to an embodiment of the present invention, which is described in detail below:
s101, inputting an image to be processed into a pre-trained reticulate pattern removing model to obtain a to-be-processed reticulate pattern free image, wherein the reticulate pattern removing model is a model which is obtained by training based on reticulate pattern image samples and reticulate pattern free image samples in advance and is used for removing reticulate patterns in the image.
In the embodiment of the invention, the reticulate pattern removing model is used for reticulate patterns in the image, and is obtained by training and constructing in advance by a technician, and after the processing of the reticulate pattern removing model, whether reticulate patterns are contained in the original image to be processed or not, the corresponding reticulate pattern-free image can be obtained. The training and constructing method of the reticulation removal model is not limited herein, and may be designed by a technician, or may be performed by training and constructing with reference to the second to sixth embodiments of the present invention.
S102, performing gray value difference calculation on the image to be processed and the reticulate-pattern-free image to be processed, and performing reticulate pattern reconstruction based on the calculated gray value differences and a preset difference threshold to obtain the corresponding reticulate pattern.
To identify whether the image to be processed contains a reticulate pattern, the embodiment of the present invention first assumes that it does, and directly calculates the gray value difference between the images before and after reticulate pattern removal: each pair of corresponding pixel points in the two images is converted to grayscale and the difference between their gray values is computed, which determines the part that differs between the images before and after removal. Meanwhile, considering that the gray value of a pixel may change slightly after processing by the removal model even if it is not a reticulate pattern pixel, the embodiment of the present invention presets a difference threshold for selecting reticulate pattern pixels, and only pixels whose gray value difference exceeds the difference threshold are identified as reticulate pattern pixels. The specific size of the difference threshold can be set by the technician according to actual application requirements.
After the reticulate pattern pixels are selected, the pattern formed by all of them is extracted as the corresponding reticulate pattern, thereby reconstructing the reticulate pattern contained in the original image to be processed.
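The selection-and-extraction step above can be sketched in a few lines of NumPy; the function name, the threshold comparison and the example values are illustrative only, since the patent leaves the concrete difference threshold to the technician.

```python
import numpy as np

def reconstruct_mesh(image, demeshed, diff_threshold=20):
    """Rebuild the reticulate pattern of S102 as a binary mask.

    Pixels whose gray-value difference between the original image and
    the de-meshed image exceeds the preset threshold are taken to be
    reticulate pattern pixels. Inputs are uint8 grayscale arrays of
    equal shape.
    """
    diff = np.abs(image.astype(np.int16) - demeshed.astype(np.int16))
    mesh_mask = diff > diff_threshold      # True where mesh pixels sit
    return mesh_mask

# Toy example: a flat image whose brighter "mesh" stripe was removed
original = np.full((4, 4), 100, dtype=np.uint8)
original[1, :] = 160                       # simulated reticulate line
demeshed = np.full((4, 4), 100, dtype=np.uint8)
mask = reconstruct_mesh(original, demeshed)
print(mask.sum())                          # 4 mesh pixels recovered
```

The mask, rather than a rendered image, is a convenient form for the pixel counting and matching of S103.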
S103, counting the number of pixel points contained in the reticulate pattern, and calculating the ratio of that number to the total number of pixel points in the image to be processed; and performing pattern matching on the reticulate pattern based on a preset reticulate pattern library.
S104, if the number of reticulate pattern pixels is larger than a preset number threshold, the ratio is larger than a preset ratio threshold, and the pattern matching succeeds, determining that the image to be processed is a reticulate pattern image.
In practice, credential photo reticulate patterns have the following characteristics:
1. The size of a credential photo is relatively fixed, so the number of pixels occupied by the reticulate pattern is relatively stable.
2. In credential photos, the reticulate pattern typically covers most or all of the image area, so the proportion of pixel points occupied by the reticulate pattern is relatively stable and high.
3. The variety of reticulate patterns applied to credential photos is limited and known.
In the embodiment of the present invention, the number of reticulate pattern pixels contained in credential photos that actually carry reticulate patterns, and the proportion of those pixels in the whole photo, are counted in advance, and the corresponding number threshold and ratio threshold are set according to the statistics; meanwhile, a reticulate pattern library is constructed in advance from the known types of credential photo reticulate patterns. In actual processing, the number of pixels in the drawn reticulate pattern is counted, its proportion of the total number of pixels in the image to be processed is calculated, the results are compared with the number threshold and ratio threshold, and the drawn pattern is matched against the pattern library. If the number of reticulate pattern pixels is large enough, the proportion is high enough, and a similar pattern exists in the library, then the content differing before and after removal satisfies all three characteristics of credential photo reticulate patterns, that is, the assumption that the differing content is a reticulate pattern holds. The image to be processed is then directly determined to be a reticulate pattern image, completing the recognition.
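A minimal sketch of the three-way check of S103/S104, assuming the reconstructed reticulate pattern is available as a boolean mask. The overlap-based matching criterion and all threshold values below are stand-ins, since the text does not fix a concrete matching method.

```python
import numpy as np

def is_mesh_image(mesh_mask, total_pixels, mesh_library,
                  count_threshold=500, ratio_threshold=0.3,
                  match_threshold=0.7):
    """Apply the three checks of S103/S104 to a reconstructed mesh mask.

    mesh_library is a list of boolean masks for the known credential
    reticulate pattern types. A pattern "matches" here when it covers
    at least match_threshold of some library pattern's pixels — an
    illustrative criterion, not the one mandated by the patent.
    """
    count = int(mesh_mask.sum())
    ratio = count / total_pixels
    matched = any(
        (mesh_mask & lib).sum() / max(lib.sum(), 1) >= match_threshold
        for lib in mesh_library
    )
    return count > count_threshold and ratio > ratio_threshold and matched
```

Only when all three conditions hold is the image declared a reticulate pattern image, mirroring the conjunction in S104.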
If any of the three conditions is not satisfied, the detection result cannot determine whether the image to be processed contains a reticulate pattern; the cause may be that the image itself contains no reticulate pattern, that a data error occurred during processing, or that the quality of the image to be processed is poor. Therefore, as an optional embodiment of the present invention, when any condition fails, the flow returns to S101 to process the image again while counting the total number of processing attempts; the flow loops back to S101 each time a condition fails, until the number of attempts reaches a preset maximum total, at which point the image to be processed is directly determined to be a reticulate-pattern-free image. As another optional embodiment of the present invention, the image to be processed may be directly determined to be reticulate-pattern-free as soon as any condition fails. The specific manner is not limited herein and may be selected by the technician according to actual requirements.
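The first fallback strategy (retry up to a maximum total number of attempts, then declare the image reticulate-pattern-free) can be sketched as follows; `process_once` is a hypothetical callable wrapping one pass of S101-S104.

```python
def classify_with_retries(process_once, max_total=3):
    """Re-run the S101-S104 pipeline when the checks are inconclusive.

    process_once() returns True when all three conditions of S104 hold,
    or None when any of them fails. After max_total inconclusive runs
    the image is declared reticulate-pattern-free, as described above.
    """
    for _ in range(max_total):
        if process_once() is True:
            return "mesh"          # reticulate pattern image
    return "mesh-free"             # fell through all retries
```

The second fallback is simply `max_total=1`: a single failed check immediately yields the reticulate-pattern-free verdict.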
In the embodiment of the present invention, the reticulate pattern is removed from the image to be processed by the pre-trained removal model to obtain a reticulate-pattern-free image to be processed, and the gray value differences before and after removal are compared, so that the differing part is determined and drawn as the corresponding reticulate pattern (that is, the differing part is assumed to be the reticulate pattern). Based on the three characteristics of credential photos, the number of pixels contained in the drawn pattern and its proportion of the total number of pixels in the image to be processed are further checked, and the pattern is matched against the library, realizing a multi-dimensional check of the reticulate pattern. When all checks are satisfied, the part differing before and after removal is indeed a reticulate pattern, so the image to be processed is determined to be a reticulate pattern image, which ensures the accuracy and reliability of reticulate pattern recognition.
As a specific implementation of the training and construction of the reticulate pattern removal model in the first embodiment of the present invention, as shown in fig. 2, the second embodiment of the present invention includes:
s201, acquiring a plurality of pairs of reticulate image samples and non-reticulate image samples, wherein only reticulate differences exist between the reticulate image samples and the non-reticulate image samples in each pair of image samples.
In the embodiment of the present invention, the image samples for model training are all present in pairs, and the anilox image samples and the non-anilox image samples in each pair of image samples are identical except for the anilox, and in order to obtain a plurality of pairs of image samples only having the anilox difference, a method is used, which includes, but is not limited to, for example, obtaining a required number of non-anilox credentials, and then adding corresponding anilox to the non-anilox credentials, or a technician may also use other methods, which are not limited herein.
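One way to synthesize the reticulate-pattern half of a training pair from a mesh-free photo, as suggested above. The additive gray-level overlay is an assumption for illustration, since the text does not specify how the pattern is rendered.

```python
import numpy as np

def add_mesh(clean, mesh_mask, mesh_gray=60):
    """Synthesize the reticulate pattern half of a training pair (S201).

    Overlays a known mesh pattern (boolean mask) onto a mesh-free
    credential photo so that the pair differs only in the mesh.
    clean is a uint8 grayscale array; the overlay brightens masked
    pixels by mesh_gray and clips back to the uint8 range.
    """
    meshed = clean.astype(np.int16)
    meshed[mesh_mask] += mesh_gray
    return np.clip(meshed, 0, 255).astype(np.uint8)
```

Pairing each clean photo with `add_mesh(photo, pattern)` for every pattern in the known library yields samples that, by construction, differ only in the reticulate pattern.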
S202, constructing a reticulate pattern removal generator G(x) and a reticulate pattern addition generator F(x), denoting by Dg(x) the probability, given by the discrimination network Dg, that the image produced by G(x) is a reticulate-pattern-free image, and denoting by Df(x) the probability, given by the discrimination network Df, that the image produced by F(x) is a reticulate pattern image.
In the embodiment of the present invention, an initial overall model is first constructed, comprising an initial reticulate pattern removal generator G(x), a reticulate pattern addition generator F(x), a reticulate-pattern-free discrimination network Dg(x) and a reticulate pattern discrimination network Df(x), for subsequent iterative training. The initial model construction rules include, but are not limited to, the following: the model framework (the number of layers, the attributes of each layer, and so on) is set by the technician, and the model parameters are generated randomly. The recognition rate of the initial G(x) and F(x) is generally low, so the embodiment of the present invention improves it through subsequent iterative update training.
S203, processing the reticulate pattern image sample a and the reticulate-pattern-free image sample b with G(x) and F(x) respectively to obtain the corresponding processed images, calculating the first loss value corresponding to G(x) and F(x) based on a, b, the processed images, Dg(a) and Df(b), and calculating the second and third loss values corresponding to Dg(x) and Df(x) based on Dg(a) and Df(b).
To evaluate the effectiveness of the reticulate pattern removal generator G(x), the reticulate pattern addition generator F(x), the reticulate-pattern-free discrimination network Dg(x) and the reticulate pattern discrimination network Df(x), in the embodiment of the present invention G(x) and F(x) are used to process the reticulate pattern image sample a and the reticulate-pattern-free image sample b respectively to obtain the corresponding processed images: sample a is processed with G(x) to obtain image a', image a' is processed with F(x) to obtain image a'', sample b is processed with F(x) to obtain image b', and image b' is processed with G(x) to obtain image b''. The processed images in the embodiment of the present invention include one or more of a', a'', b' and b''. Since a and b differ only in the reticulate pattern, the processed images should, in theory, also differ only in the reticulate pattern; that is, theoretically a = a'' = b' and b = b'' = a'. Based on these theoretical equalities, the embodiment of the present invention further calculates the first loss value for the two functionally opposite generators, and the second and third loss values corresponding to Dg(x) and Df(x). The specific loss functions are not limited herein and may be selected or designed by the technician as needed, or processed with reference to the second and third embodiments of the present invention.
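A toy sketch of the first loss value under the theoretical equalities a = a'' = b' and b = b'' = a'. The L1 cycle/paired terms and the log-based adversarial term are illustrative choices, since the text explicitly leaves the loss functions open.

```python
import numpy as np

def generator_loss(a, b, a1, a2, b1, b2, dg_a1, df_b1, lam=10.0):
    """Illustrative first loss value for G and F (S203).

    a  : reticulate pattern sample, b : paired mesh-free sample
    a1 = G(a) (mesh removed),  a2 = F(G(a)) (cycled back to meshed)
    b1 = F(b) (mesh added),    b2 = G(F(b)) (cycled back to clean)
    dg_a1 = Dg(G(a)), df_b1 = Df(F(b)): discriminator scores in (0, 1).
    """
    cycle = np.abs(a - a2).mean() + np.abs(b - b2).mean()    # a≈a'', b≈b''
    paired = np.abs(a - b1).mean() + np.abs(b - a1).mean()   # a≈b', b≈a'
    adv = -np.log(dg_a1 + 1e-8) - np.log(df_b1 + 1e-8)       # fool both critics
    return adv + lam * (cycle + paired)
```

Perfect generators drive the cycle and paired terms to zero, leaving only a small adversarial residue; poorly trained generators inflate all three terms.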
S204, calculating the image difference degree between a, b and the processed images.
Although in theory a = a'' = b' and b = b'' = a', the processing effect of the untrained G(x) and F(x) is not necessarily good in practice, so there is necessarily some difference between the actual a, a'' and b', and between the actual b, b'' and a'; this difference directly reflects the training effect of G(x) and F(x). The embodiment of the present invention therefore uses the image difference degree between a, b and the processed images as one dimension for quantifying that training effect. The specific calculation method is not limited herein and includes, but is not limited to, calculating the Euclidean distance between a, a'' and b' and between b, b'' and a'; it may also be designed by the technician as needed. Note that, depending on the calculation method finally selected, the processed images actually used may differ: for example, the calculation may use only a, b, a' and b', or may use a, b, a', b', a'' and b''.
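The Euclidean-distance option mentioned above can be sketched as follows; averaging over the four theoretically-equal pairs is one possible aggregation, not mandated by the text.

```python
import numpy as np

def image_difference(a, b, a1, a2, b1, b2):
    """Image difference degree of S204 via Euclidean distance.

    Measures how far each sample is from the images that should,
    in theory, equal it: a ≈ a'' ≈ b' and b ≈ b'' ≈ a'. Returns the
    mean distance over the four pairs; zero means perfect generators.
    """
    pairs = [(a, a2), (a, b1), (b, b2), (b, a1)]
    return sum(np.linalg.norm(x.astype(float) - y.astype(float))
               for x, y in pairs) / len(pairs)
```

A falling difference degree over training iterations indicates that G(x) and F(x) are converging toward the theoretical equalities.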
S205, judging whether the first, second and third loss values are respectively larger than the corresponding preset loss value thresholds, and judging whether the image difference degree is larger than the preset difference degree threshold.
To iteratively train G(x), F(x), Dg(x) and Df(x) to the expected effect, the embodiment of the present invention presets one or more loss value thresholds and a difference degree threshold for judging the validity of the three loss values and the image difference degree. The loss value thresholds measure the expected training effect of G(x), F(x), Dg(x) and Df(x), and their number is set by the technician according to actual requirements: when the expected effects of the generators and the discrimination networks differ, an independent threshold may be set for each loss value, or a single threshold may be used uniformly. The specific values of the loss value thresholds and the difference degree threshold may likewise be set by the technician according to actual requirements; the larger they are, the lower the required training effect of the generators and discrimination networks.
S206, if the second loss value and/or the third loss value is greater than or equal to the corresponding preset loss value threshold, iteratively updating Dg(x) and Df(x).
S207, if the first loss value is greater than or equal to the corresponding preset loss value threshold and/or the image difference degree is greater than or equal to the preset difference degree threshold, iteratively updating G(x) and F(x).
When either the second or the third loss value fails to meet the loss value threshold requirement, the discrimination effect of Dg(x) and Df(x) has not yet reached the expected effect, so the flow returns to iteratively update Dg(x) and Df(x). Similarly, when the first loss value or the image difference degree is too large to meet its threshold requirement, G(x) and F(x) have not reached the expected effect, and the flow returns to iteratively update G(x) and F(x).
S208, if the first loss value, the second loss value and the third loss value are smaller than the corresponding preset loss value threshold values and the image difference degree is smaller than the preset difference degree threshold value, model training of the reticulate pattern removal generator G (x) is completed, and a reticulate pattern removal model is obtained.
Because G(x), F(x), Dg(x) and Df(x) are mutually adversarial and mutually dependent, in the embodiment of the present invention the completion of training cannot be determined for Dg(x) and Df(x) merely because the second and third loss values meet the requirement, nor for G(x) and F(x) merely because the first loss value meets the requirement. Only when the first loss value, the second loss value, the third loss value and the image difference degree all meet the requirement at the same time can G(x), F(x), Dg(x) and Df(x) be determined to have completed training, at which point the finally usable reticulation removal generator G(x), that is, the reticulation removal model of the first embodiment of the present invention, is obtained.
It should be noted that although the final objective of the present invention is to train G(x), which performs mesh removal, to obtain the mesh removal model of the first embodiment, the training effect of G(x) depends not only on the oppositely functioning F(x) but also on the accuracy of the discrimination networks Dg(x) and Df(x) that judge whether mesh is present. Only when G(x), F(x), Dg(x) and Df(x) have all been trained to the expected effect can the final G(x) be considered accurate and effective. Therefore, the embodiment of the present invention iteratively updates not only G(x) and F(x) but also Dg(x) and Df(x). Although the two update steps appear mutually independent (whether to update Dg(x) and Df(x) depends only on the second and third loss values, without reference to the first loss value and the difference degree, and whether to update G(x) and F(x) depends only on the first loss value and the difference degree), the loss values are in fact computed from one another's outputs, so the updates are actually interdependent.
In the embodiment of the invention, two mutually adversarial generators and two corresponding discrimination networks are constructed, paired image samples are processed by them, and loss values and an image difference degree are calculated from the processing results, thereby quantizing the training effect of the generators and the discrimination networks. The generators and the discrimination networks are then each iteratively updated according to whether the loss values and the image difference degree meet the expected effect, until the expected effect is met, thereby realizing effective training of the reticulation removal model.
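The decision logic of steps S206 to S208 can be sketched as a small routine. This is a minimal sketch, assuming a single shared loss value threshold (the patent also allows one threshold per loss value); the function name and threshold handling are illustrative, not the patent's implementation:

```python
def training_action(lg, ldg, ldf, diff, loss_thresh, diff_thresh):
    """Decide which networks to update in one training iteration.

    lg, ldg, ldf -- first, second and third loss values
    diff         -- image difference degree
    A single shared loss threshold is assumed here for simplicity.
    """
    update_disc = ldg >= loss_thresh or ldf >= loss_thresh   # S206: update Dg, Df
    update_gen = lg >= loss_thresh or diff >= diff_thresh    # S207: update G, F
    done = not (update_disc or update_gen)                   # S208: training complete
    return update_disc, update_gen, done
```

Note that both updates may fire in the same iteration, matching the text's point that the two update steps are checked independently even though the networks are interdependent.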
As a specific implementation manner of calculating the first loss value in the second embodiment of the present invention, the method includes:
an image a′ is obtained by processing the reticulate image sample a with G(x), an image a″ by processing a′ with F(x), an image b′ by processing the image sample b with F(x), and an image b″ by processing b′ with G(x); the first loss value is calculated based on formula (1):
Lg = -(log10(Dg(G(a))) - log10(Df(F(b)))) + Lcyc,
Lcyc = L1Loss(a″, a) × lambda_a + L1Loss(b″, b) × lambda_b + L1Loss(a, b′) × lambda_c + L1Loss(b, a′) × lambda_d    (1)
wherein Lg is the first loss value, L1Loss(x, y) represents the Euclidean distance between two images, and lambda_a, lambda_b, lambda_c and lambda_d represent preset weights.
In theory, a = a″ = b′ and b = b″ = a′. The embodiment of the invention therefore compares a″ with a, b″ with b, a with b′, and b with a′ to obtain difference quantization values in four dimensions. Meanwhile, in theory, the more times an image sample is processed by a generator, the larger the probability of deviation and the harder it becomes to match the original sample. Accordingly, the embodiment presets a corresponding weight for the difference degree of each dimension to balance the values obtained under different matching difficulties, ensuring the validity of the first loss value to the greatest extent. The specific values of lambda_a, lambda_b, lambda_c and lambda_d can be set by a technician after measuring the matching difficulty of each dimension; preferably, lambda_a and lambda_b are larger than lambda_c and lambda_d.
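Formula (1) can be sketched as follows. The text names L1Loss while describing it as an image distance, so the mean-absolute-difference definition below is an assumption; the default weights are placeholders chosen only to respect the stated preference that lambda_a and lambda_b be larger than lambda_c and lambda_d:

```python
import numpy as np
from math import log10

def l1_loss(x, y):
    # Mean absolute pixel difference between two equally sized images.
    # The exact normalization of the patent's L1Loss is an assumption.
    return float(np.mean(np.abs(np.asarray(x, float) - np.asarray(y, float))))

def first_loss(dg_ga, df_fb, a, a2, b, b2, a1, b1,
               lambda_a=10.0, lambda_b=10.0, lambda_c=1.0, lambda_d=1.0):
    # Formula (1): adversarial term plus weighted cycle term Lcyc.
    # a1, b1 stand for a', b'; a2, b2 stand for a'', b''.
    lcyc = (l1_loss(a2, a) * lambda_a + l1_loss(b2, b) * lambda_b
            + l1_loss(a, b1) * lambda_c + l1_loss(b, a1) * lambda_d)
    return -(log10(dg_ga) - log10(df_fb)) + lcyc
```

In the theoretical ideal case (a″ = a, b″ = b, b′ = a, a′ = b, and equal discriminator outputs), every term vanishes and Lg is zero.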
As a specific implementation manner of calculating the second loss value and the third loss value in the second embodiment of the present invention, the method includes:
calculating a second loss value Ldg and a third loss value Ldf based on the formula (2) and the formula (3):
Ldg = -log10(Dg(G(a)) - 0.5) + log10(1.5 - Dg(G(a)))    (2)
Ldf = -log10(Df(F(b)) - 0.5) + log10(1.5 - Df(F(b)))    (3)
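Formulas (2) and (3) share one shape, which can be sketched as a single function. Note that the logarithms are only defined when the discrimination output lies strictly between 0.5 and 1.5, a constraint implied by the formulas themselves; the function name is an illustrative assumption:

```python
from math import log10

def disc_loss(p):
    # Common shape of formulas (2) and (3): p is the discrimination
    # network's output for a generated image, required to satisfy
    # 0.5 < p < 1.5 for both logarithms to be defined.
    return -log10(p - 0.5) + log10(1.5 - p)
```

At p = 1.0 the two terms cancel and the loss is zero; below 1.0 the loss is positive, above 1.0 it is negative.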
as a specific implementation manner of calculating the image difference degree in the second embodiment of the present invention, as shown in fig. 3, the third embodiment of the present invention includes:
S301, gray value difference operations are respectively performed on a and b, on a and a′, and on b and b′, and reticulate pattern extraction is performed based on the obtained gray value differences and a preset difference threshold to obtain a corresponding first reticulate image, second reticulate image and third reticulate image, wherein a′ is the image obtained by processing the reticulate image sample a with G(x), and b′ is the image obtained by processing the image sample b with F(x).
S302, calculating the image distance between the first reticulate pattern image and the second reticulate pattern image and the image distance between the first reticulate pattern image and the third reticulate pattern image, and calculating the difference value of the two obtained image distances to obtain the image difference degree.
In theory, a = a″ = b′ and b = b″ = a′. Here, the first reticulate image is the standard reticulate image, the second reticulate image is the difference image before and after G(x) processing, and the third reticulate image is the difference image before and after F(x) processing. Performing gray value difference operations on a and a′ and on b and b′ therefore extracts the actual processing effects of G(x) and F(x); calculating the image distances of the second and third reticulate images from the first reticulate image quantitatively evaluates those effects; and finally taking the difference of the two image distances yields the image difference degree required by the second embodiment of the invention. The image distance is the reciprocal of the image similarity; the specific calculation method is not limited here and can be set by the technician or taken from other embodiments of the invention.
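The gray value difference operation and threshold-based extraction of step S301 can be sketched as follows; the default threshold value is an illustrative placeholder, not a value the patent specifies:

```python
import numpy as np

def extract_mesh(gray_x, gray_y, diff_thresh=30):
    # Per-pixel gray value difference between two grayscale images;
    # pixels whose difference exceeds the preset difference threshold
    # are kept as mesh pixels (1), all others are discarded (0).
    diff = np.abs(np.asarray(gray_x, int) - np.asarray(gray_y, int))
    return (diff > diff_thresh).astype(np.uint8)

# First mesh image:  extract_mesh(a, b) on the paired samples;
# second mesh image: extract_mesh(a, a_prime), before/after G(x);
# third mesh image:  extract_mesh(b, b_prime), before/after F(x).
```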
As a specific implementation manner of calculating the image distance in the third embodiment of the present invention, the method includes:
calculating an image distance L based on formula (4):
wherein n is the total number of pixels of the first or second reticulate image, and x_i and y_i are respectively the pixel values of the i-th pixel point of the first and second reticulate images.
In the embodiment of the invention, the pixel points of the two reticulate images are compared one by one to calculate pixel value differences, and the differences are then inverted to obtain the required image distance. Similarly, the image distance between the first reticulate image and the third reticulate image can be calculated according to formula (4), which is not repeated here.
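Formula (4) itself is not reproduced in the text. Based on the surrounding description (pixel-by-pixel differences over n pixels), a stand-in of the following shape can be sketched; the per-pixel normalization is an assumption, not the patent's actual formula:

```python
import numpy as np

def image_distance(x, y):
    # Stand-in for the unreproduced formula (4): accumulate per-pixel
    # differences of the two mesh images and normalize by the total
    # pixel count n, so identical images have distance 0.
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    n = x.size
    return float(np.sum(np.abs(x - y)) / n)
```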
Corresponding to the method of the above embodiment, fig. 4 shows a block diagram of the structure of the anilox image recognition apparatus provided in the embodiment of the present invention, and for convenience of explanation, only the portion relevant to the embodiment of the present invention is shown. The anilox image recognition apparatus illustrated in fig. 4 may be an execution subject of the anilox image recognition method provided in the first embodiment described above.
Referring to fig. 4, the anilox image recognition apparatus includes:
the screen removing module 41 is configured to input the image to be processed into a pre-trained screen removing model, so as to obtain a screen-free image to be processed, where the screen removing model is a model that is obtained by training based on a screen image sample and a screen-free image sample in advance, and is used for removing screens in the image.
And the reticulate pattern reconstruction module 42 is used for carrying out gray value difference calculation on the image to be processed and the image without reticulate pattern to be processed, and carrying out reticulate pattern reconstruction based on the gray value difference obtained by calculation and a preset difference threshold value to obtain a corresponding reticulate pattern.
The feature processing module 43 is configured to count the number of pixel points contained in the reticulate pattern, calculate the proportion of this number to the total number of pixel points of the image to be processed, and perform pattern matching on the reticulate pattern based on a preset reticulate pattern library.
And the reticulate pattern judging module 44 is configured to judge that the image to be processed is a reticulate pattern image if the reticulate pattern pixel number is greater than a preset number threshold, the ratio value is greater than a preset ratio threshold, and the reticulate pattern matching is successful.
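The three-way check performed by module 44 can be sketched as below; the two default thresholds are illustrative placeholders, as the patent leaves their values to the technician:

```python
def is_mesh_image(mesh_pixel_count, total_pixels, pattern_matched,
                  count_thresh=500, ratio_thresh=0.05):
    # All three criteria must hold simultaneously: enough mesh pixels,
    # a large enough proportion of the whole image, and a successful
    # match against the preset mesh pattern library.
    ratio = mesh_pixel_count / total_pixels
    return (mesh_pixel_count > count_thresh
            and ratio > ratio_thresh
            and pattern_matched)
```

Only when all three conditions are met is the image to be processed judged a reticulate image.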
Further, the anilox image recognition apparatus further includes:
the sample acquisition module is used for acquiring a plurality of pairs of reticulate image samples and non-reticulate image samples, wherein the reticulate image samples and the non-reticulate image samples in each pair of image samples only have reticulate differences.
The generator construction module is used for constructing a reticulate pattern removal generator G(x) and a reticulate pattern addition generator F(x), wherein a discrimination network Dg(x) gives the probability that an image processed by G(x) belongs to a reticulate image, and a discrimination network Df(x) gives the probability that an image processed by F(x) belongs to a reticulate image.
The loss value calculation module is used for processing the reticulate image sample a and the image sample b with G(x) and F(x) respectively to obtain corresponding processed images, calculating the first loss values corresponding to G(x) and F(x) based on a, b, the processed images, Dg(G(a)) and Df(F(b)), and calculating the second and third loss values corresponding to Dg(x) and Df(x) based on Dg(G(a)) and Df(F(b)).
And the difference calculation module is used for calculating the image difference degree between the a and b and the processed images.
And the parameter comparison module is used for judging whether the first loss value, the second loss value and the third loss value are larger than corresponding preset loss value thresholds respectively and judging whether the image difference degree is larger than a preset difference degree threshold.
And the iteration updating module is used for iteratively updating Dg (x) and Df (x) if the second loss value and/or the third loss value are/is greater than or equal to the corresponding preset loss value threshold value. And if the first loss value is greater than or equal to a corresponding preset loss value threshold value and/or the image difference degree is greater than or equal to the preset difference degree threshold value, iteratively updating G (x) and F (x).
The model output module is configured to complete model training of the reticulation removal generator G (x) if the first loss value, the second loss value, and the third loss value are all smaller than a corresponding preset loss value threshold, and the image difference is smaller than the preset difference threshold, so as to obtain the reticulation removal model.
Further, the loss value calculation module includes:
an image a 'obtained by processing the anilox image sample a with G (x), an image a″ obtained by processing a' with F (x), an image b 'obtained by processing the anilox image sample b with F (x), an image b″ obtained by processing b' with F (x), and a first loss value calculated based on the formula (1):
Lg=-(log 10 (Dg(G(a)))-log 10 (Df(F(b)))+Lcyc,
Lcyc=L1 Loss(a”,a)×lambda_a+L1 Loss(b”,b)×lambda_b+
L1 Loss(a,b')×lambda_c+L1 Loss(b,a')×lambda_d (1)
wherein Lg is a first Loss value, L1Loss (x, y) represents euclidean distance of two images, and lambda_a, lambda_b, lambda_c and lambda_d represent preset weights.
Further, the loss value calculation module further includes:
calculating a second loss value Ldg and a third loss value Ldf based on the formula (2) and the formula (3):
Ldg=-log 10 (Dg(G(a))-0.5)+log 10 (1.5-Dg(G(a))) (2)
Ldf=-log 10 (Df(F(b))-0.5)+log 10 (1.5-Df(F(b))) (3)
further, the difference calculation module includes:
The reticulate image extraction module is used for respectively performing gray value difference operations on a and b, on a and a′, and on b and b′, and performing reticulate pattern extraction based on the obtained gray value differences and the preset difference threshold to obtain a corresponding first reticulate image, second reticulate image and third reticulate image, wherein a′ is the image obtained by processing the reticulate image sample a with G(x), and b′ is the image obtained by processing the image sample b with F(x).
The image difference calculation module is used for calculating the image distance between the first anilox image and the second anilox image and the image distance between the first anilox image and the third anilox image, and calculating the difference value between the two obtained image distances to obtain the image difference degree.
Further, the image difference calculation module includes:
calculating an image distance L based on formula (4):
wherein n is the total number of pixels of the first or second reticulate image, and x_i and y_i are respectively the pixel values of the i-th pixel point of the first and second reticulate images.
The process of implementing the respective functions of each module in the anilox image recognition apparatus provided in the embodiment of the present invention may refer to the description of the first embodiment shown in fig. 1, which is not repeated herein.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not limit the implementation of the embodiments of the present invention.
It will also be understood that, although the terms "first," "second," etc. may be used herein in some embodiments of the invention to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For example, a first table may be named a second table, and similarly, a second table may be named a first table without departing from the scope of the various described embodiments. The first table and the second table are both tables, but they are not the same table.
Fig. 5 is a schematic diagram of a terminal device according to an embodiment of the present invention. As shown in fig. 5, the terminal device 5 of this embodiment includes: a processor 50, a memory 51, said memory 51 having stored therein a computer program 52 executable on said processor 50. The processor 50, when executing the computer program 52, implements the steps of the respective anilox image recognition method embodiments described above, such as steps 101 to 104 shown in fig. 1. Alternatively, the processor 50, when executing the computer program 52, performs the functions of the modules/units of the apparatus embodiments described above, such as the functions of the modules 41 to 44 shown in fig. 4.
The terminal device 5 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. The terminal device may include, but is not limited to, a processor 50 and a memory 51. It will be appreciated by those skilled in the art that fig. 5 is merely an example of the terminal device 5 and does not constitute a limitation of it; the device may include more or fewer components than illustrated, combine certain components, or use different components. For example, the terminal device may further include input and output devices, a network access device, a bus, etc.
The processor 50 may be a central processing unit (Central Processing Unit, CPU), another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 51 may be an internal storage unit of the terminal device 5, such as a hard disk or a memory of the terminal device 5. The memory 51 may also be an external storage device of the terminal device 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the terminal device 5. Further, the memory 51 may include both an internal storage unit and an external storage device of the terminal device 5. The memory 51 is used for storing the computer program as well as other programs and data required by the terminal device, and may also be used to temporarily store data that has been transmitted or is to be transmitted.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present invention may implement all or part of the flow of the method of the above embodiment, or may be implemented by a computer program to instruct related hardware, where the computer program may be stored in a computer readable storage medium, and when the computer program is executed by a processor, the computer program may implement the steps of each of the method embodiments described above. Wherein the computer program comprises computer program code which may be in source code form, object code form, executable file or some intermediate form etc. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer Memory, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.

Claims (10)

1. A method of identifying a moire image, comprising:
inputting an image to be processed into a pre-trained reticulate pattern removing model to obtain an image without reticulate pattern to be processed, wherein the reticulate pattern removing model is a model which is obtained by training based on reticulate pattern image samples and non-reticulate pattern image samples in advance and is used for removing reticulate patterns in the image;
performing gray value difference calculation on the image to be processed and the non-reticulate pattern image to be processed, and performing reticulate pattern reconstruction based on the gray value difference obtained by calculation and a preset difference threshold value to obtain a corresponding reticulate pattern;
counting the number of pixel points contained in the reticulate pattern, and calculating the proportion value of the number of the obtained pixel points to the total number of the pixel points of the image to be processed; performing pattern matching on the reticulate pattern based on a preset reticulate pattern library;
And if the number of the reticulate pattern pixels is larger than a preset number threshold, the ratio value is larger than a preset ratio threshold and the reticulate pattern matching is successful, judging that the image to be processed is a reticulate pattern image.
2. The moire image recognition method as defined in claim 1, wherein training said moire removal model comprises:
acquiring a plurality of pairs of reticulate image samples and non-reticulate image samples, wherein only reticulate differences exist between the reticulate image samples and the non-reticulate image samples in each pair of image samples;
constructing a reticulate pattern removal generator G(x) and a reticulate pattern addition generator F(x), setting the probability, given by a discrimination network Dg(x), that an image processed by G(x) belongs to a reticulate image as Dg(x), and the probability, given by a discrimination network Df(x), that an image processed by F(x) belongs to a reticulate image as Df(x);
processing a reticulate image sample a and a reticulate image sample b by using G(x) and F(x) respectively to obtain corresponding processed images, calculating first loss values corresponding to G(x) and F(x) based on a, b, the processed images, Dg(G(a)) and Df(F(b)), and calculating second loss values and third loss values corresponding to Dg(x) and Df(x) based on Dg(G(a)) and Df(F(b));
Calculating the image difference degree between the a and b and the processed images;
Judging whether the first loss value, the second loss value and the third loss value are larger than corresponding preset loss value thresholds or not, and judging whether the image difference degree is larger than a preset difference degree threshold or not;
if the second loss value and/or the third loss value is greater than its corresponding preset loss value threshold, iteratively updating Dg(x) and Df(x); if the first loss value is greater than its corresponding preset loss value threshold and/or the image difference degree is greater than the preset difference degree threshold, iteratively updating G(x) and F(x);
and if the first loss value, the second loss value and the third loss value are smaller than the corresponding preset loss value threshold, and the image difference is smaller than the preset difference threshold, model training of the reticulation removal generator G (x) is completed, and the reticulation removal model is obtained.
3. The anilox image recognition method according to claim 2, wherein the processing the anilox image sample a and the anilox image sample b with G (x) and F (x), respectively, to obtain corresponding processed images, and calculating the first loss values corresponding to G (x) and F (x) based on a, b, the processed images, dg (G (a)) and Df (F (b)), includes:
an image a′ is obtained by processing the reticulate image sample a with G(x), an image a″ by processing a′ with F(x), an image b′ by processing the reticulate image sample b with F(x), and an image b″ by processing b′ with G(x), and the first loss value is calculated based on the following equation:
Lg = -(log10(Dg(G(a))) - log10(Df(F(b)))) + Lcyc,
Lcyc = L1Loss(a″, a) × lambda_a + L1Loss(b″, b) × lambda_b + L1Loss(a, b′) × lambda_c + L1Loss(b, a′) × lambda_d
wherein Lg is the first loss value, L1Loss(x, y) represents the Euclidean distance between two images, and lambda_a, lambda_b, lambda_c and lambda_d represent preset weights.
4. The moire image recognition method as defined in claim 2, wherein said calculating second and third loss values corresponding to Dg(x) and Df(x) based on Dg(G(a)) and Df(F(b)) comprises:
the second loss value Ldg and the third loss value Ldf are calculated based on the following equation:
Ldg = -log10(Dg(G(a)) - 0.5) + log10(1.5 - Dg(G(a)))
Ldf = -log10(Df(F(b)) - 0.5) + log10(1.5 - Df(F(b))).
5. the anilox image recognition method of claim 2, wherein the calculating of the image difference between a, b and the processed image comprises:
respectively performing gray value difference operations on a and b, on a and a′, and on b and b′, and performing reticulate pattern extraction based on the obtained gray value differences and the preset difference threshold to obtain a corresponding first reticulate image, second reticulate image and third reticulate image, wherein a′ is an image obtained by processing the reticulate image sample a with G(x), and b′ is an image obtained by processing the reticulate image sample b with F(x);
Calculating the image distance between the first reticulate pattern image and the second reticulate pattern image and the image distance between the first reticulate pattern image and the third reticulate pattern image, and calculating the difference value of the two obtained image distances to obtain the image difference degree.
6. The anilox image recognition method of claim 5, wherein calculating an image distance of the first anilox image and the second anilox image comprises:
the image distance L is calculated based on:
wherein n is the total number of pixels of the first or second reticulate image, and x_i and y_i are respectively the pixel values of the i-th pixel point of the first and second reticulate images.
7. A screen image recognition apparatus, comprising:
the reticulation removing module is used for inputting an image to be processed into a pre-trained reticulation removing model to obtain a to-be-processed reticulation-free image, wherein the reticulation removing model is a model which is obtained by training based on reticulation image samples and reticulation-free image samples in advance and is used for removing reticulation in the image;
the reticulate pattern reconstruction module is used for carrying out gray value difference calculation on the image to be processed and the image without reticulate pattern to be processed, and carrying out reticulate pattern reconstruction based on the gray value difference obtained by calculation and a preset difference threshold value to obtain a corresponding reticulate pattern;
the feature processing module is used for counting the number of pixel points contained in the reticulate pattern, calculating the proportion of the obtained number of pixel points to the total number of pixel points of the image to be processed, and performing pattern matching on the reticulate pattern based on a preset reticulate pattern library;
and the reticulate pattern judging module is used for judging that the image to be processed is a reticulate pattern image if the reticulate pattern pixel number is larger than a preset number threshold value, the proportion value is larger than a preset proportion threshold value and the reticulate pattern is successfully matched.
8. The anilox image recognition apparatus of claim 7, further comprising:
the sample acquisition module is used for acquiring a plurality of pairs of reticulate image samples and non-reticulate image samples, wherein only reticulate pattern differences exist between the reticulate image sample and the non-reticulate image sample in each pair of image samples;
the generator construction module is used for constructing a reticulate pattern removal generator G(x) and a reticulate pattern addition generator F(x), setting the probability, given by a discrimination network Dg(x), that an image processed by G(x) belongs to a reticulate image as Dg(x), and the probability, given by a discrimination network Df(x), that an image processed by F(x) belongs to a reticulate image as Df(x);
the loss value calculation module is used for processing the reticulate image sample a and the reticulate image sample b with G(x) and F(x) respectively to obtain corresponding processed images, calculating first loss values corresponding to G(x) and F(x) based on a, b, the processed images, Dg(G(a)) and Df(F(b)), and calculating second loss values and third loss values corresponding to Dg(x) and Df(x) based on Dg(G(a)) and Df(F(b));
The difference calculation module is used for calculating the image difference degree between the a and b and the processed images;
the parameter comparison module is used for judging whether the first loss value, the second loss value and the third loss value are larger than corresponding preset loss value thresholds or not and judging whether the image difference degree is larger than a preset difference degree threshold or not;
the iteration updating module is used for iteratively updating Dg(x) and Df(x) if the second loss value and/or the third loss value is greater than its corresponding preset loss value threshold, and iteratively updating G(x) and F(x) if the first loss value is greater than its corresponding preset loss value threshold and/or the image difference degree is greater than the preset difference degree threshold;
the model output module is configured to complete model training of the reticulation removal generator G (x) if the first loss value, the second loss value, and the third loss value are all smaller than a corresponding preset loss value threshold, and the image difference is smaller than the preset difference threshold, so as to obtain the reticulation removal model.
9. A terminal device, characterized in that it comprises a memory, a processor, on which a computer program is stored which is executable on the processor, the processor executing the computer program to carry out the steps of the method according to any one of claims 1 to 6.
10. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any one of claims 1 to 6.
CN201910736543.2A 2019-08-09 2019-08-09 Reticulate pattern image recognition method and device and terminal equipment Active CN110647805B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910736543.2A CN110647805B (en) 2019-08-09 2019-08-09 Reticulate pattern image recognition method and device and terminal equipment
PCT/CN2019/118652 WO2021027163A1 (en) 2019-08-09 2019-11-15 Reticulate pattern-containing image recognition method and apparatus, and terminal device and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910736543.2A CN110647805B (en) 2019-08-09 2019-08-09 Reticulate pattern image recognition method and device and terminal equipment

Publications (2)

Publication Number Publication Date
CN110647805A CN110647805A (en) 2020-01-03
CN110647805B true CN110647805B (en) 2023-10-31

Family

ID=68990095

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910736543.2A Active CN110647805B (en) 2019-08-09 2019-08-09 Reticulate pattern image recognition method and device and terminal equipment

Country Status (2)

Country Link
CN (1) CN110647805B (en)
WO (1) WO2021027163A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112819016A (en) * 2021-02-19 2021-05-18 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium

Citations (6)

Publication number Priority date Publication date Assignee Title
CN105760859A (en) * 2016-03-22 2016-07-13 中国科学院自动化研究所 Method and device for identifying reticulate pattern face image based on multi-task convolutional neural network
CN108734673A (en) * 2018-04-20 2018-11-02 平安科技(深圳)有限公司 Descreening systematic training method, descreening method, apparatus, equipment and medium
CN109426775A (en) * 2017-08-25 2019-03-05 株式会社日立制作所 Method, device and equipment for detecting reticulate pattern in a facial image
WO2019085403A1 (en) * 2017-10-31 2019-05-09 平安科技(深圳)有限公司 Intelligent face recognition comparison method, electronic device, and computer readable storage medium
CN109871755A (en) * 2019-01-09 2019-06-11 中国平安人寿保险股份有限公司 Identity authentication method based on face recognition
CN110032931A (en) * 2019-03-01 2019-07-19 阿里巴巴集团控股有限公司 Generative adversarial network training and reticulate pattern removal method, device and electronic equipment

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
ES2105936B1 (en) * 1994-03-21 1998-06-01 I D Tec S L IMPROVEMENTS INTRODUCED IN INVENTION PATENT N. P-9400595/8 BY: BIOMETRIC PROCEDURE FOR SECURITY AND IDENTIFICATION AND CREDIT CARDS, VISAS, PASSPORTS AND FACIAL RECOGNITION.
CN106548159A (en) * 2016-11-08 2017-03-29 中国科学院自动化研究所 Reticulate pattern facial image recognition method and device based on fully convolutional neural networks
CN107766844A (en) * 2017-11-13 2018-03-06 杭州有盾网络科技有限公司 Face recognition method, apparatus and equipment for reticulate pattern photos

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
CN105760859A (en) * 2016-03-22 2016-07-13 中国科学院自动化研究所 Method and device for identifying reticulate pattern face image based on multi-task convolutional neural network
CN109426775A (en) * 2017-08-25 2019-03-05 株式会社日立制作所 Method, device and equipment for detecting reticulate pattern in a facial image
WO2019085403A1 (en) * 2017-10-31 2019-05-09 平安科技(深圳)有限公司 Intelligent face recognition comparison method, electronic device, and computer readable storage medium
CN108734673A (en) * 2018-04-20 2018-11-02 平安科技(深圳)有限公司 Descreening systematic training method, descreening method, apparatus, equipment and medium
CN109871755A (en) * 2019-01-09 2019-06-11 中国平安人寿保险股份有限公司 Identity authentication method based on face recognition
CN110032931A (en) * 2019-03-01 2019-07-19 阿里巴巴集团控股有限公司 Generative adversarial network training and reticulate pattern removal method, device and electronic equipment

Non-Patent Citations (1)

Title
Method for discriminating nutrient deficiency in walnut based on the RGB color model; Wu Yan et al.; Journal of Beihua University (Natural Science Edition); Vol. 14, No. 04; pp. 493-496 *

Also Published As

Publication number Publication date
CN110647805A (en) 2020-01-03
WO2021027163A1 (en) 2021-02-18

Similar Documents

Publication Publication Date Title
CN112132277A (en) Federal learning model training method and device, terminal equipment and storage medium
JP6731529B1 (en) Single-pixel attack sample generation method, device, equipment and storage medium
WO2021258699A1 (en) Image identification method and apparatus, and electronic device and computer-readable medium
CN112328715B (en) Visual positioning method, training method of related model, related device and equipment
WO2019200702A1 (en) Descreening system training method and apparatus, descreening method and apparatus, device, and medium
CN112085056B (en) Target detection model generation method, device, equipment and storage medium
CN108805174A Clustering method and device
CN110032931B (en) Method and device for generating countermeasure network training and removing reticulation and electronic equipment
CN111967573A (en) Data processing method, device, equipment and computer readable storage medium
JP2017010554A (en) Curved line detection method and curved line detection device
CN110647805B (en) Reticulate pattern image recognition method and device and terminal equipment
CN110876072B (en) Batch registered user identification method, storage medium, electronic device and system
CN109190757B (en) Task processing method, device, equipment and computer readable storage medium
CN110765843A (en) Face verification method and device, computer equipment and storage medium
CN111382791A (en) Deep learning task processing method, image recognition task processing method and device
CN115934484B (en) Diffusion model data enhancement-based anomaly detection method, storage medium and apparatus
CN114241044A (en) Loop detection method, device, electronic equipment and computer readable medium
CN109362027B (en) Positioning method, device, equipment and storage medium
CN113128278A (en) Image identification method and device
CN110866043A (en) Data preprocessing method and device, storage medium and terminal
TWI818496B (en) Fingerprint recognition method, fingerprint module, and electronic device
CN113269796B (en) Image segmentation method and device and terminal equipment
CN109344369B (en) Certificate making method based on original value verification and terminal equipment
CN116912518B (en) Image multi-scale feature processing method and device
CN115409129A (en) Data processing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant