CN110647805A - Reticulated image identification method and device and terminal equipment


Info

Publication number
CN110647805A
Authority
CN
China
Prior art keywords
image
reticulate pattern
value
processed
loss value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910736543.2A
Other languages
Chinese (zh)
Other versions
CN110647805B (en)
Inventor
徐玲玲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201910736543.2A priority Critical patent/CN110647805B/en
Priority to PCT/CN2019/118652 priority patent/WO2021027163A1/en
Publication of CN110647805A publication Critical patent/CN110647805A/en
Application granted granted Critical
Publication of CN110647805B publication Critical patent/CN110647805B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]

Abstract

The invention provides a reticulate pattern image recognition method, a reticulate pattern image recognition device and terminal equipment, applicable to the technical field of data processing. The method comprises the following steps: inputting an image to be processed into a pre-trained reticulate pattern removal model to obtain a non-reticulate pattern image to be processed; performing gray value difference calculation on the image to be processed and the non-reticulate pattern image to be processed, and performing reticulate pattern graphic reconstruction based on the calculated gray value differences and a preset difference threshold value to obtain a corresponding reticulate pattern graphic; counting the number of pixel points contained in the reticulate pattern graphic, and calculating the proportion value of that number to the total number of pixel points of the image to be processed; performing pattern matching on the reticulate pattern graphic based on a preset reticulate pattern library; and if the number of reticulate pattern pixel points is greater than a preset number threshold, the proportion value is greater than a preset proportion threshold, and the pattern matching succeeds, judging that the image to be processed is a reticulate pattern image. The embodiment of the invention thereby ensures the accuracy and reliability of reticulate pattern image identification.

Description

Reticulated image identification method and device and terminal equipment
Technical Field
The invention belongs to the technical field of data processing, and particularly relates to a reticulate pattern image identification method and terminal equipment.
Background
Some certificate photos in public security systems carry reticulate patterns. When such certificate photos are subjected to face recognition and similar processing, the reticulate patterns greatly reduce recognition accuracy, so a certificate photo carrying a reticulate pattern can be used only after the pattern has been removed; before removal, however, it must first be determined whether the photo carries a reticulate pattern at all. Although some methods for identifying reticulate pattern images exist in the prior art, their identification accuracy is not ideal, so a method for accurately identifying whether an image carries a reticulate pattern is needed.
Disclosure of Invention
In view of this, embodiments of the present invention provide a reticulate pattern image identification method and terminal equipment, so as to solve the problem in the prior art that the accuracy of reticulate pattern image identification is low.
A first aspect of an embodiment of the present invention provides a reticulate pattern image identification method, including:
inputting an image to be processed into a pre-trained reticulate pattern removal model to obtain a non-reticulate pattern image to be processed, wherein the reticulate pattern removal model is a model obtained by training based on reticulate pattern image samples and non-reticulate pattern image samples in advance and is used for removing reticulate patterns in the image;
performing gray value difference calculation on the image to be processed and the non-reticulate pattern image to be processed, and performing reticulate pattern graphic reconstruction based on the calculated gray value differences and a preset difference threshold value to obtain a corresponding reticulate pattern graphic;
counting the number of pixel points contained in the reticulate pattern, and calculating the proportion value of the number of the pixel points to the total number of the pixel points of the image to be processed; carrying out pattern matching on the reticulate pattern based on a preset reticulate pattern library;
and if the number of the reticulate pattern pixels is larger than a preset number threshold, the proportion value is larger than a preset proportion threshold, and the reticulate pattern is successfully matched, judging that the image to be processed is a reticulate pattern image.
A second aspect of an embodiment of the present invention provides a reticulate pattern image recognition device, including:
a reticulate pattern removal module, configured to input an image to be processed into a pre-trained reticulate pattern removal model to obtain a non-reticulate pattern image to be processed, wherein the reticulate pattern removal model is a model trained in advance on reticulate pattern image samples and non-reticulate pattern image samples and is used for removing reticulate patterns from an image;
a reticulate pattern reconstruction module, configured to perform gray value difference calculation on the image to be processed and the non-reticulate pattern image to be processed, and perform reticulate pattern graphic reconstruction based on the calculated gray value differences and a preset difference threshold value to obtain a corresponding reticulate pattern graphic;
the characteristic processing module is used for counting the number of pixel points contained in the reticulate pattern and calculating the proportion value of the number of the pixel points to the total number of the pixel points of the image to be processed; carrying out pattern matching on the reticulate pattern based on a preset reticulate pattern library;
and the reticulate pattern judging module is used for judging that the image to be processed is the reticulate pattern image if the number of reticulate pattern pixel points is greater than a preset number threshold, the proportion value is greater than a preset proportion threshold and the reticulate pattern is successfully matched.
A third aspect of the embodiments of the present invention provides terminal equipment, including a memory and a processor, wherein the memory stores a computer program executable on the processor, and the processor implements the steps of the reticulate pattern image recognition method described above when executing the computer program.
A fourth aspect of an embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the reticulate pattern image recognition method described above.
Compared with the prior art, the embodiment of the invention has the following beneficial effects: reticulate pattern removal is performed on an image to be processed by a pre-trained reticulate pattern removal model to obtain a non-reticulate pattern image to be processed, and the gray value differences of the image before and after removal are compared to determine the differing portion and draw it as a corresponding reticulate pattern graphic (that is, the differing portion is assumed to be the reticulate pattern). Certificate photo reticulate patterns in actual situations have the following characteristics: 1. the size of a certificate photo is relatively fixed, so the number of pixel points occupied by the reticulate pattern is relatively stable; 2. a common reticulate pattern covers most or all of the image area of a certificate photo, so the proportion of pixel points occupied by the reticulate pattern is relatively stable and high; 3. the types of reticulate pattern graphics applied to certificate photos are limited and known. Based on these practical characteristics, the embodiment of the invention further checks the number of pixel points contained in the drawn reticulate pattern graphic and its proportion of the total pixel points of the image to be processed, and performs pattern matching on the graphic, thereby realizing multi-dimensional, all-around verification of the reticulate pattern. Only when all checks are satisfied is the differing portion before and after removal confirmed to be reticulate pattern content, so the image to be processed can be judged to be a reticulate pattern image, which ensures the accuracy and reliability of reticulate pattern image identification.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic flow chart of an implementation of a reticulate pattern image recognition method according to the first embodiment of the present invention;
Fig. 2 is a schematic flow chart of an implementation of a reticulate pattern image recognition method according to the second embodiment of the present invention;
Fig. 3 is a schematic flow chart of an implementation of a reticulate pattern image recognition method according to the third embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a reticulate pattern image recognition device according to the fourth embodiment of the present invention;
fig. 5 is a schematic diagram of a terminal device according to a fifth embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Because a reticulate pattern greatly affects the accuracy of face recognition and similar processing performed on an image, a method for identifying whether an image carries a reticulate pattern is needed, providing a basis for reticulate pattern removal, face recognition of certificate photos, and the like.
In order to realize reticulate pattern recognition of an image, the embodiment of the invention removes the reticulate pattern from the image to be processed with a pre-trained reticulate pattern removal model to obtain a non-reticulate pattern image to be processed, and compares the gray value differences of the image before and after removal, thereby determining the differing portion and drawing it as a corresponding reticulate pattern graphic (that is, assuming the differing portion is the reticulate pattern). Certificate photo reticulate patterns in actual situations have the following characteristics: 1. the size of a certificate photo is relatively fixed, so the number of pixel points occupied by the reticulate pattern is relatively stable; 2. a common reticulate pattern covers most or all of the image area of a certificate photo, so the proportion of pixel points occupied by the reticulate pattern is relatively stable and high; 3. the types of reticulate pattern graphics applied to certificate photos are limited and known. Based on these practical characteristics, the embodiment of the invention further checks the number of pixel points contained in the drawn reticulate pattern graphic and its proportion of the total pixel points of the image to be processed, and performs pattern matching on the graphic, realizing multi-dimensional, all-around verification of the reticulate pattern; when all checks are satisfied, the differing portion before and after removal is reticulate pattern content, so the image to be processed can be judged to be a reticulate pattern image, ensuring the accuracy and reliability of reticulate pattern image identification.
Fig. 1 shows a flowchart of an implementation of the reticulate pattern image recognition method provided by the first embodiment of the present invention, detailed as follows:
s101, inputting an image to be processed into a pre-trained reticulate pattern removing model to obtain a non-reticulate pattern image to be processed, wherein the reticulate pattern removing model is a model obtained by pre-training based on reticulate pattern image samples and non-reticulate pattern image samples and is used for removing reticulate patterns in the image.
In the embodiment of the invention, the reticulate pattern removal model is used for removing the reticulate patterns in an image and is trained and constructed in advance by technicians; after processing by the reticulate pattern removal model, a corresponding non-reticulate pattern image is obtained regardless of whether the original image to be processed contains a reticulate pattern. The method for training and constructing the reticulate pattern removal model is not limited here; it may be designed by a technician, or the model may be trained and constructed with reference to the second to sixth embodiments of the present invention.
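Purely as an illustrative sketch of this step (not part of the claimed method), the following Python code assumes the trained removal model has been exported as a TorchScript module; the file name and function name are hypothetical.

```python
import torch
import torchvision.transforms.functional as TF
from PIL import Image

def remove_reticulate_pattern(image_path: str,
                              model_path: str = "descreen_G.pt") -> Image.Image:
    # Load the pre-trained removal generator G(x); TorchScript export is assumed.
    model = torch.jit.load(model_path).eval()
    img = Image.open(image_path).convert("RGB")
    x = TF.to_tensor(img).unsqueeze(0)          # HxWxC [0..255] -> 1xCxHxW [0..1]
    with torch.no_grad():
        y = model(x).clamp(0.0, 1.0).squeeze(0)
    return TF.to_pil_image(y)                   # non-reticulate pattern image
```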
S102, gray value difference calculation is carried out on the image to be processed and the non-reticulate pattern image to be processed, and reticulate pattern graphic reconstruction is carried out on the basis of the calculated gray value differences and a preset difference threshold value, so that a corresponding reticulate pattern graphic is obtained.
In order to identify whether the image to be processed contains a reticulate pattern, the embodiment of the invention first assumes that it does, and directly calculates the gray value differences of the image before and after reticulate pattern removal: each pair of corresponding pixel points in the two images is converted to gray scale and their gray value difference is computed, so as to determine the portion of the image to be processed that differs before and after removal. Meanwhile, considering that even non-reticulate-pattern pixel points may show gray value changes after processing by the reticulate pattern removal model, the embodiment of the invention presets a difference threshold to select the reticulate pattern pixel points: only pixel points whose gray value difference is greater than or equal to the difference threshold are identified as reticulate pattern pixel points. The specific size of the difference threshold can be set by a technician according to actual application requirements.
After the reticulate pattern pixel points are selected, the pattern formed by all of them is extracted as the corresponding reticulate pattern graphic, realizing reconstruction of the reticulate pattern present in the original image to be processed.
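A minimal sketch of this gray-difference reconstruction follows, assuming OpenCV/NumPy and an illustrative threshold value (the patent leaves the specific threshold to the technician):

```python
import cv2
import numpy as np

def reconstruct_pattern_mask(original: np.ndarray, descreened: np.ndarray,
                             diff_threshold: int = 15) -> np.ndarray:
    """Mark a pixel as a reticulate pattern pixel when the gray-value
    difference before/after descreening reaches the preset threshold."""
    g0 = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY).astype(np.int16)
    g1 = cv2.cvtColor(descreened, cv2.COLOR_BGR2GRAY).astype(np.int16)
    diff = np.abs(g0 - g1)                            # per-pixel gray-value difference
    return (diff >= diff_threshold).astype(np.uint8)  # 1 = suspected pattern pixel
```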
S103, counting the number of pixel points contained in the reticulate pattern graphic, and calculating the proportion value of that number to the total number of pixel points of the image to be processed; and performing pattern matching on the reticulate pattern graphic based on a preset reticulate pattern library.
S104, if the number of reticulate pattern pixel points is greater than a preset number threshold, the proportion value is greater than a preset proportion threshold, and the pattern matching succeeds, judging that the image to be processed is a reticulate pattern image.
The certificate photo reticulate pattern in the actual situation has the following characteristics:
1. the size of the certificate photo is relatively fixed, so that the number of pixel points occupied by the reticulate patterns is relatively stable.
2. In the identification photo, the common reticulate pattern covers most of the image area or the whole image area, so that the proportion of pixel points occupied by the reticulate pattern is relatively stable and high.
3. The types of reticulate pattern graphics applied to certificate photos are limited and known.
Combining the above three characteristics of certificate photo reticulate patterns, in the embodiment of the present invention the number of reticulate pattern pixel points contained in certificate photos that actually carry reticulate patterns, and their proportion of the total certificate photo pixel points, are counted in advance, and the corresponding number threshold and proportion threshold are set according to the statistics; meanwhile, a corresponding reticulate pattern library is constructed in advance based on the known certificate photo reticulate pattern types. In actual processing, the number of reticulate pattern pixel points in the drawn reticulate pattern graphic is counted, its proportion of the total pixel points of the image to be processed is calculated, and both are compared with the corresponding number threshold and proportion threshold; at the same time, the drawn graphic is matched against the reticulate pattern library. If the number of reticulate pattern pixel points is sufficient, the proportion is sufficient, and a similar pattern exists in the library, the content of the differing portion before and after removal satisfies the three characteristics of certificate photo reticulate patterns, i.e., the assumption that the differing portion is a reticulate pattern holds; the image to be processed is then directly judged to be a reticulate pattern image, completing the identification.
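For illustration only, the three-condition decision of S104 can be sketched as follows; the threshold values and the pattern-library matching step (which could be done, e.g., with template matching such as cv2.matchTemplate) are assumptions outside the patent text:

```python
import numpy as np

def judge_reticulated(pattern_mask: np.ndarray,
                      count_threshold: int,
                      ratio_threshold: float,
                      library_match_ok: bool) -> bool:
    """S103/S104: the image is judged reticulated only when all three checks
    pass; the thresholds come from offline statistics on real certificate photos."""
    n_pattern = int(pattern_mask.sum())        # number of pattern pixel points
    ratio = n_pattern / pattern_mask.size      # share of the image's total pixels
    return (n_pattern > count_threshold and
            ratio > ratio_threshold and
            library_match_ok)
```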
If any of the three conditions is not satisfied, the detection result cannot determine whether the image to be processed contains a reticulate pattern. The cause may be that the image simply contains no reticulate pattern, that a data error occurred during processing, or that the quality of the image to be processed is poor. Therefore, as an optional embodiment of the present invention, when a condition is not satisfied, the process returns to S101 to re-process the image while counting the total number of processing runs; if within a preset maximum total number of runs no result satisfies all three conditions, the process keeps looping back to S101 until the number of runs reaches the maximum, at which point the image to be processed is directly judged to be a non-reticulate pattern image. As another optional embodiment of the present invention, the image may instead be judged to be a non-reticulate pattern image as soon as any of the three conditions fails. The specific strategy can be selected and set by a technician according to actual needs, which is not limited here.
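A sketch of the first optional strategy, where the retry cap and the pipeline callable are hypothetical stand-ins for steps S101 to S104:

```python
from typing import Callable
import numpy as np

def identify_with_retries(image: np.ndarray,
                          run_pipeline: Callable[[np.ndarray], bool],
                          max_total: int = 3) -> bool:
    """Re-run the full check up to a preset maximum total number of times;
    `run_pipeline` is a hypothetical callable bundling S101-S104 and
    returning True when all three conditions are satisfied."""
    for _ in range(max_total):
        if run_pipeline(image):   # one full pass of S101-S104
            return True           # reticulate pattern image
    return False                  # judged non-reticulate after exhausting retries
```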
In the embodiment of the invention, reticulate pattern removal is performed on the image to be processed by the pre-trained reticulate pattern removal model to obtain a non-reticulate pattern image to be processed, and gray value differences are compared before and after removal, thereby determining the differing portion of the image and drawing it as a corresponding reticulate pattern graphic (i.e., assuming the differing portion is the reticulate pattern). Based on the three practical characteristics of certificate photos, the number of pixel points contained in the drawn graphic and their proportion of the total pixel points of the image to be processed are further checked, and the graphic is matched against the pattern library, realizing multi-dimensional, all-around verification of the reticulate pattern. When all checks are satisfied, the differing portion before and after removal is reticulate pattern content, so the image to be processed can be judged to be a reticulate pattern image, ensuring the accuracy and reliability of reticulate pattern image identification.
As a specific implementation of the training and construction of the reticulate pattern removal model in the first embodiment of the present invention, as shown in fig. 2, the second embodiment of the present invention includes:
s201, obtaining a plurality of pairs of textured image samples and non-textured image samples, wherein only a textured difference exists between the textured image samples and the non-textured image samples in each pair of image samples.
In the embodiment of the present invention, the image samples for model training are all paired: the reticulate pattern image sample and the non-reticulate pattern image sample in each pair are identical except for the reticulate pattern. To obtain a number of such pairs, the methods used include, but are not limited to, first obtaining the desired number of non-reticulate pattern images and then adding corresponding reticulate patterns to them, or other methods chosen by the technician, which are not limited here.
S202, constructing a reticulate pattern removal generator G(x) and a reticulate pattern addition generator F(x), wherein the output of G(x) is passed through a discrimination network Dg(x) to obtain the probability Dg(G(x)) that an image is a non-reticulate pattern image, and the output of F(x) is passed through a discrimination network Df(x) to obtain the probability Df(F(x)) that an image is a reticulate pattern image.
In the embodiment of the present invention, an initial total model is first constructed, comprising an initial reticulate pattern removal generator G(x), a reticulate pattern addition generator F(x), a non-reticulate-pattern discrimination network Dg(x) and a reticulate pattern discrimination network Df(x), for subsequent iterative training. The rules for constructing the initial model include, but are not limited to, the following: the technician sets the model frame structure, including the number of layers, the attributes of each layer, and the like, and the model parameters are randomly generated. The recognition rates of the initial G(x) and F(x) are generally low, so the embodiment of the invention improves the model recognition rate through iterative update training.
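Purely as an illustration of such an initial total model (the patent leaves the architecture to the technician), a minimal PyTorch sketch with arbitrarily chosen layer counts and widths might look like this:

```python
import torch
import torch.nn as nn

def conv_block(cin: int, cout: int) -> nn.Sequential:
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.InstanceNorm2d(cout),
                         nn.ReLU(inplace=True))

class Generator(nn.Module):
    """Shared skeleton for the removal generator G(x) and addition generator F(x)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(3, 64), conv_block(64, 64),
                                 nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid())
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class Discriminator(nn.Module):
    """Shared skeleton for Dg(x) and Df(x); outputs a probability in (0, 1)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(3, 64),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(64, 1), nn.Sigmoid())
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

G, F, Dg, Df = Generator(), Generator(), Discriminator(), Discriminator()
a = torch.rand(1, 3, 128, 128)   # stand-in for a reticulate pattern sample batch
a1, a2 = G(a), F(G(a))           # a' = G(a), a'' = F(a'): the cycle used in S203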
S203, the reticulate pattern image sample a and the non-reticulate pattern image sample b are respectively processed with G(x) and F(x) to obtain corresponding processed images; first loss values corresponding to G(x) and F(x) are calculated based on a, b, the processed images, Dg(G(a)) and Df(F(b)), and second and third loss values corresponding to Dg(x) and Df(x) are calculated based on Dg(G(a)) and Df(F(b)).
In order to evaluate the effectiveness of the reticulate pattern removal generator G(x), the reticulate pattern addition generator F(x), the non-reticulate-pattern discrimination network Dg(x) and the reticulate pattern discrimination network Df(x), in the embodiment of the present invention G(x) and F(x) are used to process the reticulate pattern image sample a and the non-reticulate pattern image sample b respectively to obtain the corresponding processed images: a is processed by G(x) to obtain an image a', a' is processed by F(x) to obtain an image a'', b is processed by F(x) to obtain an image b', and b' is processed by G(x) to obtain an image b''. The processed images in the embodiment of the present invention include, but are not limited to, one or more of a', a'', b' and b''. Since a and b differ only in the reticulate pattern, theoretically the processed images should also differ only in the reticulate pattern; by the same reasoning, theoretically a = b' and b = a'. Based on these theoretical equations, the embodiment of the present invention further calculates a first loss value for the two generators with opposite functions, and second and third loss values corresponding to Dg(x) and Df(x). The specific loss functions are not limited here and can be selected or designed by a technician according to requirements, or reference may be made to the second and third embodiments of the present invention.
S204, calculating the image difference degrees between a, b and the processed images.
Although theoretically a = b' and b = a', in practice the processing effect of insufficiently trained G(x) and F(x) is not necessarily good, so there are certain differences between the actual a and a'' and b', and between b and b'' and a'; these differences directly reflect the training effect of G(x) and F(x). The embodiment of the present invention therefore calculates the image difference degrees between a, b and the processed images as one quantified dimension of the training effect of G(x) and F(x). The specific calculation method for the image difference degree is not limited here; it includes, but is not limited to, calculating Euclidean distances between a, a'' and b', and between b, b'' and a', or it may be designed by a technician according to requirements. It should be noted that, depending on the calculation method finally selected, the processed images actually used may differ; for example, only a, b, a' and b' may be used, or a, b, a', b', a'' and b'' may all be used.
S205, judging whether the first loss value, the second loss value and the third loss value are larger than the corresponding preset loss value threshold respectively, and judging whether the image difference degree is larger than the preset difference degree threshold.
In order to make the iterative training of G(x), F(x), Dg(x) and Df(x) reach the expected effect, the embodiment of the present invention presets one or more loss value thresholds, and a difference degree threshold, for judging the validity of the three loss values and of the image difference degree. The loss value thresholds measure the expected training effect of G(x), F(x), Dg(x) and Df(x), and their number is set by technicians according to actual requirements: when the expected training effects for the generators and the discrimination networks differ, an independent loss value threshold can be set for each loss value; alternatively, a single shared loss value threshold can be set. The specific values of the loss value thresholds and the difference degree threshold can likewise be set by technicians according to actual requirements; the larger these thresholds, the lower the requirement on the expected training effect of the generators and discrimination networks.
S206, if the second loss value and/or the third loss value is greater than or equal to the corresponding preset loss value threshold, iteratively updating Dg(x) and Df(x).
S207, if the first loss value is greater than or equal to the corresponding preset loss value threshold and/or the image difference degree is greater than or equal to the preset difference degree threshold, iteratively updating G(x) and F(x).
When the second loss value or the third loss value fails to meet the requirement of the loss value threshold, the discrimination effect of Dg(x) and Df(x) has not yet reached the expected effect, so the process returns to iteratively update Dg(x) and Df(x). Similarly, when the first loss value is too large and fails to meet the requirement of the loss value threshold, G(x) and F(x) have not reached the expected effect, and the process returns to iteratively update G(x) and F(x).
S208, if the first loss value, the second loss value and the third loss value are all smaller than the corresponding preset loss value thresholds, and the image difference degree is smaller than the preset difference degree threshold, completing model training of the reticulate pattern removal generator G(x) and obtaining the reticulate pattern removal model.
Because G(x), F(x), Dg(x) and Df(x) are mutually adversarial and interdependent, even if the second and third loss values both satisfy the requirement, or the first loss value satisfies the requirement, it cannot be directly concluded that the training of Dg(x) and Df(x), or of G(x) and F(x), is complete. In the embodiment of the present invention, the training of G(x), F(x), Dg(x) and Df(x) is therefore complete only when the first loss value, the second loss value, the third loss value and the image difference degree satisfy their requirements simultaneously; at that point the finally usable reticulate pattern removal generator G(x), i.e. the reticulate pattern removal model of the first embodiment of the present invention, is obtained.
It should be noted that although the final aim is to train and construct a G(x) capable of removing reticulate patterns so as to obtain the reticulate pattern removal model of the first embodiment of the present invention, the training effect of G(x) is related not only to the functionally opposite F(x), but also to the accuracy of the discrimination networks Dg(x) and Df(x); only when G(x), F(x), Dg(x) and Df(x) have all been trained to the desired effect is the final G(x) accurate and effective. The embodiment of the present invention therefore iteratively updates G(x) and F(x) together, and although the updating of Dg(x) and Df(x) appears independent (depending only on the second and third loss values, with no reference to the first loss value or the image difference degree), the second and third loss values themselves depend on the processing effect of the continuously updated G(x) and F(x) on the image samples. The updating of Dg(x) and Df(x) and the updating of G(x) and F(x) are thus inherently linked and cannot simply be regarded as two independent iterative updating steps.
In the embodiment of the invention, two generators with opposite functions and two corresponding discrimination networks are constructed; paired image samples are processed by these generators and discrimination networks, and loss values and image difference degrees are calculated from the processing results to quantify their training effect. Finally, the generators and discrimination networks are respectively updated iteratively according to whether the loss values and the image difference degree meet the expected effect, until the expected effect is reached, realizing effective training of the reticulate pattern removal model.
As a specific implementation of calculating the first loss value in the second embodiment of the present invention, the method includes:
calculating the first loss value based on formula (1), where a' is the image obtained by processing the reticulate pattern image sample a with G(x), a'' is the image obtained by processing a' with F(x), b' is the image obtained by processing the non-reticulate pattern image sample b with F(x), and b'' is the image obtained by processing b' with G(x):
Lg = -(log10(Dg(G(a))) - log10(Df(F(b)))) + Lcyc,
Lcyc = L1Loss(a'', a) × lambda_a + L1Loss(b'', b) × lambda_b +
L1Loss(a, b') × lambda_c + L1Loss(b, a') × lambda_d    (1)
where Lg is the first loss value, L1Loss(x, y) denotes the Euclidean distance between two images, and lambda_a, lambda_b, lambda_c and lambda_d denote preset weights.
In the embodiment of the present invention, a'' is compared with a, b'' with b, a with b', and b with a' to calculate the corresponding Euclidean distances, obtaining quantified difference degrees in four dimensions. Meanwhile, the more times an image sample passes through the generators, the greater the probability of deviation and the harder it becomes to match the original sample, so the four dimensions differ in matching difficulty. The specific values of lambda_a, lambda_b, lambda_c and lambda_d can be set by a technician after evaluating the matching difficulty of each dimension; preferably, lambda_a and lambda_b are both greater than lambda_c and lambda_d.
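As an illustrative sketch of formula (1): the patent describes L1Loss as a Euclidean distance, while this sketch uses the mean absolute difference (the standard L1 loss) as an assumed stand-in, and the default weights are hypothetical values consistent with the stated preference lambda_a, lambda_b > lambda_c, lambda_d.

```python
import torch

def first_loss(dg_ga: torch.Tensor, df_fb: torch.Tensor,
               a, a_cyc, b, b_cyc, a_fake, b_fake,
               lambdas=(10.0, 10.0, 1.0, 1.0)) -> torch.Tensor:
    """Formula (1). a_fake = a' = G(a), a_cyc = a'' = F(a'),
    b_fake = b' = F(b), b_cyc = b'' = G(b')."""
    l1 = lambda x, y: (x - y).abs().mean()   # assumed reading of L1Loss
    lcyc = (l1(a_cyc, a) * lambdas[0] + l1(b_cyc, b) * lambdas[1]
            + l1(a, b_fake) * lambdas[2] + l1(b, a_fake) * lambdas[3])
    return -(torch.log10(dg_ga) - torch.log10(df_fb)) + lcyc
```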
As a specific implementation manner of calculating the second loss value and the third loss value in the second embodiment of the present invention, the implementation manner includes:
calculating a second loss value Ldg and a third loss value Ldf based on equation (2) and equation (3):
Ldg=-log10(Dg(G(a))-0.5)+log10(1.5-Dg(G(a))) (2)
Ldf=-log10(Df(F(b))-0.5)+log10(1.5-Df(F(b))) (3)
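For illustration, formulas (2) and (3) translate directly into a few lines of PyTorch; note that, as written, they require the discriminator outputs to exceed 0.5, otherwise the log10 terms are undefined:

```python
import torch

def discriminator_losses(dg_ga: torch.Tensor, df_fb: torch.Tensor):
    """Second and third loss values per formulas (2) and (3)."""
    ldg = -torch.log10(dg_ga - 0.5) + torch.log10(1.5 - dg_ga)
    ldf = -torch.log10(df_fb - 0.5) + torch.log10(1.5 - df_fb)
    return ldg, ldf
```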
as a specific implementation manner of calculating the image difference in the second embodiment of the present invention, as shown in fig. 3, the third embodiment of the present invention includes:
s301, performing gray value difference operation on a and b, a and a ', and b', respectively, and performing texture extraction based on the obtained gray value difference and a preset difference threshold to obtain a corresponding first texture image, a second texture image, and a third texture image, where a 'is an image obtained by processing a texture image sample a with g (x), and b' is an image obtained by processing a texture image sample b with f (x).
S302, calculating the image distance between the first reticulate pattern image and the second reticulate pattern image, and the image distance between the first reticulate pattern image and the third reticulate pattern image, and calculating the difference of the two image distances to obtain the image difference degree.
Since theoretically a = b' and b = a', the first reticulate pattern image is a standard reticulate pattern image, the second is the differing-portion image before and after G(x) processing, and the third is the differing-portion image before and after F(x) processing. Performing gray value difference operations on a and a', and on b and b', therefore extracts the actual processing effect of G(x) and F(x); calculating the image distances between the second and third reticulate pattern images and the first realizes a quantitative evaluation of that effect; and finally calculating the difference between the two image distances yields the image difference degree required by the second embodiment of the invention. The image distance is the reciprocal of the image similarity; the specific image distance calculation method is not limited here and can be set by a technician or taken from other embodiments of the present invention.
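An illustrative NumPy sketch of S301/S302 follows; it reuses reconstruct_pattern_mask from the S102 sketch above, anticipates the distance of formula (4) given below, and simplifies by computing distances over the binary pattern masks. The epsilon guard is an added assumption to avoid division by zero.

```python
import numpy as np

def image_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Formula (4): reciprocal of the accumulated per-pixel difference."""
    return 1.0 / (np.abs(p.astype(np.int64) - q.astype(np.int64)).sum() + 1e-9)

def image_difference(a, b, a_fake, b_fake, diff_threshold: int = 15) -> float:
    # reconstruct_pattern_mask is defined in the S102 sketch above.
    t1 = reconstruct_pattern_mask(a, b, diff_threshold)        # standard pattern: a vs b
    t2 = reconstruct_pattern_mask(a, a_fake, diff_threshold)   # a vs a' = G(a)
    t3 = reconstruct_pattern_mask(b, b_fake, diff_threshold)   # b vs b' = F(b)
    return abs(image_distance(t1, t2) - image_distance(t1, t3))
```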
As a specific implementation manner of calculating the image distance in the third embodiment of the present invention, the implementation manner includes:
the image distance L is calculated based on formula (4):
L = 1 / (Σ_{i=1..n} |x_i - y_i|)    (4)
where n is the total number of pixel points of the first reticulate pattern image (equivalently, of the second reticulate pattern image), and x_i and y_i are respectively the pixel values of the i-th pixel point of the first and second reticulate pattern images.
In the embodiment of the invention, the pixel points of the two reticulate pattern images are compared one by one to accumulate their pixel value differences, and the reciprocal of the accumulated difference is taken to obtain the required image distance. Similarly, the image distance between the first reticulate pattern image and the third reticulate pattern image can also be calculated according to formula (4), which is not repeated here.
Fig. 4 shows a structural block diagram of the reticulate pattern image recognition device provided by the embodiment of the present invention, corresponding to the method of the above embodiment; for convenience of explanation, only the parts related to the embodiment of the present invention are shown. The reticulate pattern image recognition device illustrated in fig. 4 may be the execution subject of the reticulate pattern image recognition method provided in the first embodiment.
Referring to fig. 4, the reticulate pattern image recognition device includes:
and the reticulation removing module 41 is configured to input the image to be processed to a trained reticulation removing model in advance to obtain a reticulation-free image to be processed, where the reticulation removing model is a model trained in advance based on reticulation image samples and reticulation-free image samples and is used to remove reticulation in the image.
And the reticulate pattern reconstruction module 42 is configured to perform gray value difference calculation on the image to be processed and the reticulate pattern-free image to be processed, and perform reticulate pattern reconstruction based on the calculated gray value difference and a preset difference threshold value to obtain a corresponding reticulate pattern.
And the feature processing module 43 is configured to count the number of pixel points included in the checkered graph, and calculate a ratio of the number of pixel points to the total number of pixel points of the image to be processed. And carrying out pattern matching on the reticulate pattern based on a preset reticulate pattern library.
And the reticulate pattern judging module 44 is configured to judge that the image to be processed is a reticulate pattern image if the number of reticulate pattern pixels is greater than a preset number threshold, the proportion value is greater than a preset proportion threshold, and the reticulate pattern is successfully matched.
Further, the reticulate pattern image recognition device also includes:
a sample acquiring module, configured to acquire a plurality of pairs of reticulate pattern image samples and non-reticulate pattern image samples, where the samples in each pair differ only in the reticulate pattern;
a generator construction module, configured to construct a reticulate pattern removal generator G(x) and a reticulate pattern addition generator F(x), where the output of G(x) is passed through a discrimination network Dg(x) to obtain the probability Dg(G(x)) that an image is a non-reticulate pattern image, and the output of F(x) is passed through a discrimination network Df(x) to obtain the probability Df(F(x)) that an image is a reticulate pattern image;
a loss value calculating module, configured to process the reticulate pattern image sample a and the non-reticulate pattern image sample b with G(x) and F(x) respectively to obtain corresponding processed images, calculate first loss values corresponding to G(x) and F(x) based on a, b, the processed images, Dg(G(a)) and Df(F(b)), and calculate second and third loss values corresponding to Dg(x) and Df(x) based on Dg(G(a)) and Df(F(b));
a difference calculating module, configured to calculate the image difference degrees between a, b and the processed images;
a parameter comparison module, configured to judge whether the first loss value, the second loss value and the third loss value are greater than their respectively corresponding preset loss value thresholds, and to judge whether the image difference degree is greater than a preset difference degree threshold;
an iteration updating module, configured to iteratively update Dg(x) and Df(x) if the second loss value and/or the third loss value is greater than or equal to the corresponding preset loss value threshold, and to iteratively update G(x) and F(x) if the first loss value is greater than or equal to the corresponding preset loss value threshold and/or the image difference degree is greater than or equal to the preset difference degree threshold;
and a model output module, configured to complete model training of the reticulate pattern removal generator G(x) and obtain the reticulate pattern removal model if the first loss value, the second loss value and the third loss value are all smaller than the corresponding preset loss value thresholds and the image difference degree is smaller than the preset difference degree threshold.
Further, the loss value calculating module includes:
calculating the first loss value based on formula (1), where a' is the image obtained by processing the reticulate pattern image sample a with G(x), a'' is the image obtained by processing a' with F(x), b' is the image obtained by processing the non-reticulate pattern image sample b with F(x), and b'' is the image obtained by processing b' with G(x):
Lg = -(log10(Dg(G(a))) - log10(Df(F(b)))) + Lcyc,
Lcyc = L1Loss(a'', a) × lambda_a + L1Loss(b'', b) × lambda_b +
L1Loss(a, b') × lambda_c + L1Loss(b, a') × lambda_d    (1)
where Lg is the first loss value, L1Loss(x, y) denotes the Euclidean distance between two images, and lambda_a, lambda_b, lambda_c and lambda_d denote preset weights.
Further, the loss value calculation module further includes:
calculating a second loss value Ldg and a third loss value Ldf based on equation (2) and equation (3):
Ldg=-log10(Dg(G(a))-0.5)+log10(1.5-Dg(G(a))) (2)
Ldf=-log10(Df(F(b))-0.5)+log10(1.5-Df(F(b))) (3)
further, a difference calculation module comprising:
and the texture image extracting module is used for respectively carrying out gray value difference value operation on a and b, a and a ', b and b', and carrying out texture extraction on the basis of the obtained gray value difference value and the preset difference threshold value to obtain a corresponding first texture image, a second texture image and a third texture image, wherein a 'is an image obtained by processing a texture image sample a by using G (x), and b' is an image obtained by processing a texture image sample b by using F (x).
And the image difference calculating module is used for calculating the image distance between the first textured image and the second textured image and the image distance between the first textured image and the third textured image, and calculating the difference value of the two obtained image distances to obtain the image difference.
Further, the image difference calculating module includes:
calculating the image distance L based on formula (4):
L = 1 / (Σ_{i=1..n} |x_i - y_i|)    (4)
where n is the total number of pixel points of the first reticulate pattern image (equivalently, of the second reticulate pattern image), and x_i and y_i are respectively the pixel values of the i-th pixel point of the first and second reticulate pattern images.
The process by which each module in the reticulate pattern image recognition device provided by the embodiment of the present invention implements its function may specifically refer to the description of the first embodiment shown in fig. 1, and is not repeated here.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
It will also be understood that, although the terms first, second, etc. may be used herein to describe various elements in some embodiments of the invention, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first table may be named a second table, and similarly, a second table may be named a first table, without departing from the scope of various described embodiments. The first table and the second table are both tables, but they are not the same table.
Fig. 5 is a schematic diagram of terminal equipment according to an embodiment of the present invention. As shown in fig. 5, the terminal device 5 of this embodiment includes a processor 50 and a memory 51, the memory 51 storing a computer program 52 executable on the processor 50. The processor 50, when executing the computer program 52, implements the steps in the embodiments of the reticulate pattern image recognition method described above, such as steps S101 to S104 shown in fig. 1. Alternatively, the processor 50, when executing the computer program 52, implements the functions of the modules/units in the above device embodiments, such as the functions of the modules 41 to 44 shown in fig. 4.
The terminal device 5 may be a desktop computer, a notebook, a palm computer, a cloud server, or another computing device. The terminal device may include, but is not limited to, the processor 50 and the memory 51. It will be appreciated by those skilled in the art that fig. 5 is merely an example of the terminal device 5 and does not constitute a limitation of it; the terminal device may include more or fewer components than shown, some components may be combined, or different components may be used. For example, the terminal device may also include input and output devices, a network access device, a bus, etc.
The Processor 50 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 51 may be an internal storage unit of the terminal device 5, such as a hard disk or a memory of the terminal device 5. The memory 51 may also be an external storage device of the terminal device 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 5. Further, the memory 51 may also include both an internal storage unit and an external storage device of the terminal device 5. The memory 51 is used for storing the computer program and other programs and data required by the terminal device. The memory 51 may also be used to temporarily store data that has been transmitted or is to be transmitted.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A reticulate pattern image identification method, comprising:
inputting an image to be processed into a pre-trained reticulate pattern removal model to obtain a non-reticulate pattern image to be processed, wherein the reticulate pattern removal model is a model obtained by training based on reticulate pattern image samples and non-reticulate pattern image samples in advance and is used for removing reticulate patterns in the image;
performing gray value difference calculation on the image to be processed and the non-reticulate pattern image to be processed, and performing reticulate pattern graphic reconstruction based on the calculated gray value differences and a preset difference threshold value to obtain a corresponding reticulate pattern graphic;
counting the number of pixel points contained in the reticulate pattern, and calculating the proportion value of the number of the pixel points to the total number of the pixel points of the image to be processed; carrying out pattern matching on the reticulate pattern based on a preset reticulate pattern library;
and if the number of the reticulate pattern pixels is larger than a preset number threshold, the proportion value is larger than a preset proportion threshold, and the reticulate pattern is successfully matched, judging that the image to be processed is a reticulate pattern image.
2. The reticulate pattern image identification method of claim 1, wherein the training of the reticulate pattern removal model comprises:
acquiring a plurality of pairs of reticulate pattern image samples and non-reticulate pattern image samples, wherein the samples in each pair differ only in the reticulate pattern;
constructing a reticulate pattern removal generator G(x) and a reticulate pattern addition generator F(x), wherein the output of G(x) is passed through a discrimination network Dg(x) to obtain the probability Dg(G(x)) that an image is a non-reticulate pattern image, and the output of F(x) is passed through a discrimination network Df(x) to obtain the probability Df(F(x)) that an image is a reticulate pattern image;
respectively processing a reticulate pattern image sample a and a non-reticulate pattern image sample b with G(x) and F(x) to obtain corresponding processed images, calculating first loss values corresponding to G(x) and F(x) based on a, b, the processed images, Dg(G(a)) and Df(F(b)), and calculating second and third loss values corresponding to Dg(x) and Df(x) based on Dg(G(a)) and Df(F(b));
calculating the image difference degree between the a and the b and the processed image;
judging whether the first loss value, the second loss value and the third loss value are greater than preset loss value thresholds corresponding to the first loss value, the second loss value and the third loss value respectively, and judging whether the image difference degree is greater than a preset difference degree threshold;
iteratively updating Dg(x) and Df(x) if the second loss value and/or the third loss value is greater than or equal to the corresponding preset loss value threshold; iteratively updating G(x) and F(x) if the first loss value is greater than or equal to the corresponding preset loss value threshold and/or the image difference degree is greater than or equal to the preset difference degree threshold;
and if the first loss value, the second loss value and the third loss value are all smaller than corresponding preset loss value thresholds, and the image difference is smaller than the preset difference threshold, completing model training of the reticulate pattern removal generator G (x), and obtaining the reticulate pattern removal model.
3. The reticulate pattern image identification method of claim 2, wherein respectively processing the reticulate pattern image sample a and the non-reticulate pattern image sample b with G(x) and F(x) to obtain corresponding processed images, and calculating the first loss values corresponding to G(x) and F(x) based on a, b, the processed images, Dg(G(a)) and Df(F(b)), comprises:
calculating the first loss value based on the following formula, where a' is the image obtained by processing a with G(x), a'' is the image obtained by processing a' with F(x), b' is the image obtained by processing b with F(x), and b'' is the image obtained by processing b' with G(x):
Lg = -(log10(Dg(G(a))) - log10(Df(F(b)))) + Lcyc,
Lcyc = L1Loss(a'', a) × lambda_a + L1Loss(b'', b) × lambda_b + L1Loss(a, b') × lambda_c + L1Loss(b, a') × lambda_d
where Lg is the first loss value, L1Loss(x, y) denotes the Euclidean distance between two images, and lambda_a, lambda_b, lambda_c and lambda_d denote preset weights.
4. The reticulate pattern image identification method of claim 2, wherein calculating the second and third loss values corresponding to Dg(x) and Df(x) based on Dg(G(a)) and Df(F(b)) comprises:
calculating a second loss value Ldg and a third loss value Ldf based on the following equation:
Ldg=-log10(Dg(G(a))-0.5)+log10(1.5-Dg(G(a)))
Ldf=-log10(Df(F(b))-0.5)+log10(1.5-Df(F(b)))。
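These formulas transcribe directly into Python (PyTorch assumed); note they are only finite while the probabilities stay inside (0.5, 1.5), which a real implementation would need to guard:

    import torch

    def discriminator_losses(dg_ga, df_fb):
        # Ldg and Ldf exactly as claimed; dg_ga = Dg(G(a)), df_fb = Df(F(b)).
        ldg = -torch.log10(dg_ga - 0.5) + torch.log10(1.5 - dg_ga)
        ldf = -torch.log10(df_fb - 0.5) + torch.log10(1.5 - df_fb)
        return ldg, ldf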
5. The reticulate pattern image recognition method of claim 2, wherein the calculating of the image difference degree between a and b and the processed images comprises:
performing gray value difference operations on a and b, on a and a', and on b and b' respectively, and performing reticulate pattern extraction based on the obtained gray value differences and the preset difference threshold to obtain a corresponding first reticulate pattern image, second reticulate pattern image and third reticulate pattern image, wherein a' is the image obtained by processing the reticulate pattern image sample a with G(x), and b' is the image obtained by processing the non-reticulate-pattern image sample b with F(x);
and calculating the image distance between the first reticulate pattern image and the second reticulate pattern image and the image distance between the first reticulate pattern image and the third reticulate pattern image, and taking the difference of the two image distances to obtain the image difference degree.
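A NumPy sketch of this step follows, with an assumed difference threshold of 30 gray levels (the claims leave the threshold as a preset) and np.linalg.norm standing in for the claim-6 image distance:

    import numpy as np

    def extract_pattern(x, y, diff_threshold=30):
        # Gray value difference followed by thresholding: only pixels whose
        # difference exceeds the preset threshold survive as pattern pixels.
        d = np.abs(x.astype(np.int32) - y.astype(np.int32))
        return np.where(d > diff_threshold, d, 0).astype(np.uint8)

    def image_difference_degree(a, b, a1, b1, diff_threshold=30):
        t1 = extract_pattern(a, b, diff_threshold)    # first reticulate pattern image
        t2 = extract_pattern(a, a1, diff_threshold)   # second reticulate pattern image
        t3 = extract_pattern(b, b1, diff_threshold)   # third reticulate pattern image
        d12 = np.linalg.norm(t1.astype(np.float64) - t2.astype(np.float64))
        d13 = np.linalg.norm(t1.astype(np.float64) - t3.astype(np.float64))
        return abs(d12 - d13)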
6. The reticulate pattern image recognition method of claim 5, wherein the calculating of the image distance between the first reticulate pattern image and the second reticulate pattern image comprises:
calculating the image distance L based on the following formula:
L = √(Σᵢ₌₁ⁿ (xᵢ − yᵢ)²)
wherein n is the total number of pixel points of the first reticulate pattern image or the second reticulate pattern image, and xᵢ and yᵢ are respectively the pixel values of the i-th pixel point of the first reticulate pattern image and the second reticulate pattern image.
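Under this Euclidean reading, the image distance reduces to a few lines of NumPy (a sketch; x and y are the two reticulate pattern images as arrays):

    import numpy as np

    def image_distance(x, y):
        # L = sqrt(sum over i of (x_i - y_i)^2), with both images flattened
        # so that i runs over all n pixel points.
        x = x.astype(np.float64).ravel()
        y = y.astype(np.float64).ravel()
        return float(np.sqrt(np.sum((x - y) ** 2)))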
7. A reticulate pattern image recognition device, comprising:
the reticulate pattern removal module is used for inputting the image to be processed into a pre-trained reticulate pattern removal model to obtain a non-reticulate-pattern image to be processed, wherein the reticulate pattern removal model is trained in advance based on reticulate pattern image samples and non-reticulate-pattern image samples and is used for removing the reticulate pattern from an image;
the reticulate pattern reconstruction module is used for performing gray value difference calculation on the image to be processed and the non-reticulate-pattern image to be processed, and performing reticulate pattern reconstruction based on the calculated gray value difference and a preset difference threshold to obtain the corresponding reticulate pattern;
the characteristic processing module is used for counting the number of pixel points contained in the reticulate pattern, calculating the proportion value of the number of the pixel points to the total number of pixel points of the image to be processed, and performing pattern matching on the reticulate pattern based on a preset reticulate pattern library;
and the reticulate pattern judging module is used for judging that the image to be processed is a reticulate pattern image if the number of reticulate pattern pixel points is greater than the preset number threshold, the proportion value is greater than the preset proportion threshold, and the pattern matching succeeds.
8. The reticulate pattern image recognition device of claim 7, further comprising:
the sample acquisition module is used for acquiring a plurality of pairs of reticulate pattern image samples and non-reticulate-pattern image samples, wherein the two images in each pair differ only in the reticulate pattern;
the generator construction module is used for constructing a reticulate pattern removal generator G(x) and a reticulate pattern addition generator F(x), and setting the output of G(x) to pass through a discrimination network Dg(x) to obtain the probability Dg(G(x)) that the image belongs to the non-reticulate-pattern images, and the output of F(x) to pass through a discrimination network Df(x) to obtain the probability Df(F(x)) that the image belongs to the reticulate pattern images;
the loss value calculation module is used for processing a reticulate pattern image sample a and a non-reticulate-pattern image sample b with G(x) and F(x) respectively to obtain corresponding processed images, calculating a first loss value corresponding to G(x) and F(x) based on a, b, the processed images, Dg(G(a)) and Df(F(b)), and calculating a second loss value and a third loss value corresponding to Dg(x) and Df(x) based on Dg(G(a)) and Df(F(b));
the difference calculation module is used for calculating the image difference degree between a and b and the processed images;
the parameter comparison module is used for judging whether the first loss value, the second loss value and the third loss value are each greater than their respective preset loss value thresholds, and judging whether the image difference degree is greater than a preset difference degree threshold;
the iterative update module is used for iteratively updating Dg(x) and Df(x) if the second loss value and/or the third loss value is greater than or equal to its corresponding preset loss value threshold, and for iteratively updating G(x) and F(x) if the first loss value is greater than or equal to its corresponding preset loss value threshold and/or the image difference degree is greater than or equal to the preset difference degree threshold;
and the model output module is used for completing the model training of the reticulate pattern removal generator G(x) to obtain the reticulate pattern removal model if the first loss value, the second loss value and the third loss value are all smaller than their corresponding preset loss value thresholds and the image difference degree is smaller than the preset difference degree threshold.
9. A terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method according to any one of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
CN201910736543.2A 2019-08-09 2019-08-09 Reticulate pattern image recognition method and device and terminal equipment Active CN110647805B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910736543.2A CN110647805B (en) 2019-08-09 2019-08-09 Reticulate pattern image recognition method and device and terminal equipment
PCT/CN2019/118652 WO2021027163A1 (en) 2019-08-09 2019-11-15 Reticulate pattern-containing image recognition method and apparatus, and terminal device and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910736543.2A CN110647805B (en) 2019-08-09 2019-08-09 Reticulate pattern image recognition method and device and terminal equipment

Publications (2)

Publication Number Publication Date
CN110647805A true CN110647805A (en) 2020-01-03
CN110647805B CN110647805B (en) 2023-10-31

Family

ID=68990095

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910736543.2A Active CN110647805B (en) 2019-08-09 2019-08-09 Reticulate pattern image recognition method and device and terminal equipment

Country Status (2)

Country Link
CN (1) CN110647805B (en)
WO (1) WO2021027163A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112819016A (en) * 2021-02-19 2021-05-18 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium

Citations (6)

Publication number Priority date Publication date Assignee Title
CN105760859A (en) * 2016-03-22 2016-07-13 中国科学院自动化研究所 Method and device for identifying reticulate pattern face image based on multi-task convolutional neural network
CN108734673A (en) * 2018-04-20 2018-11-02 平安科技(深圳)有限公司 Descreening systematic training method, descreening method, apparatus, equipment and medium
CN109426775A (en) * 2017-08-25 2019-03-05 株式会社日立制作所 The method, device and equipment of reticulate pattern in a kind of detection facial image
WO2019085403A1 (en) * 2017-10-31 2019-05-09 平安科技(深圳)有限公司 Intelligent face recognition comparison method, electronic device, and computer readable storage medium
CN109871755A (en) * 2019-01-09 2019-06-11 中国平安人寿保险股份有限公司 A kind of auth method based on recognition of face
CN110032931A (en) * 2019-03-01 2019-07-19 阿里巴巴集团控股有限公司 Generate confrontation network training, reticulate pattern minimizing technology, device and electronic equipment

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
ES2105936B1 (en) * 1994-03-21 1998-06-01 I D Tec S L IMPROVEMENTS INTRODUCED IN INVENTION PATENT N. P-9400595/8 BY: BIOMETRIC PROCEDURE FOR SECURITY AND IDENTIFICATION AND CREDIT CARDS, VISAS, PASSPORTS AND FACIAL RECOGNITION.
CN106548159A (en) * 2016-11-08 2017-03-29 中国科学院自动化研究所 Reticulate pattern facial image recognition method and device based on full convolutional neural networks
CN107766844A (en) * 2017-11-13 2018-03-06 杭州有盾网络科技有限公司 Method, apparatus, equipment of a kind of reticulate pattern according to recognition of face


Non-Patent Citations (1)

Title
WU YAN et al.: "Method for identifying walnut nutrient deficiency symptoms based on the RGB color model", Journal of Beihua University (Natural Science Edition), vol. 14, no. 04, pages 493-496 *

Also Published As

Publication number Publication date
WO2021027163A1 (en) 2021-02-18
CN110647805B (en) 2023-10-31

Similar Documents

Publication Publication Date Title
CN109886997B (en) Identification frame determining method and device based on target detection and terminal equipment
US11176418B2 (en) Model test methods and apparatuses
CN109272016B (en) Target detection method, device, terminal equipment and computer readable storage medium
CN111383186B (en) Image processing method and device and terminal equipment
WO2019200702A1 (en) Descreening system training method and apparatus, descreening method and apparatus, device, and medium
WO2021258699A1 (en) Image identification method and apparatus, and electronic device and computer-readable medium
CN109063776B (en) Image re-recognition network training method and device and image re-recognition method and device
CN110032931B (en) Method and device for generating countermeasure network training and removing reticulation and electronic equipment
CN110675334A (en) Image enhancement method and device
CN107908998B (en) Two-dimensional code decoding method and device, terminal equipment and computer readable storage medium
WO2023065744A1 (en) Face recognition method and apparatus, device and storage medium
CN110765843A (en) Face verification method and device, computer equipment and storage medium
CN113221601A (en) Character recognition method, device and computer readable storage medium
CN110647805B (en) Reticulate pattern image recognition method and device and terminal equipment
CN115689947B (en) Image sharpening method, system, electronic device and storage medium
CN107403199B (en) Data processing method and device
CN113239738B (en) Image blurring detection method and blurring detection device
CN113792671A (en) Method and device for detecting face synthetic image, electronic equipment and medium
CN113128278A (en) Image identification method and device
TWI818496B (en) Fingerprint recognition method, fingerprint module, and electronic device
CN113269796B (en) Image segmentation method and device and terminal equipment
CN113240110B (en) Method, apparatus and computer readable storage medium for determining model
CN110321884B (en) Method and device for identifying serial number
CN116934850A (en) Feature point determining method and device, electronic equipment and readable storage medium
CN113191877A (en) Data feature acquisition method and system and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant