WO2021027163A1 - Reticulate pattern-containing image recognition method and apparatus, and terminal device and medium - Google Patents

Reticulate pattern-containing image recognition method and apparatus, and terminal device and medium

Info

Publication number
WO2021027163A1
WO2021027163A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
loss value
texture
mesh
processed
Prior art date
Application number
PCT/CN2019/118652
Other languages
French (fr)
Chinese (zh)
Inventor
徐玲玲 (Xu Lingling)
Original Assignee
平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Publication of WO2021027163A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]

Definitions

  • This application belongs to the field of data processing technology, and in particular relates to a method, an apparatus, a terminal device, and a medium for recognizing an image containing a mesh pattern.
  • ID photos in the public security system carry mesh patterns. When face recognition and similar processing is performed, the mesh greatly reduces recognition accuracy, so mesh-patterned ID photos must have the mesh removed before they can be used. Before removing the mesh, it is necessary to determine whether an ID photo contains a mesh pattern. Although some methods for recognizing mesh-patterned images exist in the prior art, their recognition accuracy is not ideal. A method that can accurately identify whether an image contains a mesh pattern is therefore needed.
  • The embodiments of the present application provide a method, an apparatus, a terminal device, and a medium for recognizing mesh-patterned images, which can solve the problem of low accuracy in recognizing such images.
  • A first aspect of the embodiments of the present application provides a method for recognizing a mesh-patterned image, including:
  • inputting the image to be processed into a pre-trained mesh removal model, the mesh removal model being a model obtained by pre-training on mesh image samples and mesh-free image samples and used to remove the mesh pattern in an image; and
  • determining that the image to be processed is a mesh image if the number of mesh pixels is greater than a preset number threshold, the ratio is greater than a preset ratio threshold, and the mesh pattern is matched successfully.
  • A second aspect of the embodiments of the present application provides a mesh-patterned image recognition apparatus, including:
  • a mesh removal module, used to input the image to be processed into a pre-trained mesh removal model to obtain a mesh-free image to be processed, the mesh removal model being a model pre-trained on mesh image samples and mesh-free image samples and used to remove the mesh pattern in an image;
  • a mesh reconstruction module, used to calculate the gray-value difference between the image to be processed and the mesh-free image to be processed, and to perform mesh reconstruction based on the calculated gray-value difference and a preset difference threshold to obtain the corresponding mesh pattern;
  • a feature processing module, used to count the number of pixels contained in the mesh pattern, calculate the ratio of that number to the total number of pixels in the image to be processed, and perform graphic matching on the mesh pattern against a preset mesh pattern library; and
  • a mesh discrimination module, configured to determine that the image to be processed is a mesh image if the number of mesh pixels is greater than a preset number threshold, the ratio is greater than a preset ratio threshold, and the mesh pattern is matched successfully.
  • A third aspect of the embodiments of the present application provides a terminal device, including a memory and a processor,
  • where the memory stores computer-readable instructions executable on the processor,
  • and the processor implements the following steps when executing the computer-readable instructions:
  • the mesh removal model being a model obtained by pre-training on mesh image samples and mesh-free image samples and used to remove the mesh pattern in an image; and
  • determining that the image to be processed is a mesh image if the number of mesh pixels is greater than a preset number threshold, the ratio is greater than a preset ratio threshold, and the mesh pattern is matched successfully.
  • A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing computer-readable instructions, where the computer-readable instructions, when executed by at least one processor, implement the following steps:
  • the mesh removal model being a model obtained by pre-training on mesh image samples and mesh-free image samples and used to remove the mesh pattern in an image; and
  • determining that the image to be processed is a mesh image if the number of mesh pixels is greater than a preset number threshold, the ratio is greater than a preset ratio threshold, and the mesh pattern is matched successfully.
  • In the above scheme, the pre-trained mesh removal model removes the mesh from the image to be processed to obtain a mesh-free image to be processed; the gray values of the image before and after removal are then compared, and the difference between the two is drawn as the corresponding mesh pattern (that is, the difference is assumed to be the mesh).
  • In practice, the mesh pattern of an ID photo has the following characteristics: 1. the size of an ID photo is relatively fixed, so the number of pixels occupied by the mesh is relatively stable; 2. the mesh generally covers most or all of the image, so the proportion of pixels occupied by the mesh is relatively stable and high; 3. the types of mesh patterns placed on ID photos are limited and known.
  • Based on these practical characteristics, the embodiments of this application further verify the number of pixels contained in the drawn mesh pattern and its proportion of the total pixels in the image to be processed, and match the mesh pattern, thereby checking the mesh in multiple dimensions. When all checks are satisfied, the difference between the image to be processed before and after mesh removal is mesh content, so the image to be processed can be determined to be a mesh image, ensuring that mesh image recognition is accurate and reliable.
  • FIG. 1 is a schematic diagram of the implementation process of the mesh image recognition method provided in Embodiment 1 of the present application;
  • FIG. 2 is a schematic diagram of the implementation process of the mesh image recognition method provided in Embodiment 2 of the present application;
  • FIG. 3 is a schematic diagram of the implementation process of the mesh image recognition method provided in Embodiment 3 of the present application;
  • FIG. 4 is a schematic structural diagram of the mesh image recognition apparatus provided in Embodiment 4 of the present application;
  • FIG. 5 is a schematic diagram of a terminal device provided in Embodiment 5 of the present application.
  • In the embodiments of the present application, the pre-trained mesh removal model performs mesh removal on the image to be processed to obtain a mesh-free image to be processed; the gray values of the image before and after removal are then compared to determine the difference part, which is drawn as the corresponding mesh pattern (that is, the difference part is assumed to be the mesh). In practice, the ID photo mesh has the following characteristics: 1. the size of an ID photo is relatively fixed, so the number of pixels occupied by the mesh is relatively stable; 2. the mesh generally covers most or all of the image, so the proportion of pixels occupied by the mesh is relatively stable and high; 3. the types of mesh patterns placed on ID photos are limited and known. Based on these practical characteristics, the embodiments of this application further verify the number of pixels contained in the drawn mesh pattern and its proportion of the total pixels in the image to be processed, and match the mesh pattern, thereby checking the mesh in multiple dimensions. When all checks are satisfied, the difference between the image to be processed before and after mesh removal is mesh content, so the image to be processed can be determined to be a mesh image, ensuring accurate and reliable mesh image recognition.
  • FIG. 1 shows the implementation flowchart of the mesh image recognition method provided in Embodiment 1 of the present application, detailed as follows:
  • S101: Input the image to be processed into a pre-trained mesh removal model to obtain a mesh-free image to be processed.
  • The mesh removal model is a model obtained by pre-training on mesh image samples and mesh-free image samples, and is used to remove the mesh pattern in an image.
  • The mesh removal model, pre-trained and constructed by technicians, removes the mesh in an image: after processing by the mesh removal model, a corresponding mesh-free image is obtained regardless of whether the original image to be processed contains a mesh.
  • The training and construction method of the mesh removal model is not limited here; it can be designed by technicians themselves, or training and construction can be carried out with reference to Embodiments 2 to 6 of this application.
  • S102: Calculate the gray-value difference between the image to be processed and the mesh-free image to be processed, and perform mesh reconstruction based on the calculated gray-value difference and a preset difference threshold to obtain the corresponding mesh pattern.
  • Since it is unknown in advance whether the image to be processed contains a mesh, the embodiment of the present application first assumes that it does and directly calculates the gray-value difference of the image before and after mesh removal: each corresponding pixel of the two images is converted to gray scale and the gray-value difference is computed, thereby determining the difference between the image to be processed before and after mesh removal. At the same time, even a non-mesh pixel may change in gray value after processing by the mesh removal model. Therefore, in this embodiment a difference threshold is set in advance to select mesh pixels, and only pixels whose gray-value difference is greater than the difference threshold are recognized as mesh pixels. The specific size of the difference threshold can be set by a technician according to actual application requirements.
  • The pattern composed of all the mesh pixels is then extracted as the corresponding mesh pattern, thereby reconstructing the mesh of the original image to be processed.
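  • The gray-value differencing and thresholding of S102 can be sketched as follows (a minimal NumPy sketch; the function name, the luminance weights, and the default threshold are illustrative assumptions rather than values from this application):

```python
import numpy as np

def reconstruct_mesh_mask(original, demeshed, diff_threshold=20.0):
    """Rebuild the mesh pattern as a boolean pixel mask."""
    def to_gray(img):
        img = np.asarray(img, dtype=np.float64)
        if img.ndim == 3:
            # ITU-R BT.601 luminance weights (an illustrative choice)
            img = img @ np.array([0.299, 0.587, 0.114])
        return img

    # Gray-value difference per corresponding pixel
    diff = np.abs(to_gray(original) - to_gray(demeshed))
    # Only pixels whose difference exceeds the preset threshold
    # are recognized as mesh pixels
    return diff > diff_threshold
```

The pattern composed of all pixels marked True in the mask corresponds to the reconstructed mesh.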
  • S103: Count the number of pixels included in the mesh pattern, and calculate the ratio of that number to the total number of pixels of the image to be processed; perform graphic matching on the mesh pattern against the preset mesh pattern library.
  • In practice, ID photo meshes have the following characteristics:
  • 1. The size of an ID photo is relatively fixed, so the number of pixels occupied by the mesh is relatively stable.
  • 2. The mesh generally covers most or all of the image, so the proportion of pixels occupied by the mesh is relatively stable and high.
  • Therefore, the number of mesh pixels contained in ID photos that actually contain a mesh, and their proportion of all pixels in the ID photo image, are counted in advance, and the corresponding number threshold and proportion threshold are set according to the statistical results.
  • Since the types of mesh patterns placed on ID photos are limited and known, a corresponding mesh pattern library is also constructed.
  • The number of pixels in the reconstructed mesh pattern is counted and its ratio to the total number of pixels of the image to be processed is calculated, and both are compared with the corresponding number threshold and ratio threshold; at the same time, the reconstructed mesh pattern is matched against the mesh pattern library. If the processing result is that the number of mesh pixels is large enough, the proportion is large enough, and a similar pattern exists in the mesh pattern library, it indicates that the difference content before and after mesh removal also satisfies the three characteristics of the ID photo mesh described above; that is, the assumption that the difference content is a mesh holds. The image to be processed is then directly determined to be a mesh image, completing mesh image recognition.
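  • The three-condition check described above can be sketched as follows (a minimal sketch; the function name and the thresholds passed in are illustrative assumptions):

```python
import numpy as np

def is_mesh_image(mesh_mask, pattern_matched,
                  count_threshold, ratio_threshold):
    """Three-way check: enough mesh pixels, a high enough proportion
    of the image, and a successful match against the pattern library."""
    count = int(np.count_nonzero(mesh_mask))
    ratio = count / mesh_mask.size  # mask and image share a size
    return (count > count_threshold
            and ratio > ratio_threshold
            and pattern_matched)
```

All three conditions must hold for the image to be determined a mesh image.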
  • If the processing result is that one of the three conditions is not met, this embodiment returns to S101 to reprocess the image to be processed, while counting the total number of times the image has been processed; the loop back to S101 continues while the count is within the preset maximum total number of times. When the count reaches the maximum total, the image to be processed is directly determined to be a mesh-free image. As another optional embodiment of the present application, if the three conditions are not all met, the image to be processed may also be directly determined to be a mesh-free image. The specific method can be selected and set by technicians according to actual needs and is not limited here.
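  • The loop back to S101 with a capped total number of processing passes can be sketched as follows (the three callables stand in for the mesh removal model, the S102 reconstruction, and the three-condition check; all names and the default cap are illustrative assumptions):

```python
def recognize_with_retries(image, remove_mesh, reconstruct_mesh,
                           three_condition_check, max_total=3):
    """Repeat S101-S103 until the checks pass or the preset maximum
    total number of processing passes is reached."""
    for _ in range(max_total):
        demeshed = remove_mesh(image)                 # S101
        mesh = reconstruct_mesh(image, demeshed)      # S102
        if three_condition_check(mesh):               # S103 + check
            return True   # determined to be a mesh image
    # Maximum total reached without all conditions met:
    # determined to be a mesh-free image
    return False
```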
  • In summary, in this embodiment the pre-trained mesh removal model performs mesh removal on the image to be processed to obtain a mesh-free image to be processed; the gray values of the image before and after removal are then compared to determine the difference part, which is drawn as the corresponding mesh pattern (that is, the difference part is assumed to be the mesh). Based on the three characteristics of ID photos, the number of pixels contained in the drawn mesh pattern and its proportion of the total pixels in the image to be processed are further verified, and the mesh pattern is matched, achieving a multi-dimensional, comprehensive check of the mesh. When all checks are satisfied, the difference between the images to be processed before and after mesh removal is mesh content, so the image to be processed can be determined to be a mesh image, which ensures the accuracy and reliability of mesh image recognition.
  • the second embodiment of the present application includes:
  • The image samples for model training all come in pairs, and the mesh image sample and the mesh-free image sample in each pair are identical except for the mesh.
  • Methods for obtaining them include, but are not limited to, obtaining the required number of mesh-free ID photos and then adding corresponding meshes to these photos; technicians may also use other methods, which are not limited here.
  • An initial overall model is constructed first, including the initial mesh removal generator G(x), the mesh addition generator F(x), the mesh-free discriminant network Dg(x), and the mesh discriminant network Df(x), for subsequent iterative training.
  • The initial model construction rules include, but are not limited to, the following: the model frame structure, the number of layers, the attributes of each layer, and so on are set by technicians, and the model parameters are randomly generated.
  • Because the parameters are randomly generated, the initial recognition rates of G(x) and F(x) are generally low; therefore, in the embodiments of the present application, subsequent iterative update training is used to improve the model recognition rate.
  • S203: Use G(x) and F(x) to process the mesh image sample a and the mesh-free image sample b respectively to obtain the corresponding processed images; calculate, based on a, b, the processed images, Dg(G(a)), and Df(F(b)), the first loss value corresponding to G(x) and F(x); and calculate, based on Dg(G(a)) and Df(F(b)), the second loss value and third loss value corresponding to Dg(x) and Df(x).
  • The processed images in the embodiment of the present application include, but are not limited to, one or more of a′, a″, b′, and b″.
  • The embodiment of this application further calculates the generator loss of the two functions. The selection and design of the specific loss function is not limited here; it can be chosen or designed by technicians according to their needs, or handled with reference to the second and third embodiments of this application.
  • The specific calculation method of the image difference degree is likewise not limited here, and includes, but is not limited to, calculating the Euclidean distances between a, a″, and b′ and between b, b″, and a′; it can also be designed by technicians according to their needs.
  • The images used in the specific processing differ with the final calculation method: for example, only a, b, a′, and b′ may be used for the calculation, or a, b, a′, b′, a″, and b″ may all be used at the same time, as determined by the specific final calculation method.
  • The embodiment of this application presets one or more loss value thresholds, together with a preset difference degree threshold, to judge whether the three loss values and the image difference degree meet expectations.
  • The loss thresholds measure the expected training effects of G(x), F(x), Dg(x), and Df(x); their number is set by technicians according to actual needs.
  • An independent loss threshold can be set for each loss value,
  • or the same loss threshold can be set uniformly for all of them.
  • The specific values of the loss thresholds and the difference degree threshold can also be set by technicians according to actual needs; the larger the loss threshold and the difference threshold, the lower the requirements on the expected training effect of the generators and discriminant networks.
  • The training of G(x) and the judgment of its training effect are closely related not only to the opposing function F(x), but also to the accuracy with which the networks Dg(x) and Df(x) determine whether a mesh is present. Only when G(x), F(x), Dg(x), and Df(x) have all completed training to the expected effect can the final G(x) be considered accurate and effective.
  • Therefore, while iteratively updating G(x) and F(x), the embodiments of the present application also iteratively update Dg(x) and Df(x). Although the update steps of Dg(x) and Df(x) appear independent (whether to update depends only on the second and third loss values, without reference to the first loss value or the image difference degree), the second and third loss values in fact also depend on the sample-processing effect of the G(x) and F(x) that are updated in real time. In practice, therefore, the updates of Dg(x) and Df(x) and of G(x) and F(x) are inseparable from each other and cannot simply be regarded as two independent iterative update steps.
  • In the embodiments of the present application, the paired image samples are processed by the opposing generators and discriminant networks, and the loss values and the image difference degree are calculated from the processing results to quantify the training effect of the generators and discriminant networks. On this basis, the generators and the discriminant networks are iteratively updated according to whether the loss values and the image difference degree meet the expected effect, until they all do, thereby realizing effective training of the mesh removal model.
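  • The update criteria described above can be sketched as a control-flow skeleton (the callables and thresholds are illustrative stand-ins for the real networks and preset values; a single uniform loss threshold is assumed for brevity):

```python
def train_loop(compute_losses, update_generators, update_discriminators,
               loss_threshold, diff_threshold, max_iters=1000):
    """Iterate until every loss value and the image difference degree
    fall below their thresholds (sketch of the criteria only)."""
    for _ in range(max_iters):
        first, second, third, diff = compute_losses()
        if second >= loss_threshold or third >= loss_threshold:
            update_discriminators()   # Dg(x) and Df(x)
        if first >= loss_threshold or diff >= diff_threshold:
            update_generators()       # G(x) and F(x)
        if (max(first, second, third) < loss_threshold
                and diff < diff_threshold):
            return True               # expected effect reached
    return False
```

Note how both update branches run inside the same loop, reflecting that the generator and discriminator updates are not independent steps.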
  • For example, the first loss value is calculated according to formula (1), using the image b′ obtained by processing b with F(x):
  • Lcyc = L1Loss(a″, a) × lambda_a + L1Loss(b″, b) × lambda_b + …
  • where Lg is the first loss value;
  • L1Loss(x, y) represents the Euclidean distance between the two images; and
  • lambda_a, lambda_b, lambda_c, and lambda_d represent preset weights.
  • Corresponding weights are preset for the difference degrees of the different dimensions to balance the difference values obtained under different matching-difficulty situations, ensuring the effectiveness of the first loss value calculation to the greatest extent.
  • The specific values of lambda_a, lambda_b, lambda_c, and lambda_d can be set by a technician after measuring the matching difficulty of each dimension.
  • Optionally, the values of lambda_a and lambda_b are greater than those of lambda_c and lambda_d.
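  • The weighted terms of formula (1) that survive in this text can be illustrated as follows (a sketch under the assumption that a″ and b″ are the cycle reconstructions of a and b; the example weight values are arbitrary, and the truncated lambda_c/lambda_d terms are omitted):

```python
import numpy as np

def l1_loss(x, y):
    # Summed per-pixel absolute difference, standing in for the
    # image distance L1Loss(x, y) of formula (1)
    return float(np.abs(np.asarray(x, float) - np.asarray(y, float)).sum())

def cycle_terms(a, a_cyc, b, b_cyc, lambda_a=10.0, lambda_b=10.0):
    # Weighted reconstruction terms of formula (1); a_cyc and b_cyc
    # play the role of a'' and b''. The remaining lambda_c/lambda_d
    # terms of the formula are not reproduced here.
    return l1_loss(a_cyc, a) * lambda_a + l1_loss(b_cyc, b) * lambda_b
```

Weighting lambda_a and lambda_b above lambda_c and lambda_d makes the reconstruction terms dominate the loss, matching the optional setting in the text.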
  • the third embodiment of the present application includes:
  • Among them, the first mesh image is a standard mesh image;
  • the second mesh image is the difference-part image before and after processing by G(x);
  • and the third mesh image is the difference-part image before and after processing by F(x). Therefore, gray-value difference operations are performed on a and a′ and on b and b′ respectively to extract the actual processing effects of G(x) and F(x), and the image distances between the second and third mesh images and the first mesh image are then respectively calculated.
  • Here the image distance is the reciprocal of the image similarity; the specific image-distance calculation method is not limited here and can be set by technicians themselves, or reference may be made to other embodiments of this application.
  • where n is the total number of pixels of the first mesh image or the second mesh image,
  • and x_i and y_i are the pixel values of the i-th pixel of the first mesh image and the second mesh image, respectively.
  • That is, the pixels of the two mesh images are compared one by one to obtain the pixel-value difference, and the reciprocal is then taken to obtain the required image distance.
  • the image distance between the first mesh image and the third mesh image can also be calculated according to formula (4), which will not be repeated here.
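  • Under one plausible reading of formula (4), which is not reproduced in this text, the pixel-by-pixel comparison and the reciprocal relation between distance and similarity can be sketched as follows (function names and the epsilon guard are illustrative assumptions):

```python
import numpy as np

def image_distance(x, y):
    # Pixel-by-pixel comparison over the n pixels: summed absolute
    # pixel-value difference (an assumed reading of formula (4))
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    return float(np.abs(x - y).sum())

def image_similarity(x, y, eps=1e-9):
    # Per the text, the image distance is the reciprocal of the
    # image similarity; eps guards the identical-image case
    return 1.0 / (image_distance(x, y) + eps)
```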
  • FIG. 4 shows a structural block diagram of the mesh image recognition apparatus provided in an embodiment of the present application.
  • The mesh image recognition apparatus illustrated in FIG. 4 may be the execution subject of the mesh image recognition method provided in the first embodiment.
  • The mesh image recognition apparatus includes:
  • The mesh removal module 41 is used to input the image to be processed into a pre-trained mesh removal model to obtain a mesh-free image to be processed; the mesh removal model is a model pre-trained on mesh image samples and mesh-free image samples
  • and is used to remove the mesh in an image.
  • The mesh reconstruction module 42 is used to calculate the gray-value difference between the image to be processed and the mesh-free image to be processed, and to perform mesh reconstruction based on the calculated gray-value difference and a preset difference threshold to obtain the corresponding mesh pattern.
  • The feature processing module 43 is configured to count the number of pixels included in the mesh pattern, calculate the ratio of that number to the total number of pixels of the image to be processed, and perform graphic matching on the mesh pattern against a preset mesh pattern library.
  • The mesh discrimination module 44 is configured to determine that the image to be processed is a mesh image if the number of mesh pixels is greater than a preset number threshold, the ratio is greater than a preset ratio threshold, and the mesh pattern is matched successfully.
  • The mesh image recognition apparatus further includes:
  • a sample acquisition module, used to obtain multiple pairs of mesh image samples and mesh-free image samples, where each pair includes one mesh image sample and one mesh-free image sample, and the two samples in each pair differ only in the mesh;
  • a generator building module, used to construct the mesh removal generator G(x) and the mesh addition generator F(x), where the probability, obtained through the discriminant network Dg(x), that an image processed by G(x) is a mesh-free image is denoted Dg(G(x)), and the probability, obtained through the discriminant network Df(x), that an image processed by F(x) is a mesh image is denoted Df(F(x));
  • a loss value calculation module, used to process the mesh image sample a and the mesh-free image sample b with G(x) and F(x) respectively to obtain the corresponding processed images, calculate the first loss value corresponding to G(x) and F(x) based on a, b, the processed images, Dg(G(a)), and Df(F(b)), and calculate the second and third loss values corresponding to Dg(x) and Df(x) based on Dg(G(a)) and Df(F(b));
  • a difference calculation module, used to calculate the image difference degree between a, b, and the processed images;
  • an iterative update module, configured to iteratively update Dg(x) and Df(x) if the second loss value and/or the third loss value is not less than the corresponding preset loss value threshold, and to iteratively update G(x) and F(x) if the first loss value is greater than the corresponding preset loss value threshold and/or the image difference degree is greater than the preset difference degree threshold; and
  • a model output module, configured to complete the model training of the mesh removal generator G(x) and obtain the mesh removal model if the first loss value, the second loss value, and the third loss value are all less than the corresponding preset loss value thresholds and the image difference degree is less than the preset difference degree threshold.
  • the loss value calculation module includes:
  • Lcyc = L1Loss(a″, a) × lambda_a + L1Loss(b″, b) × lambda_b + …
  • where Lg is the first loss value;
  • L1Loss(x, y) represents the Euclidean distance between the two images; and
  • lambda_a, lambda_b, lambda_c, and lambda_d represent preset weights.
  • the loss value calculation module also includes:
  • the difference calculation module includes:
  • a mesh image extraction module, used to perform gray-value difference calculations on a and b, a and a′, and b and b′ respectively, and to perform mesh extraction based on the obtained gray-value differences and the preset difference threshold to obtain the corresponding first, second, and third mesh images, where a′ is the image obtained by processing the mesh image sample a with G(x), and b′ is the image obtained by processing the mesh-free image sample b with F(x); and
  • an image difference calculation module, used to calculate the image distance between the first mesh image and the second mesh image and the image distance between the first mesh image and the third mesh image, and to calculate the difference between the two image distances to obtain the image difference degree.
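  • The image difference degree as the difference between the two image distances can be sketched as follows (taking the absolute value is an assumption, since the text says only "difference"; the distance callable is a stand-in for whichever image-distance calculation is chosen):

```python
def image_difference_degree(first, second, third, distance):
    # Difference between the distance from the standard (first) mesh
    # image to the G(x) difference image and the distance to the
    # F(x) difference image
    return abs(distance(first, second) - distance(first, third))
```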
  • the image difference calculation module includes:
  • where n is the total number of pixels of the first mesh image or the second mesh image,
  • and x_i and y_i are the pixel values of the i-th pixel of the first mesh image and the second mesh image, respectively.
  • Although the terms "first", "second", etc. are used in some embodiments of the present application to describe various elements, these elements should not be limited by these terms; the terms are only used to distinguish one element from another.
  • For example, without departing from the scope of the various described embodiments, the first table may be named the second table, and similarly, the second table may be named the first table.
  • The first table and the second table are both tables, but they are not the same table.
  • Fig. 5 is a schematic diagram of a terminal device provided by an embodiment of the present application.
  • the terminal device 5 of this embodiment includes a processor 50 and a memory 51, and the memory 51 stores computer-readable instructions 52 that can run on the processor 50.
  • When the processor 50 executes the computer-readable instructions 52, the steps in the foregoing embodiments of the mesh image recognition method are implemented, for example steps 101 to 104 shown in FIG. 1.
  • Alternatively, when the processor 50 executes the computer-readable instructions 52,
  • the functions of the modules/units in the foregoing apparatus embodiments, such as the functions of modules 41 to 44 shown in FIG. 4, are realized.
  • the terminal device 5 may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • the terminal device may include, but is not limited to, a processor 50 and a memory 51.
  • Those skilled in the art can understand that FIG. 5 is only an example of the terminal device 5 and does not constitute a limitation on the terminal device 5; it may include more or fewer components than shown, a combination of certain components, or different components.
  • For example, the terminal device may also include input and output devices, a network access device, a bus, and the like.
  • The so-called processor 50 may be a central processing unit (Central Processing Unit, CPU), another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the memory 51 may be an internal storage unit of the terminal device 5, such as a hard disk or a memory of the terminal device 5.
  • the memory 51 may also be an external storage device of the terminal device 5, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the terminal device 5. Further, the memory 51 may also include both an internal storage unit and an external storage device of the terminal device 5.
  • the memory 51 is used to store the computer-readable instructions and other programs and data required by the terminal device.
  • the memory 51 can also be used to temporarily store data that has been sent or is to be sent.
  • each unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.


Abstract

A reticulate pattern-containing image recognition method and apparatus, a terminal device, and a medium, applicable to the technical field of data processing. The method comprises: inputting an image to be processed into a pre-trained reticulate pattern removal model to obtain a reticulate-pattern-free image to be processed (S101); calculating grayscale differences between the image to be processed and the reticulate-pattern-free image to be processed, and reconstructing the reticulate pattern on the basis of the calculated grayscale differences and a preset difference threshold to obtain a corresponding reticulate pattern graph (S102); counting the number of pixels comprised in the reticulate pattern graph, calculating the proportion of that number to the total number of pixels in the image to be processed, and performing graph matching on the reticulate pattern graph on the basis of a preset reticulate pattern graph library (S103); and if the number of reticulate pattern pixels is greater than a preset number threshold, the proportion is greater than a preset proportion threshold, and the matching of the reticulate pattern graph succeeds, determining that the image to be processed is a reticulate pattern-containing image (S104). The method ensures the accuracy and reliability of reticulate pattern-containing image recognition.

Description

Mesh image recognition method, apparatus, terminal device, and medium
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on August 9, 2019, with application number 201910736543.2 and the invention title "Mesh image recognition method, apparatus, and terminal device", the entire contents of which are incorporated herein by reference.
Technical Field
This application belongs to the field of data processing technology, and in particular relates to a mesh image recognition method, apparatus, terminal device, and medium.
Background
Some ID photos in the public security system carry a mesh pattern. When face recognition and similar processing is performed, the mesh greatly affects recognition accuracy, so meshed ID photos must have the mesh removed before they can be used; before removal, however, it must first be determined whether an ID photo carries a mesh. Although some mesh image recognition methods exist in the prior art, their recognition accuracy is not ideal, so a method that can accurately identify whether an image carries a mesh is needed.
Technical Problem
In view of this, the embodiments of this application provide a mesh image recognition method, apparatus, terminal device, and medium, which can solve the problem of low accuracy in recognizing mesh images.
Technical Solution
A first aspect of the embodiments of this application provides a mesh image recognition method, including:
inputting an image to be processed into a pre-trained mesh removal model to obtain a mesh-free image to be processed, where the mesh removal model is a model trained in advance on meshed image samples and mesh-free image samples and is used to remove the mesh from an image;
calculating grayscale differences between the image to be processed and the mesh-free image to be processed, and reconstructing the mesh based on the calculated grayscale differences and a preset difference threshold to obtain a corresponding mesh pattern;
counting the number of pixels contained in the mesh pattern, calculating the ratio of that number to the total number of pixels in the image to be processed, and matching the mesh pattern against a preset mesh-pattern library; and
if the number of mesh pixels is greater than a preset count threshold, the ratio is greater than a preset ratio threshold, and the mesh pattern is matched successfully, determining that the image to be processed is a mesh image.
A second aspect of the embodiments of this application provides a mesh image recognition apparatus, including:
a mesh removal module, configured to input an image to be processed into a pre-trained mesh removal model to obtain a mesh-free image to be processed, where the mesh removal model is a model trained in advance on meshed image samples and mesh-free image samples and is used to remove the mesh from an image;
a mesh reconstruction module, configured to calculate grayscale differences between the image to be processed and the mesh-free image to be processed, and to reconstruct the mesh based on the calculated grayscale differences and a preset difference threshold to obtain a corresponding mesh pattern;
a feature processing module, configured to count the number of pixels contained in the mesh pattern, calculate the ratio of that number to the total number of pixels in the image to be processed, and match the mesh pattern against a preset mesh-pattern library; and
a mesh discrimination module, configured to determine that the image to be processed is a mesh image if the number of mesh pixels is greater than a preset count threshold, the ratio is greater than a preset ratio threshold, and the mesh pattern is matched successfully.
A third aspect of the embodiments of this application provides a terminal device, including a memory and a processor, where the memory stores computer-readable instructions executable on the processor, and the processor implements the following steps when executing the computer-readable instructions:
inputting an image to be processed into a pre-trained mesh removal model to obtain a mesh-free image to be processed, where the mesh removal model is a model trained in advance on meshed image samples and mesh-free image samples and is used to remove the mesh from an image;
calculating grayscale differences between the image to be processed and the mesh-free image to be processed, and reconstructing the mesh based on the calculated grayscale differences and a preset difference threshold to obtain a corresponding mesh pattern;
counting the number of pixels contained in the mesh pattern, calculating the ratio of that number to the total number of pixels in the image to be processed, and matching the mesh pattern against a preset mesh-pattern library; and
if the number of mesh pixels is greater than a preset count threshold, the ratio is greater than a preset ratio threshold, and the mesh pattern is matched successfully, determining that the image to be processed is a mesh image.
A fourth aspect of the embodiments of this application provides a computer-readable storage medium storing computer-readable instructions, where the computer-readable instructions, when executed by at least one processor, implement the following steps:
inputting an image to be processed into a pre-trained mesh removal model to obtain a mesh-free image to be processed, where the mesh removal model is a model trained in advance on meshed image samples and mesh-free image samples and is used to remove the mesh from an image;
calculating grayscale differences between the image to be processed and the mesh-free image to be processed, and reconstructing the mesh based on the calculated grayscale differences and a preset difference threshold to obtain a corresponding mesh pattern;
counting the number of pixels contained in the mesh pattern, calculating the ratio of that number to the total number of pixels in the image to be processed, and matching the mesh pattern against a preset mesh-pattern library; and
if the number of mesh pixels is greater than a preset count threshold, the ratio is greater than a preset ratio threshold, and the mesh pattern is matched successfully, determining that the image to be processed is a mesh image.
Beneficial Effects
The image to be processed is de-meshed by a pre-trained mesh removal model to obtain a mesh-free version, and the grayscale differences between the image before and after mesh removal are then compared to determine the parts that differ, which are drawn as a corresponding mesh pattern (that is, the differing parts are assumed to be mesh). In practice, ID-photo meshes have the following characteristics: 1. The size of an ID photo is relatively fixed, so the number of pixels occupied by the mesh is relatively stable. 2. In an ID photo, the mesh generally covers most or all of the image, so the proportion of pixels occupied by the mesh is relatively stable and high. 3. The kinds of mesh patterns placed on ID photos are limited and known. Based on these practical characteristics, the embodiments of this application further verify the number of pixels contained in the drawn mesh pattern and its proportion of the total pixels of the image to be processed, and match the mesh pattern, achieving a multi-dimensional, all-round check of the mesh. When all checks are satisfied, the differing parts of the image before and after mesh removal are mesh content, so the image to be processed can be determined to be a mesh image, ensuring accurate and reliable mesh image recognition.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application; a person of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of the mesh image recognition method provided by Embodiment 1 of this application;
Fig. 2 is a schematic flowchart of the mesh image recognition method provided by Embodiment 2 of this application;
Fig. 3 is a schematic flowchart of the mesh image recognition method provided by Embodiment 3 of this application;
Fig. 4 is a schematic structural diagram of the mesh image recognition apparatus provided by Embodiment 4 of this application;
Fig. 5 is a schematic diagram of the terminal device provided by Embodiment 5 of this application.
Embodiments of the Invention
In the following description, for the purpose of illustration rather than limitation, specific details such as particular system structures and technologies are set forth for a thorough understanding of the embodiments of this application. However, it should be clear to those skilled in the art that this application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, apparatuses, circuits, and methods are omitted so that unnecessary detail does not obscure the description of this application.
To illustrate the technical solutions described in this application, specific embodiments are described below.
Since a meshed image greatly affects the accuracy of face recognition and other processing, a method that can identify whether an image carries a mesh is needed, to provide a basis for mesh removal, ID-photo face recognition, and the like.
To recognize the mesh in an image, the embodiments of this application de-mesh the image to be processed with a pre-trained mesh removal model to obtain a mesh-free version, then compare grayscale differences between the image before and after mesh removal to determine the parts that differ, which are drawn as a corresponding mesh pattern (that is, the differing parts are assumed to be mesh). In practice, ID-photo meshes have the following characteristics: 1. The size of an ID photo is relatively fixed, so the number of pixels occupied by the mesh is relatively stable. 2. In an ID photo, the mesh generally covers most or all of the image, so the proportion of pixels occupied by the mesh is relatively stable and high. 3. The kinds of mesh patterns placed on ID photos are limited and known. Based on these practical characteristics, the embodiments of this application further verify the number of pixels contained in the drawn mesh pattern and its proportion of the total pixels of the image to be processed, and match the mesh pattern, achieving a multi-dimensional, all-round check of the mesh. When all checks are satisfied, the differing parts of the image before and after mesh removal are mesh content, so the image to be processed can be determined to be a mesh image, ensuring accurate and reliable mesh image recognition.
Fig. 1 shows a flowchart of the implementation of the mesh image recognition method provided by Embodiment 1 of this application, detailed as follows:
S101: Input the image to be processed into a pre-trained mesh removal model to obtain a mesh-free image to be processed. The mesh removal model is a model trained in advance on meshed image samples and mesh-free image samples and is used to remove the mesh from an image.
In this embodiment of the application, the mesh removal model is used to remove the mesh from an image and is trained and constructed in advance by technicians. After processing by the mesh removal model, a corresponding mesh-free image is obtained regardless of whether the original image to be processed contains a mesh. The training and construction method of the mesh removal model is not limited here; it can be designed by technicians, or training and construction can follow Embodiments 2 to 6 of this application.
S102: Calculate grayscale differences between the image to be processed and the mesh-free image to be processed, and reconstruct the mesh based on the calculated grayscale differences and a preset difference threshold to obtain a corresponding mesh pattern.
To identify whether the image to be processed contains a mesh, this embodiment of the application first assumes that it does, and directly calculates grayscale differences between the images before and after mesh removal: each corresponding pixel of the two images is converted to grayscale and the grayscale values are subtracted, thereby determining the parts of the image that differ before and after mesh removal. Considering that even non-mesh pixels may show grayscale changes after processing by the mesh removal model, this embodiment presets a difference threshold used to pick out mesh pixels, and only pixels whose grayscale difference is greater than or equal to the threshold are identified as mesh pixels. The specific size of the difference threshold can be set by technicians according to actual application requirements.
After the mesh pixels are picked out, the figure formed by all mesh pixels is extracted as the corresponding mesh pattern, thereby reconstructing the mesh of the original image to be processed.
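As an illustrative sketch of step S102 (not the patent's own code), assuming NumPy and that the image and its de-meshed counterpart are same-shape RGB arrays, the grayscale differencing and thresholding could look like the following; the function name and the default threshold value are hypothetical:

```python
import numpy as np

def reconstruct_mesh(original, demeshed, diff_threshold=30):
    """Rebuild the candidate mesh pattern from an image and its
    de-meshed counterpart, per step S102.

    original, demeshed: HxWx3 uint8 RGB arrays of identical shape.
    diff_threshold: preset grayscale-difference threshold (assumed value).
    Returns a boolean HxW mask marking candidate mesh pixels.
    """
    # Convert both images to grayscale (ITU-R BT.601 luma weights).
    weights = np.array([0.299, 0.587, 0.114])
    gray_orig = original.astype(np.float64) @ weights
    gray_demeshed = demeshed.astype(np.float64) @ weights
    # Pixels whose grayscale value changed by at least the threshold are
    # treated as mesh pixels; smaller changes are attributed to ordinary
    # reconstruction noise of the removal model.
    diff = np.abs(gray_orig - gray_demeshed)
    return diff >= diff_threshold
```

The boolean mask plays the role of the "mesh pattern": counting its `True` entries gives the mesh pixel count used in the later checks.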
S103: Count the number of pixels contained in the mesh pattern, and calculate the ratio of that number to the total number of pixels in the image to be processed. Match the mesh pattern against a preset mesh-pattern library.
S104: If the number of mesh pixels is greater than the preset count threshold, the ratio is greater than the preset ratio threshold, and the mesh pattern is matched successfully, determine that the image to be processed is a mesh image.
In practice, ID-photo meshes have the following characteristics:
1. The size of an ID photo is relatively fixed, so the number of pixels occupied by the mesh is relatively stable.
2. In an ID photo, the mesh generally covers most or all of the image, so the proportion of pixels occupied by the mesh is relatively stable and high.
3. The kinds of mesh patterns placed on ID photos are limited and known.
Combining these three characteristics of ID-photo meshes, this embodiment of the application counts in advance the number of mesh pixels contained in ID photos that actually contain meshes, as well as their proportion of the total ID-photo pixels, sets the corresponding count threshold and ratio threshold according to the statistics, and builds a corresponding mesh-pattern library in advance from the known kinds of ID-photo mesh patterns. In actual processing, the number of mesh pixels in the drawn mesh pattern is counted, its proportion of the total pixels of the image to be processed is calculated, and both are compared against the corresponding count and ratio thresholds; at the same time, the drawn mesh pattern is matched against the mesh-pattern library. If the result is that the number of mesh pixels is large enough, the proportion is large enough, and a similar pattern exists in the library, then the differing content before and after mesh removal satisfies all three characteristics of ID-photo meshes listed above, i.e., the assumption that the differing content is a mesh holds. In this case the image to be processed is directly determined to be a mesh image, completing the recognition of the mesh image.
If any of the three conditions is not satisfied, the detection result cannot determine whether the image to be processed contains a mesh. The reason is not necessarily that the image itself contains no mesh; there may also be data errors during processing, or the image to be processed may be of poor quality. Therefore, as an optional embodiment of this application, when any of the three conditions is not satisfied, the flow returns to S101 to reprocess the image while counting the total number of processing passes; if, within a preset maximum total number of passes, the result still fails one of the three conditions, the flow keeps looping back to S101, and once the number of passes reaches the maximum, the image to be processed is directly determined to be a mesh-free image. As another optional embodiment of this application, when any of the three conditions is not satisfied, the image to be processed may instead be directly determined to be a mesh-free image. Which approach to adopt can be chosen and set by technicians according to actual needs, and is not limited here.
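The three-way decision of steps S103 and S104 can be sketched as a plain function (with the optional retry loop omitted); all names and threshold values here are hypothetical, and the pattern-library matching, which the patent handles with its own matching procedure, is abstracted to a boolean input:

```python
def is_mesh_image(mesh_pixel_count, total_pixel_count,
                  count_threshold, ratio_threshold, pattern_matched):
    """Decide whether the image is a mesh image, per steps S103-S104.

    mesh_pixel_count: number of pixels in the reconstructed mesh pattern.
    total_pixel_count: total number of pixels in the image to be processed.
    pattern_matched: whether the reconstructed pattern matched a similar
      entry in the preset mesh-pattern library (matching is out of scope
      for this sketch).
    Returns True only when all three conditions hold simultaneously.
    """
    ratio = mesh_pixel_count / total_pixel_count
    return (mesh_pixel_count > count_threshold
            and ratio > ratio_threshold
            and pattern_matched)
```

Requiring all three checks to pass is what gives the method its robustness: each check alone could be triggered by reconstruction noise, but the conjunction mirrors the three practical characteristics of ID-photo meshes.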
In this embodiment of the application, the image to be processed is de-meshed by a pre-trained mesh removal model to obtain a mesh-free version, and the grayscale differences between the image before and after mesh removal are compared to determine the parts that differ, which are drawn as a corresponding mesh pattern (that is, the differing parts are assumed to be mesh). Then, based on the three practical characteristics of ID photos, the number of pixels contained in the drawn mesh pattern and its proportion of the total pixels of the image to be processed are further verified, and the mesh pattern is matched, achieving a multi-dimensional, all-round check of the mesh. When all checks are satisfied, the differing parts of the image before and after mesh removal are mesh content, so the image to be processed can be determined to be a mesh image, ensuring accurate and reliable mesh image recognition.
As a specific implementation of the training and construction of the mesh removal model in Embodiment 1 of this application, as shown in Fig. 2, Embodiment 2 of this application includes:
S201: Obtain multiple pairs of meshed image samples and mesh-free image samples, where each pair of image samples contains one meshed image sample and one mesh-free image sample, and the two samples in each pair differ only in the mesh.
In this embodiment of the application, the image samples used for model training exist in pairs, and the meshed and mesh-free samples within each pair are identical except for the mesh. To obtain multiple pairs of image samples that differ only in the mesh, the methods used include but are not limited to first obtaining the required number of mesh-free ID photos and then adding corresponding meshes to them; technicians may also use other methods, which are not limited here.
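One possible realization of the "add meshes to clean ID photos" route mentioned above is to synthesize the overlay directly. The NumPy sketch below draws a simple diagonal grid; the period, line thickness, and darkening amount are invented parameters, and a real pipeline would presumably draw patterns from the known mesh-pattern library instead:

```python
import numpy as np

def add_mesh(clean, period=8, thickness=1, darken=60):
    """Synthesize a meshed sample from a mesh-free photo (step S201).

    Overlays a diagonal grid by darkening pixels lying on periodic
    lines. period/thickness/darken are hypothetical parameters.
    Returns the meshed copy; the (clean, meshed) pair then differs
    only on the mesh pixels, as the training data requires.
    """
    meshed = clean.astype(np.int16).copy()       # widen to avoid underflow
    h, w = clean.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Pixels on either family of diagonal grid lines.
    on_line = ((xs + ys) % period < thickness) | ((xs - ys) % period < thickness)
    meshed[on_line] -= darken
    return np.clip(meshed, 0, 255).astype(np.uint8)
```

Because only the grid pixels change, subtracting the pair recovers exactly the overlay, which is what makes such pairs suitable for supervising a removal model.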
S202: Construct a mesh removal generator G(x) and a mesh addition generator F(x). Let the probability, given by a discriminant network Dg(x), that an image produced by G(x) is mesh-free be Dg(G(x)), and let the probability, given by a discriminant network Df(x), that an image produced by F(x) is a mesh image be Df(F(x)).
In this embodiment of the application, an initial overall model is first constructed, comprising the initial mesh removal generator G(x), mesh addition generator F(x), mesh-free discriminant network Dg(x), and mesh discriminant network Df(x), for subsequent iterative training. The initial model construction rules include but are not limited to the following: technicians set up the model framework, including how many layers it contains and the attributes of each layer, and the model parameters are randomly generated. The recognition rates of the initial G(x) and F(x) are generally low, so this embodiment subsequently improves them through iterative update training.
S203: Use G(x) and F(x) to process the meshed image sample a and the mesh-free image sample b respectively to obtain corresponding processed images; calculate the first loss value corresponding to G(x) and F(x) based on a, b, the processed images, Dg(G(a)), and Df(F(b)); and calculate the second and third loss values corresponding to Dg(x) and Df(x) based on Dg(G(a)) and Df(F(b)).
To evaluate the effectiveness of the mesh removal generator G(x), the mesh addition generator F(x), the mesh-free discrimination network Dg(x), and the mesh discrimination network Df(x), an embodiment of the present application processes the meshed image sample a and the mesh-free image sample b with G(x) and F(x) respectively to obtain the corresponding processed images. Let a' be the image obtained by processing a with G(x), a″ the image obtained by processing a' with F(x), b' the image obtained by processing b with F(x), and b″ the image obtained by processing b' with G(x); the processed images in this embodiment include, but are not limited to, one or more of a', a″, b', and b″. G(x) and F(x) are first used to process a and b respectively; since a and b differ only in the mesh, in theory the resulting processed images should also differ only in the mesh, and by the same reasoning, in theory a = a″ = b' and b = b″ = a'. Based on these theoretical equalities, this embodiment further calculates the first loss value from the loss function of the two functionally opposed generators, as well as the second and third loss values corresponding to Dg(x) and Df(x). The selection or design of the specific loss functions is not limited here; they may be selected or designed by technicians as required, or handled with reference to the second and third embodiments of the present application.
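The processing chain a → a' → a″ and b → b' → b″ can be illustrated with toy stand-ins for the two generators; the constant-offset `G`/`F` below are hypothetical placeholders that only demonstrate the data flow and the theoretical equalities a = a″ = b' and b = b″ = a', not an actual trained model:

```python
# Toy stand-ins for the generators: F darkens pixels by a constant
# "mesh" offset, and G restores them. Real G(x)/F(x) are trained
# networks; these placeholders only make the cycle explicit.
MESH = 40

def G(img):  # mesh removal
    return [[min(255, v + MESH) for v in row] for row in img]

def F(img):  # mesh addition
    return [[max(0, v - MESH) for v in row] for row in img]

b = [[180] * 4 for _ in range(4)]  # mesh-free sample
a = F(b)                           # its meshed counterpart

a1 = G(a)   # a'  : mesh removed from a
a2 = F(a1)  # a'' : mesh re-added, should reproduce a
b1 = F(b)   # b'  : mesh added, should reproduce a
b2 = G(b1)  # b'' : mesh removed again, should reproduce b
```

With these idealized generators the theoretical equalities hold exactly; with real, imperfectly trained networks they hold only approximately, which is what the loss values measure.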
S204: Calculate the degree of image difference between a, b, and the processed images.
Although in theory a = a″ = b' and b = b″ = a', in practice the processing results of G(x) and F(x) before training is complete will inevitably be imperfect, so some difference must exist between the actual a, a″, and b', and between b, b″, and a'. The magnitude of this difference directly reflects the training effect of G(x) and F(x), so an embodiment of the present application calculates the degree of image difference between a, b, and the processed images as one dimension along which to quantify that training effect. The specific calculation of the image difference degree is not limited here; it includes, but is not limited to, calculating the Euclidean distances between a, a″, and b', and between b, b″, and a', or it may be designed by technicians according to their needs. It should also be noted that the processed images actually used will vary with the chosen calculation: for example, the calculation may use only a, b, a', and b', or it may use a, b, a', b', a″, and b″ simultaneously, as determined by the final calculation method.
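As one concrete choice for the difference measure mentioned above, a pixel-wise Euclidean distance between two grayscale images can be computed as follows (a minimal sketch; a real implementation would operate on full-size images):

```python
import math

def euclidean_distance(img_x, img_y):
    """Pixel-wise Euclidean distance between two equal-sized grayscale
    images, one possible quantification of the image difference degree."""
    return math.sqrt(sum((px - py) ** 2
                         for row_x, row_y in zip(img_x, img_y)
                         for px, py in zip(row_x, row_y)))

d = euclidean_distance([[0, 3], [4, 0]], [[3, 3], [0, 0]])  # sqrt(9 + 16)
```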
To achieve iterative training of G(x), F(x), Dg(x), and Df(x) and reach the expected training effect, an embodiment of the present application presets one or more loss value thresholds and a difference degree threshold, used to judge whether the three loss values and the image difference degree are acceptable. The loss value thresholds measure the expected training effect of G(x), F(x), Dg(x), and Df(x), and their number is set by technicians according to actual needs: when different training effects are expected for the generators and the discrimination networks, an independent threshold may be set for each loss value; alternatively, a single common loss value threshold may be set. The specific values of the loss value thresholds and the difference degree threshold may likewise be set by technicians according to actual needs; the larger these thresholds, the lower the required training effect for the generators and the discrimination networks.
S205: If the second loss value and/or the third loss value is greater than or equal to the corresponding preset loss value threshold, iteratively update Dg(x) and Df(x).
S206: If the first loss value is greater than or equal to the corresponding preset loss value threshold and/or the image difference degree is greater than or equal to the preset difference degree threshold, iteratively update G(x) and F(x).
When the second or third loss value fails to meet its threshold requirement, the discrimination effect of Dg(x) and Df(x) has not yet reached the expected level, so the process returns to iteratively update Dg(x) and Df(x). Likewise, when the first loss value is too large to meet its threshold requirement, G(x) and F(x) have not reached the expected effect, and the process returns to iteratively update G(x) and F(x).
S207: If the first, second, and third loss values are all less than their corresponding preset loss value thresholds, and the image difference degree is less than the preset difference degree threshold, the model training of the mesh removal generator G(x) is complete, yielding the mesh removal model.
Because G(x), F(x), Dg(x), and Df(x) are mutually adversarial yet interdependent, in this embodiment it cannot be directly concluded that Dg(x) and Df(x) are fully trained merely because the second and third loss values meet the requirements, nor that G(x) and F(x) are fully trained merely because the first loss value does. Training of G(x), F(x), Dg(x), and Df(x) is therefore judged complete only when the first, second, and third loss values and the image difference degree all meet the requirements simultaneously, at which point the final usable mesh removal generator G(x), that is, the mesh removal model of the first embodiment of the present application, is obtained.
It should be noted that although the ultimate goal is to train and construct G(x), which performs mesh removal, so as to obtain the mesh removal model of the first embodiment of the present application, in this embodiment the training of G(x) and the judgment of its training effect are closely tied not only to the functionally opposed F(x) but also to the accuracy of the mesh presence discrimination networks Dg(x) and Df(x). Only when G(x), F(x), Dg(x), and Df(x) have all completed training and reached the expected effect can the final G(x) be considered accurate and effective. This embodiment therefore iteratively updates Dg(x) and Df(x) alongside G(x) and F(x). Although the update steps for Dg(x) and Df(x) appear independent (the decision to update depends only on the second and third loss values, without reference to the first loss value or the image difference degree), in practice the second and third loss values also depend on how the continuously updated G(x) and F(x) process the image samples. The updates of Dg(x) and Df(x) are thus intrinsically inseparable from those of G(x) and F(x) and cannot simply be regarded as two independent iterative update procedures.
In an embodiment of the present application, two opposed generators and two opposed discrimination networks are constructed; the paired image samples are processed with them, and the loss values and image difference degree are calculated from the processing results, thereby quantifying the training effect of the generators and discrimination networks. Finally, the generators and discrimination networks are each iteratively updated according to whether the loss values and image difference degree meet the expected effect, until all of them do, thereby achieving effective training of the mesh removal model.
A specific implementation of calculating the first loss value in the second embodiment of the present application includes:
With a' being the image obtained by processing the meshed image sample a with G(x), a″ the image obtained by processing a' with F(x), b' the image obtained by processing the mesh-free image sample b with F(x), and b″ the image obtained by processing b' with G(x), the first loss value is calculated based on formula (1):
Lg = -(log10(Dg(G(a))) - log10(Df(F(b)))) + Lcyc,
Lcyc = L1 Loss(a″, a) × lambda_a + L1 Loss(b″, b) × lambda_b + L1 Loss(a, b') × lambda_c + L1 Loss(b, a') × lambda_d   (1)
where Lg is the first loss value, L1 Loss(x, y) denotes the Euclidean distance between two images, and lambda_a, lambda_b, lambda_c, and lambda_d denote preset weights.
Since in theory a = a″ = b' and b = b″ = a', an embodiment of the present application compares a″ with a, b″ with b, a with b', and b with a', calculating the corresponding Euclidean distances to obtain quantified difference values along four dimensions. Moreover, in theory, the more times an image sample has been processed by the generators, the greater the probability of deviation and the harder it is to match the original sample. This embodiment therefore presets a corresponding weight for each dimension's difference value, balancing the values obtained under different matching difficulties and preserving the validity of the first loss value calculation to the greatest extent. The specific values of lambda_a, lambda_b, lambda_c, and lambda_d may be set by technicians after assessing the matching difficulty of each dimension; preferably, lambda_a and lambda_b are both greater than lambda_c and lambda_d.
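Formula (1) can be transcribed directly into code. The `l1_loss` helper below follows the patent's own definition of L1 Loss(x, y) as a Euclidean distance, and the default lambda values are illustrative only (chosen so that lambda_a and lambda_b exceed lambda_c and lambda_d, as the text prefers):

```python
import math

def l1_loss(img_x, img_y):
    # The patent defines L1 Loss(x, y) as the Euclidean distance
    # between the two images, despite the name.
    return math.sqrt(sum((px - py) ** 2
                         for rx, ry in zip(img_x, img_y)
                         for px, py in zip(rx, ry)))

def first_loss(dg_ga, df_fb, a, a2, b, b2, a1, b1,
               lambda_a=1.0, lambda_b=1.0, lambda_c=0.5, lambda_d=0.5):
    """Lg per formula (1): adversarial term plus weighted cycle term.

    dg_ga = Dg(G(a)), df_fb = Df(F(b)); a1, a2, b1, b2 stand for
    a', a'', b', b''.  The lambda defaults are illustrative.
    """
    lcyc = (l1_loss(a2, a) * lambda_a + l1_loss(b2, b) * lambda_b
            + l1_loss(a, b1) * lambda_c + l1_loss(b, a1) * lambda_d)
    return -(math.log10(dg_ga) - math.log10(df_fb)) + lcyc
```

With perfect cycle reconstruction (all four image pairs identical) and both discriminator probabilities at 1, Lg evaluates to 0.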
A specific implementation of calculating the second and third loss values in the second embodiment of the present application includes:
calculating the second loss value Ldg and the third loss value Ldf based on formulas (2) and (3):
Ldg = -log10(Dg(G(a)) - 0.5) + log10(1.5 - Dg(G(a)))   (2)
Ldf = -log10(Df(F(b)) - 0.5) + log10(1.5 - Df(F(b)))   (3)
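Formulas (2) and (3) share one functional form and differ only in the probability fed in, so a single helper covers both; note that the expression is defined only for probabilities strictly between 0.5 and 1.5 and evaluates to zero at p = 1:

```python
import math

def discriminator_loss(p):
    """Ldg / Ldf per formulas (2) and (3).

    `p` is Dg(G(a)) for formula (2) or Df(F(b)) for formula (3).
    Valid for 0.5 < p < 1.5; the loss is 0 when p == 1.
    """
    return -math.log10(p - 0.5) + math.log10(1.5 - p)

ldg = discriminator_loss(0.9)  # discriminator fairly confident
```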
As a specific implementation of calculating the image difference degree in the second embodiment of the present application, as shown in FIG. 3, the third embodiment of the present application includes:
S301: Perform grayscale difference operations on a and b, on a and a', and on b and b' respectively, and perform mesh extraction based on the obtained grayscale differences and a preset difference threshold to obtain a corresponding first mesh image, second mesh image, and third mesh image, where a' is the image obtained by processing the meshed image sample a with G(x), and b' is the image obtained by processing the mesh-free image sample b with F(x).
S302: Calculate the image distance between the first mesh image and the second mesh image, and the image distance between the first mesh image and the third mesh image, and calculate the difference between the two image distances to obtain the image difference degree.
Since in theory a = a″ = b' and b = b″ = a', the first mesh image is the standard mesh image, the second mesh image is the difference between the images before and after processing by G(x), and the third mesh image is the difference between the images before and after processing by F(x). Performing grayscale difference operations on a and a' and on b and b' therefore extracts the actual processing effect of G(x) and F(x); calculating the image distances of the second and third mesh images from the first mesh image then quantitatively evaluates that effect; and finally taking the difference of the two image distances yields the image difference degree required by the second embodiment of the present application. Here, the image distance is the reciprocal of the image similarity; the specific image distance calculation method is not limited here and may be set by technicians themselves, or taken from other embodiments of the present application.
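The mesh extraction of S301 amounts to thresholding a per-pixel grayscale difference; a minimal sketch follows (the threshold value 30 is illustrative, not taken from the patent):

```python
def extract_mesh(img_x, img_y, diff_threshold=30):
    """Binary mesh map per S301: mark each pixel whose grayscale
    difference between the two images exceeds the preset difference
    threshold."""
    return [[1 if abs(px - py) > diff_threshold else 0
             for px, py in zip(rx, ry)]
            for rx, ry in zip(img_x, img_y)]

mesh1 = extract_mesh([[100, 100], [100, 100]],   # meshed sample a
                     [[100, 180], [180, 100]])   # mesh-free sample b
```

Applying the same function to (a, a') and (b, b') yields the second and third mesh images used in S302.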
A specific implementation of calculating the image distance in the third embodiment of the present application includes:
calculating the image distance L based on formula (4):
[Formula (4) is provided as an image in the original: PCTCN2019118652-appb-000001]
where n is the total number of pixels in the first mesh image or the second mesh image, and x_i and y_i are the pixel values of the i-th pixel of the first mesh image and the second mesh image, respectively.
In this embodiment of the present application, the pixels of the two mesh images are compared one by one to obtain the pixel value differences, and the reciprocal of the difference is then taken to obtain the required image distance. Likewise, the image distance between the first mesh image and the third mesh image can be calculated according to formula (4), which is not repeated here.
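Formula (4) is provided only as an image in the source. Following the surrounding text (pixel-by-pixel differences, then a reciprocal), one possible reading is sketched below; the exact functional form is an assumption:

```python
def image_distance(img_x, img_y):
    """A reconstruction of formula (4): reciprocal of the summed
    absolute pixel differences. The exact form is an assumption, since
    the formula appears only as an image in the source; identical
    images produce an unbounded value, which a caller would need to
    handle."""
    total = sum(abs(px - py)
                for rx, ry in zip(img_x, img_y)
                for px, py in zip(rx, ry))
    return float('inf') if total == 0 else 1.0 / total

d12 = image_distance([[0, 1], [1, 0]],   # first mesh image
                     [[0, 1], [0, 1]])   # second mesh image
```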
Corresponding to the methods of the above embodiments, FIG. 4 shows a structural block diagram of the mesh image recognition apparatus provided by an embodiment of the present application; for ease of description, only the parts related to the embodiments of the present application are shown. The mesh image recognition apparatus illustrated in FIG. 4 may be the execution subject of the mesh image recognition method provided by the first embodiment.
Referring to FIG. 4, the mesh image recognition apparatus includes:
a mesh removal module 41, configured to input the image to be processed into a pre-trained mesh removal model to obtain a mesh-free image to be processed, the mesh removal model being a model trained in advance on meshed image samples and mesh-free image samples and used to remove the mesh from an image;
a mesh reconstruction module 42, configured to calculate grayscale differences between the image to be processed and the mesh-free image to be processed, and to perform mesh reconstruction based on the calculated grayscale differences and a preset difference threshold to obtain the corresponding mesh pattern;
a feature processing module 43, configured to count the number of pixels contained in the mesh pattern, calculate the ratio of that number to the total number of pixels in the image to be processed, and perform pattern matching on the mesh pattern against a preset mesh pattern library; and
a mesh discrimination module 44, configured to determine that the image to be processed is a meshed image if the number of mesh pixels is greater than a preset count threshold, the ratio is greater than a preset ratio threshold, and the mesh pattern is matched successfully.
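The decision made by the mesh discrimination module 44 combines the three checks produced by the feature processing stage; the threshold values below are illustrative placeholders, not values taken from the patent:

```python
def is_mesh_image(mesh_pixel_count, total_pixel_count, pattern_matched,
                  count_threshold=500, ratio_threshold=0.05):
    """Final decision of module 44: the image is judged to be a meshed
    image only if the mesh pixel count, the pixel ratio, and the
    pattern-library match all pass."""
    ratio = mesh_pixel_count / total_pixel_count
    return (mesh_pixel_count > count_threshold
            and ratio > ratio_threshold
            and pattern_matched)
```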
Further, the mesh image recognition apparatus also includes:
a sample acquisition module, configured to acquire multiple pairs of meshed image samples and mesh-free image samples, where each pair contains one meshed image sample and one mesh-free image sample, and the two samples in each pair differ only in the mesh;
a generator construction module, configured to construct a mesh removal generator G(x) and a mesh addition generator F(x), where Dg(G(x)) denotes the probability, given by the discrimination network Dg(x), that an image produced by G(x) is mesh-free, and Df(F(x)) denotes the probability, given by the discrimination network Df(x), that an image produced by F(x) is a meshed image;
a loss value calculation module, configured to process the meshed image sample a with G(x) and the mesh-free image sample b with F(x) to obtain the corresponding processed images, to calculate the first loss value, corresponding to G(x) and F(x), based on a, b, the processed images, Dg(G(a)), and Df(F(b)), and to calculate the second and third loss values, corresponding to Dg(x) and Df(x), based on Dg(G(a)) and Df(F(b));
a difference calculation module, configured to calculate the image difference degree between a, b, and the processed images;
an iterative update module, configured to iteratively update Dg(x) and Df(x) if the second loss value and/or the third loss value is greater than or equal to the corresponding preset loss value threshold, and to iteratively update G(x) and F(x) if the first loss value is greater than or equal to the corresponding preset loss value threshold and/or the image difference degree is greater than or equal to the preset difference degree threshold; and
a model output module, configured to complete the model training of the mesh removal generator G(x) and obtain the mesh removal model if the first, second, and third loss values are all less than their corresponding preset loss value thresholds and the image difference degree is less than the preset difference degree threshold.
Further, the loss value calculation module includes:
with a' being the image obtained by processing the meshed image sample a with G(x), a″ the image obtained by processing a' with F(x), b' the image obtained by processing the mesh-free image sample b with F(x), and b″ the image obtained by processing b' with G(x), calculating the first loss value based on formula (1):
Lg = -(log10(Dg(G(a))) - log10(Df(F(b)))) + Lcyc,
Lcyc = L1 Loss(a″, a) × lambda_a + L1 Loss(b″, b) × lambda_b + L1 Loss(a, b') × lambda_c + L1 Loss(b, a') × lambda_d   (1)
where Lg is the first loss value, L1 Loss(x, y) denotes the Euclidean distance between two images, and lambda_a, lambda_b, lambda_c, and lambda_d denote preset weights.
Further, the loss value calculation module also includes:
calculating the second loss value Ldg and the third loss value Ldf based on formulas (2) and (3):
Ldg = -log10(Dg(G(a)) - 0.5) + log10(1.5 - Dg(G(a)))   (2)
Ldf = -log10(Df(F(b)) - 0.5) + log10(1.5 - Df(F(b)))   (3)
Further, the difference calculation module includes:
a mesh image extraction module, configured to perform grayscale difference operations on a and b, on a and a', and on b and b' respectively, and to perform mesh extraction based on the obtained grayscale differences and the preset difference threshold to obtain a corresponding first mesh image, second mesh image, and third mesh image, where a' is the image obtained by processing the meshed image sample a with G(x), and b' is the image obtained by processing the mesh-free image sample b with F(x); and
an image difference calculation module, configured to calculate the image distance between the first mesh image and the second mesh image and the image distance between the first mesh image and the third mesh image, and to calculate the difference between the two image distances to obtain the image difference degree.
Further, the image difference calculation module includes:
calculating the image distance L based on formula (4):
[Formula (4) is provided as an image in the original: PCTCN2019118652-appb-000002]
where n is the total number of pixels in the first mesh image or the second mesh image, and x_i and y_i are the pixel values of the i-th pixel of the first mesh image and the second mesh image, respectively.
For the process by which each module of the mesh image recognition apparatus provided by the embodiments of the present application realizes its function, reference may be made to the description of the first embodiment shown in FIG. 1, which is not repeated here.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
It should also be understood that although the terms "first", "second", etc. are used in some embodiments of the present application to describe various elements, these elements should not be limited by the terms, which serve only to distinguish one element from another. For example, a first table could be named a second table, and similarly a second table could be named a first table, without departing from the scope of the various described embodiments; the first table and the second table are both tables, but they are not the same table.
FIG. 5 is a schematic diagram of a terminal device provided by an embodiment of the present application. As shown in FIG. 5, the terminal device 5 of this embodiment includes a processor 50 and a memory 51, the memory 51 storing computer-readable instructions 52 executable on the processor 50. When executing the computer-readable instructions 52, the processor 50 implements the steps of the mesh image recognition method embodiments above, such as steps 101 to 104 shown in FIG. 1; alternatively, when executing the computer-readable instructions 52, the processor 50 implements the functions of the modules/units in the apparatus embodiments above, such as the functions of modules 41 to 44 shown in FIG. 4.
The terminal device 5 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. The terminal device may include, but is not limited to, the processor 50 and the memory 51. Those skilled in the art will understand that FIG. 5 is merely an example of the terminal device 5 and does not constitute a limitation on it; the device may include more or fewer components than shown, combine certain components, or use different components. For example, the terminal device may also include input and sending devices, network access devices, buses, and the like.
The processor 50 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 51 may be an internal storage unit of the terminal device 5, such as a hard disk or memory of the terminal device 5. The memory 51 may also be an external storage device of the terminal device 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the terminal device 5. Further, the memory 51 may include both an internal storage unit and an external storage device of the terminal device 5. The memory 51 is used to store the computer-readable instructions and other programs and data required by the terminal device, and may also be used to temporarily store data that has been sent or is to be sent.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist separately and physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
Those of ordinary skill in the art will understand that all or part of the processes in the methods of the above embodiments may be completed by computer-readable instructions directing the relevant hardware; the computer-readable instructions may be stored in a non-volatile computer-readable storage medium, and when executed may include the processes of the method embodiments above. Any reference to memory, storage, a database, or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features may be replaced with equivalents; such modifications and replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application, and shall all fall within the scope of protection of this application.

Claims (20)

  1. A method for recognizing a mesh (reticulate-pattern) image, comprising:
    inputting an image to be processed into a pre-trained mesh removal model to obtain a mesh-free image to be processed, the mesh removal model being a model trained in advance on mesh image samples and mesh-free image samples and used to remove the mesh pattern from an image;
    calculating gray-value differences between the image to be processed and the mesh-free image to be processed, and performing mesh reconstruction based on the calculated gray-value differences and a preset difference threshold to obtain a corresponding mesh pattern;
    counting the number of pixels contained in the mesh pattern, calculating the ratio of that number of pixels to the total number of pixels in the image to be processed, and performing pattern matching on the mesh pattern against a preset mesh pattern library;
    if the number of mesh pixels is greater than a preset count threshold, the ratio is greater than a preset ratio threshold, and the mesh pattern is successfully matched, determining that the image to be processed is a mesh image.
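The detection pipeline of claim 1 can be sketched in a few lines. This is an illustrative sketch only: the threshold values, array shapes, and function names are assumptions rather than part of the claim, and the pattern-library match is abstracted into a boolean input.

```python
import numpy as np

def reconstruct_mesh(image, demeshed, diff_threshold=30):
    # Gray-value difference between the original and the de-meshed image;
    # pixels whose difference exceeds the preset threshold are taken as mesh.
    diff = np.abs(image.astype(np.int32) - demeshed.astype(np.int32))
    return diff > diff_threshold

def is_mesh_image(image, demeshed, pattern_matched,
                  diff_threshold=30, count_threshold=500, ratio_threshold=0.02):
    # All three conditions of the claim must hold: pixel count, pixel
    # ratio, and a successful match against the mesh pattern library.
    mesh = reconstruct_mesh(image, demeshed, diff_threshold)
    count = int(mesh.sum())
    ratio = count / mesh.size
    return count > count_threshold and ratio > ratio_threshold and pattern_matched
```

In use, `demeshed` would be the output of the trained mesh removal model for the same input image, so the difference map isolates exactly the removed mesh pixels.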
  2. The mesh image recognition method according to claim 1, wherein the training of the mesh removal model comprises:
    obtaining multiple pairs of mesh image samples and mesh-free image samples, wherein each pair of image samples contains one mesh image sample and one mesh-free image sample, and the two samples in each pair differ only in the mesh pattern;
    constructing a mesh removal generator G(x) and a mesh addition generator F(x), where Dg(G(x)) denotes the probability, given by the discrimination network Dg(x), that the output of G(x) is a mesh-free image, and Df(F(x)) denotes the probability, given by the discrimination network Df(x), that the output of F(x) is a mesh image;
    processing a mesh image sample a and a mesh-free image sample b with G(x) and F(x) respectively to obtain corresponding processed images, calculating a first loss value for G(x) and F(x) based on a, b, the processed images, Dg(G(a)) and Df(F(b)), and calculating a second loss value and a third loss value for Dg(x) and Df(x) based on Dg(G(a)) and Df(F(b));
    calculating the image difference among a, b, and the processed images;
    if the second loss value and/or the third loss value is greater than or equal to the corresponding preset loss threshold, iteratively updating Dg(x) and Df(x); if the first loss value is greater than or equal to the corresponding preset loss threshold and/or the image difference is greater than or equal to the preset difference threshold, iteratively updating G(x) and F(x);
    if the first loss value, the second loss value, and the third loss value are all less than their corresponding preset loss thresholds and the image difference is less than the preset difference threshold, completing the training of the mesh removal generator G(x) to obtain the mesh removal model.
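The update schedule described in the training procedure above can be expressed as a small decision function. This is a hedged sketch: the threshold names and the idea of returning flags for the two optimizer groups are illustrative assumptions, not text from the claim.

```python
def update_plan(lg, ldg, ldf, diff,
                thr_lg, thr_ldg, thr_ldf, thr_diff):
    # Per the claim: the discriminators Dg/Df are updated while either of
    # their losses stays at or above its threshold; the generators G/F are
    # updated while the first loss or the image difference stays at or
    # above threshold; training stops once everything is below threshold.
    update_discriminators = ldg >= thr_ldg or ldf >= thr_ldf
    update_generators = lg >= thr_lg or diff >= thr_diff
    done = not update_discriminators and not update_generators
    return update_generators, update_discriminators, done
```

A training loop would call this once per iteration with the freshly computed Lg, Ldg, Ldf, and image difference, stepping only the networks whose flag is set.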
  3. The mesh image recognition method according to claim 2, wherein processing the mesh image sample a and the mesh-free image sample b with G(x) and F(x) respectively to obtain the corresponding processed images, and calculating the first loss value for G(x) and F(x) based on a, b, the processed images, Dg(G(a)) and Df(F(b)), comprises:
    with a' being the image obtained by processing the mesh image sample a with G(x), a" the image obtained by processing a' with F(x), b' the image obtained by processing the mesh-free image sample b with F(x), and b" the image obtained by processing b' with G(x), calculating the first loss value based on the following formulas:
    Lg = -(log10(Dg(G(a))) - log10(Df(F(b)))) + Lcyc,
    Lcyc = L1 Loss(a", a) × lambda_a + L1 Loss(b", b) × lambda_b +
           L1 Loss(a, b') × lambda_c + L1 Loss(b, a') × lambda_d
    where Lg is the first loss value, L1 Loss(x, y) denotes the Euclidean distance between two images, and lambda_a, lambda_b, lambda_c, and lambda_d are preset weights.
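Numerically, the first loss can be sketched as follows. The lambda defaults are illustrative assumptions (the claim leaves the preset weights unspecified), and `l1_loss` implements the Euclidean distance that the claim assigns to the name "L1 Loss".

```python
import numpy as np

def l1_loss(x, y):
    # Despite the name, the document defines "L1 Loss" as the Euclidean
    # distance between the two images.
    return float(np.sqrt(((x - y) ** 2).sum()))

def first_loss(a, b, a1, a2, b1, b2, dg_ga, df_fb,
               lambda_a=10.0, lambda_b=10.0, lambda_c=1.0, lambda_d=1.0):
    # a1 = G(a), a2 = F(G(a)); b1 = F(b), b2 = G(F(b)).
    # dg_ga = Dg(G(a)), df_fb = Df(F(b)) are discriminator probabilities.
    lcyc = (l1_loss(a2, a) * lambda_a + l1_loss(b2, b) * lambda_b
            + l1_loss(a, b1) * lambda_c + l1_loss(b, a1) * lambda_d)
    return -(np.log10(dg_ga) - np.log10(df_fb)) + lcyc
```

When both cycles reconstruct their inputs perfectly and the two discriminator probabilities are equal, Lcyc and the log terms both vanish, so the first loss is zero.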
  4. The mesh image recognition method according to claim 2, wherein calculating the second loss value and the third loss value for Dg(x) and Df(x) based on Dg(G(a)) and Df(F(b)) comprises:
    calculating the second loss value Ldg and the third loss value Ldf based on the following formulas:
    Ldg = -log10(Dg(G(a)) - 0.5) + log10(1.5 - Dg(G(a)))
    Ldf = -log10(Df(F(b)) - 0.5) + log10(1.5 - Df(F(b))).
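Both discriminator losses share one functional form, which can be sketched directly. Note, as an observation about the formula rather than a statement from the claim, that the probability argument must lie strictly between 0.5 and 1.5 for both logarithms to be defined.

```python
import math

def discriminator_loss(p):
    # Ldg and Ldf per the claim, with p = Dg(G(a)) or p = Df(F(b)).
    # Defined only for 0.5 < p < 1.5; the loss is zero at p = 1.0 and
    # grows as p approaches 0.5 (discriminator fooled).
    return -math.log10(p - 0.5) + math.log10(1.5 - p)
```

Under this reading, both discriminators are trained toward outputs near 1.0, where their loss bottoms out at zero.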
  5. The mesh image recognition method according to claim 3, wherein calculating the image difference among a, b, and the processed images comprises:
    performing gray-value difference operations on a and b, on a and a', and on b and b' respectively, and performing mesh extraction based on the obtained gray-value differences and the preset difference threshold to obtain a corresponding first mesh image, second mesh image, and third mesh image, where a' is the image obtained by processing the mesh image sample a with G(x), and b' is the image obtained by processing the mesh-free image sample b with F(x);
    calculating the image distance between the first mesh image and the second mesh image, and the image distance between the first mesh image and the third mesh image, and calculating the difference between the two image distances to obtain the image difference.
  6. The mesh image recognition method according to claim 5, wherein calculating the image distance between the first mesh image and the second mesh image comprises:
    calculating the image distance L based on the following formula:
    L = sqrt( sum_{i=1}^{n} (x_i - y_i)^2 )
    where n is the total number of pixels in the first mesh image or the second mesh image, and x_i and y_i are the pixel values of the i-th pixel of the first mesh image and the second mesh image, respectively.
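The distance in claim 6 and the difference degree in claim 5 compose as below. This is a sketch under the assumption that the equation lost in the published text is the plain per-pixel Euclidean distance, which matches the document's own description of "L1 Loss(x, y)".

```python
import numpy as np

def image_distance(x, y):
    # L = sqrt(sum_i (x_i - y_i)^2) taken over all n pixels.
    dx = x.ravel().astype(float) - y.ravel().astype(float)
    return float(np.sqrt((dx ** 2).sum()))

def image_difference(mesh1, mesh2, mesh3):
    # Claim 5: the gap between d(first, second) and d(first, third),
    # where the three arguments are the extracted mesh images.
    return image_distance(mesh1, mesh2) - image_distance(mesh1, mesh3)
```

A small difference degree indicates that the mesh extracted from the raw pair (a, b) and the meshes reconstructed through G(x) and F(x) are mutually consistent, which is the training stopping signal in claim 2.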
  7. A mesh image recognition apparatus, comprising:
    a mesh removal module, configured to input an image to be processed into a pre-trained mesh removal model to obtain a mesh-free image to be processed, the mesh removal model being a model trained in advance on mesh image samples and mesh-free image samples and used to remove the mesh pattern from an image;
    a mesh reconstruction module, configured to calculate gray-value differences between the image to be processed and the mesh-free image to be processed, and perform mesh reconstruction based on the calculated gray-value differences and a preset difference threshold to obtain a corresponding mesh pattern;
    a feature processing module, configured to count the number of pixels contained in the mesh pattern, calculate the ratio of that number of pixels to the total number of pixels in the image to be processed, and perform pattern matching on the mesh pattern against a preset mesh pattern library;
    a mesh discrimination module, configured to determine that the image to be processed is a mesh image if the number of mesh pixels is greater than a preset count threshold, the ratio is greater than a preset ratio threshold, and the mesh pattern is successfully matched.
  8. The mesh image recognition apparatus according to claim 7, further comprising:
    a sample acquisition module, configured to obtain multiple pairs of mesh image samples and mesh-free image samples, wherein each pair of image samples contains one mesh image sample and one mesh-free image sample, and the two samples in each pair differ only in the mesh pattern;
    a generator construction module, configured to construct a mesh removal generator G(x) and a mesh addition generator F(x), where Dg(G(x)) denotes the probability, given by the discrimination network Dg(x), that the output of G(x) is a mesh-free image, and Df(F(x)) denotes the probability, given by the discrimination network Df(x), that the output of F(x) is a mesh image;
    a loss value calculation module, configured to process a mesh image sample a and a mesh-free image sample b with G(x) and F(x) respectively to obtain corresponding processed images, calculate a first loss value for G(x) and F(x) based on a, b, the processed images, Dg(G(a)) and Df(F(b)), and calculate a second loss value and a third loss value for Dg(x) and Df(x) based on Dg(G(a)) and Df(F(b));
    a difference calculation module, configured to calculate the image difference among a, b, and the processed images;
    an iterative update module, configured to iteratively update Dg(x) and Df(x) if the second loss value and/or the third loss value is greater than or equal to the corresponding preset loss threshold, and to iteratively update G(x) and F(x) if the first loss value is greater than or equal to the corresponding preset loss threshold and/or the image difference is greater than or equal to the preset difference threshold;
    a model output module, configured to complete the training of the mesh removal generator G(x) to obtain the mesh removal model if the first loss value, the second loss value, and the third loss value are all less than their corresponding preset loss thresholds and the image difference is less than the preset difference threshold.
  9. The mesh image recognition apparatus according to claim 8, wherein the loss value calculation module is configured to:
    with a' being the image obtained by processing the mesh image sample a with G(x), a" the image obtained by processing a' with F(x), b' the image obtained by processing the mesh-free image sample b with F(x), and b" the image obtained by processing b' with G(x), calculate the first loss value based on the following formulas:
    Lg = -(log10(Dg(G(a))) - log10(Df(F(b)))) + Lcyc,
    Lcyc = L1 Loss(a", a) × lambda_a + L1 Loss(b", b) × lambda_b +
           L1 Loss(a, b') × lambda_c + L1 Loss(b, a') × lambda_d
    where Lg is the first loss value, L1 Loss(x, y) denotes the Euclidean distance between two images, and lambda_a, lambda_b, lambda_c, and lambda_d are preset weights.
  10. The mesh image recognition apparatus according to claim 8, wherein the loss value calculation module is further configured to:
    calculate the second loss value Ldg and the third loss value Ldf based on the following formulas:
    Ldg = -log10(Dg(G(a)) - 0.5) + log10(1.5 - Dg(G(a)))
    Ldf = -log10(Df(F(b)) - 0.5) + log10(1.5 - Df(F(b))).
  11. The mesh image recognition apparatus according to claim 9, wherein the iterative update module comprises:
    a mesh image extraction module, configured to perform gray-value difference operations on a and b, on a and a', and on b and b' respectively, and perform mesh extraction based on the obtained gray-value differences and the preset difference threshold to obtain a corresponding first mesh image, second mesh image, and third mesh image, where a' is the image obtained by processing the mesh image sample a with G(x), and b' is the image obtained by processing the mesh-free image sample b with F(x);
    an image difference calculation module, configured to calculate the image distance between the first mesh image and the second mesh image, and the image distance between the first mesh image and the third mesh image, and calculate the difference between the two image distances to obtain the image difference.
  12. The mesh image recognition apparatus according to claim 11, wherein
    the image distance L is calculated based on the following formula:
    L = sqrt( sum_{i=1}^{n} (x_i - y_i)^2 )
    where n is the total number of pixels in the first mesh image or the second mesh image, and x_i and y_i are the pixel values of the i-th pixel of the first mesh image and the second mesh image, respectively.
  13. A terminal device, comprising a memory and a processor, the memory storing computer-readable instructions executable on the processor, wherein the processor, when executing the computer-readable instructions, implements the following steps:
    inputting an image to be processed into a pre-trained mesh removal model to obtain a mesh-free image to be processed, the mesh removal model being a model trained in advance on mesh image samples and mesh-free image samples and used to remove the mesh pattern from an image;
    calculating gray-value differences between the image to be processed and the mesh-free image to be processed, and performing mesh reconstruction based on the calculated gray-value differences and a preset difference threshold to obtain a corresponding mesh pattern;
    counting the number of pixels contained in the mesh pattern, calculating the ratio of that number of pixels to the total number of pixels in the image to be processed, and performing pattern matching on the mesh pattern against a preset mesh pattern library;
    if the number of mesh pixels is greater than a preset count threshold, the ratio is greater than a preset ratio threshold, and the mesh pattern is successfully matched, determining that the image to be processed is a mesh image.
  14. The terminal device according to claim 13, wherein the training of the mesh removal model comprises:
    obtaining multiple pairs of mesh image samples and mesh-free image samples, wherein each pair of image samples contains one mesh image sample and one mesh-free image sample, and the two samples in each pair differ only in the mesh pattern;
    constructing a mesh removal generator G(x) and a mesh addition generator F(x), where Dg(G(x)) denotes the probability, given by the discrimination network Dg(x), that the output of G(x) is a mesh-free image, and Df(F(x)) denotes the probability, given by the discrimination network Df(x), that the output of F(x) is a mesh image;
    processing a mesh image sample a and a mesh-free image sample b with G(x) and F(x) respectively to obtain corresponding processed images, calculating a first loss value for G(x) and F(x) based on a, b, the processed images, Dg(G(a)) and Df(F(b)), and calculating a second loss value and a third loss value for Dg(x) and Df(x) based on Dg(G(a)) and Df(F(b));
    calculating the image difference among a, b, and the processed images;
    if the second loss value and/or the third loss value is greater than or equal to the corresponding preset loss threshold, iteratively updating Dg(x) and Df(x); if the first loss value is greater than or equal to the corresponding preset loss threshold and/or the image difference is greater than or equal to the preset difference threshold, iteratively updating G(x) and F(x);
    if the first loss value, the second loss value, and the third loss value are all less than their corresponding preset loss thresholds and the image difference is less than the preset difference threshold, completing the training of the mesh removal generator G(x) to obtain the mesh removal model.
  15. The terminal device according to claim 14, wherein processing the mesh image sample a and the mesh-free image sample b with G(x) and F(x) respectively to obtain the corresponding processed images, and calculating the first loss value for G(x) and F(x) based on a, b, the processed images, Dg(G(a)) and Df(F(b)), comprises:
    with a' being the image obtained by processing the mesh image sample a with G(x), a" the image obtained by processing a' with F(x), b' the image obtained by processing the mesh-free image sample b with F(x), and b" the image obtained by processing b' with G(x), calculating the first loss value based on the following formulas:
    Lg = -(log10(Dg(G(a))) - log10(Df(F(b)))) + Lcyc,
    Lcyc = L1 Loss(a", a) × lambda_a + L1 Loss(b", b) × lambda_b +
           L1 Loss(a, b') × lambda_c + L1 Loss(b, a') × lambda_d
    where Lg is the first loss value, L1 Loss(x, y) denotes the Euclidean distance between two images, and lambda_a, lambda_b, lambda_c, and lambda_d are preset weights.
  16. The terminal device according to claim 14, wherein calculating the second loss value and the third loss value for Dg(x) and Df(x) based on Dg(G(a)) and Df(F(b)) comprises:
    calculating the second loss value Ldg and the third loss value Ldf based on the following formulas:
    Ldg = -log10(Dg(G(a)) - 0.5) + log10(1.5 - Dg(G(a)))
    Ldf = -log10(Df(F(b)) - 0.5) + log10(1.5 - Df(F(b))).
  17. A computer-readable storage medium storing computer-readable instructions, wherein the computer-readable instructions, when executed by at least one processor, implement the following steps:
    inputting an image to be processed into a pre-trained mesh removal model to obtain a mesh-free image to be processed, the mesh removal model being a model trained in advance on mesh image samples and mesh-free image samples and used to remove the mesh pattern from an image;
    calculating gray-value differences between the image to be processed and the mesh-free image to be processed, and performing mesh reconstruction based on the calculated gray-value differences and a preset difference threshold to obtain a corresponding mesh pattern;
    counting the number of pixels contained in the mesh pattern, calculating the ratio of that number of pixels to the total number of pixels in the image to be processed, and performing pattern matching on the mesh pattern against a preset mesh pattern library;
    if the number of mesh pixels is greater than a preset count threshold, the ratio is greater than a preset ratio threshold, and the mesh pattern is successfully matched, determining that the image to be processed is a mesh image.
  18. The computer-readable storage medium according to claim 17, wherein the training of the mesh removal model comprises:
    obtaining multiple pairs of mesh image samples and mesh-free image samples, wherein each pair of image samples contains one mesh image sample and one mesh-free image sample, and the two samples in each pair differ only in the mesh pattern;
    constructing a mesh removal generator G(x) and a mesh addition generator F(x), where Dg(G(x)) denotes the probability, given by the discrimination network Dg(x), that the output of G(x) is a mesh-free image, and Df(F(x)) denotes the probability, given by the discrimination network Df(x), that the output of F(x) is a mesh image;
    processing a mesh image sample a and a mesh-free image sample b with G(x) and F(x) respectively to obtain corresponding processed images, calculating a first loss value for G(x) and F(x) based on a, b, the processed images, Dg(G(a)) and Df(F(b)), and calculating a second loss value and a third loss value for Dg(x) and Df(x) based on Dg(G(a)) and Df(F(b));
    calculating the image difference among a, b, and the processed images;
    if the second loss value and/or the third loss value is greater than or equal to the corresponding preset loss threshold, iteratively updating Dg(x) and Df(x); if the first loss value is greater than or equal to the corresponding preset loss threshold and/or the image difference is greater than or equal to the preset difference threshold, iteratively updating G(x) and F(x);
    if the first loss value, the second loss value, and the third loss value are all less than their corresponding preset loss thresholds and the image difference is less than the preset difference threshold, completing the training of the mesh removal generator G(x) to obtain the mesh removal model.
  19. The computer-readable storage medium according to claim 17, wherein processing the mesh image sample a and the mesh-free image sample b with G(x) and F(x) respectively to obtain the corresponding processed images, and calculating the first loss value for G(x) and F(x) based on a, b, the processed images, Dg(G(a)) and Df(F(b)), comprises:
    with a' being the image obtained by processing the mesh image sample a with G(x), a" the image obtained by processing a' with F(x), b' the image obtained by processing the mesh-free image sample b with F(x), and b" the image obtained by processing b' with G(x), calculating the first loss value based on the following formulas:
    Lg = -(log10(Dg(G(a))) - log10(Df(F(b)))) + Lcyc,
    Lcyc = L1 Loss(a", a) × lambda_a + L1 Loss(b", b) × lambda_b +
           L1 Loss(a, b') × lambda_c + L1 Loss(b, a') × lambda_d
    where Lg is the first loss value, L1 Loss(x, y) denotes the Euclidean distance between two images, and lambda_a, lambda_b, lambda_c, and lambda_d are preset weights.
  20. The computer-readable storage medium according to claim 17, wherein calculating the second loss value and the third loss value for Dg(x) and Df(x) based on Dg(G(a)) and Df(F(b)) comprises:
    calculating the second loss value Ldg and the third loss value Ldf based on the following formulas:
    Ldg = -log10(Dg(G(a)) - 0.5) + log10(1.5 - Dg(G(a)))
    Ldf = -log10(Df(F(b)) - 0.5) + log10(1.5 - Df(F(b))).
PCT/CN2019/118652 2019-08-09 2019-11-15 Reticulate pattern-containing image recognition method and apparatus, and terminal device and medium WO2021027163A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910736543.2A CN110647805B (en) 2019-08-09 2019-08-09 Reticulate pattern image recognition method and device and terminal equipment
CN201910736543.2 2019-08-09

Publications (1)

Publication Number Publication Date
WO2021027163A1 true WO2021027163A1 (en) 2021-02-18

Family

ID=68990095

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/118652 WO2021027163A1 (en) 2019-08-09 2019-11-15 Reticulate pattern-containing image recognition method and apparatus, and terminal device and medium

Country Status (2)

Country Link
CN (1) CN110647805B (en)
WO (1) WO2021027163A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112819016A (en) * 2021-02-19 2021-05-18 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5787186A (en) * 1994-03-21 1998-07-28 I.D. Tec, S.L. Biometric security process for authenticating identity and credit cards, visas, passports and facial recognition
CN106548159A (en) * 2016-11-08 2017-03-29 中国科学院自动化研究所 Reticulate pattern facial image recognition method and device based on full convolutional neural networks
CN107766844A (en) * 2017-11-13 2018-03-06 杭州有盾网络科技有限公司 Method, apparatus, equipment of a kind of reticulate pattern according to recognition of face
CN109426775A (en) * 2017-08-25 2019-03-05 株式会社日立制作所 The method, device and equipment of reticulate pattern in a kind of detection facial image

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105760859B (en) * 2016-03-22 2018-12-21 中国科学院自动化研究所 Reticulate pattern facial image recognition method and device based on multitask convolutional neural networks
CN107818308A (en) * 2017-10-31 2018-03-20 平安科技(深圳)有限公司 Intelligent face recognition comparison method, electronic device and computer-readable storage medium
CN108734673B (en) * 2018-04-20 2019-11-15 平安科技(深圳)有限公司 Descreening system training method, descreening method, apparatus, device and medium
CN109871755A (en) * 2019-01-09 2019-06-11 中国平安人寿保险股份有限公司 Identity verification method based on face recognition
CN110032931B (en) * 2019-03-01 2023-06-13 创新先进技术有限公司 Method and device for generative adversarial network training and descreening, and electronic device


Also Published As

Publication number Publication date
CN110647805B (en) 2023-10-31
CN110647805A (en) 2020-01-03

Similar Documents

Publication Publication Date Title
US11403876B2 (en) Image processing method and apparatus, facial recognition method and apparatus, and computer device
CN108875732B (en) Model training and instance segmentation method, device and system and storage medium
US10726244B2 (en) Method and apparatus detecting a target
WO2019200702A1 (en) Descreening system training method and apparatus, descreening method and apparatus, device, and medium
CN109829506B (en) Image processing method, image processing device, electronic equipment and computer storage medium
WO2021258699A1 (en) Image identification method and apparatus, and electronic device and computer-readable medium
CN110738236B (en) Image matching method and device, computer equipment and storage medium
CN110032583B (en) Fraudulent party identification method and device, readable storage medium and terminal equipment
CN111401521B (en) Neural network model training method and device, and image recognition method and device
WO2020056968A1 (en) Data denoising method and apparatus, computer device, and storage medium
CN112580668B (en) Background fraud detection method and device and electronic equipment
CN111079816A (en) Image auditing method and device and server
WO2023065744A1 (en) Face recognition method and apparatus, device and storage medium
CN115631112B (en) Building contour correction method and device based on deep learning
CN111914908A (en) Image recognition model training method, image recognition method and related equipment
CN113269149A (en) Living body face image detection method and device, computer equipment and storage medium
CN108875502B (en) Face recognition method and device
CN113052577A (en) Method and system for estimating category of virtual address of block chain digital currency
WO2021042544A1 (en) Facial verification method and apparatus based on mesh removal model, and computer device and storage medium
WO2021027163A1 (en) Reticulate pattern-containing image recognition method and apparatus, and terminal device and medium
CN115545103A (en) Abnormal data identification method, label identification method and abnormal data identification device
TWI803243B (en) Method for expanding images, computer device and storage medium
CN113128278A (en) Image identification method and device
CN111369489A (en) Image identification method and device and terminal equipment
CN113989632A (en) Bridge detection method and device for remote sensing image, electronic equipment and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19941356

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19941356

Country of ref document: EP

Kind code of ref document: A1