CN108230269B - Grid removal method, device, equipment and storage medium based on deep residual network - Google Patents

Grid removal method, device, equipment and storage medium based on deep residual network

Info

Publication number
CN108230269B
Authority
CN
China
Prior art keywords: network, image, convolutional layer, grid, layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711458971.0A
Other languages
Chinese (zh)
Other versions
CN108230269A (en)
Inventor
杨东
王栋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Athena Eyes Co Ltd
Original Assignee
Athena Eyes Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Athena Eyes Co Ltd filed Critical Athena Eyes Co Ltd
Priority to CN201711458971.0A priority Critical patent/CN108230269B/en
Publication of CN108230269A publication Critical patent/CN108230269A/en
Application granted granted Critical
Publication of CN108230269B publication Critical patent/CN108230269B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T5/77
    • G06F18/00 Pattern recognition
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]

Abstract

The invention discloses a grid removal method, device, equipment and storage medium based on a deep residual network. The grid removal method uses a full convolution network based on deep residuals as the basic network and comprises the following steps: training the basic network with a training image set to obtain a trained basic network; and performing grid removal processing on the image to be de-gridded by using the trained basic network to obtain a de-gridded image. By using a full convolution network based on deep residuals as the basic network, the receptive field of the convolutions is enlarged, so that multi-scale information (more of the high-frequency and low-frequency information of the image) is introduced into the grid removal method, effectively improving the effect of the deep-learning grid removal algorithm. This avoids the dilemma of existing grid removal algorithms, whose grid removal effect is limited either by a large amount of computation or by a limited information scale, strengthens the application of deep learning algorithms in the field of image grid removal, and has broad value for popularization and application.

Description

Grid removal method, device, equipment and storage medium based on deep residual network
Technical Field
The present invention relates to the field of face recognition, and in particular to a grid removal method, apparatus, device, and storage medium based on a deep residual network.
Background
With the development of deep learning, face recognition has become more and more widely adopted in various application scenarios. In the financial payment industry in particular, face recognition, as a card-free and password-free application, is simple and fast and is increasingly favored by banks. However, in some application scenarios, in order to protect user privacy, a grid watermark is added to the official certificate photo obtained by the bank, which seriously degrades the face recognition result, and various grid removal algorithms have therefore emerged. In addition to grid removal algorithms following conventional image-processing ideas, grid removal algorithms based on deep learning have also appeared. However, these algorithms are generally improvements on earlier classification networks. For example, CN107424131A discloses an image de-gridding method and device based on deep learning, which constructs grid images online from grid templates, generates multiple classes of grid data corresponding to the grid templates, and uses the multi-class grid data as training data to train a classification network and a full convolution network respectively; the trained classification network classifies the grid images to be de-gridded, and the trained full convolution network performs grid removal on the classified grid images according to the classification result. Because grid removal relies on the information redundancy between the current pixel and pixels far away from it in the image, the small receptive field of the existing full convolution network limits the information scale, prevents comprehensive sampling of the low-frequency information of the image, and leaves the grid removal effect in need of improvement.
Disclosure of Invention
The invention provides a grid removal method, device, equipment and storage medium based on a deep residual network, and aims to solve the technical problem that existing grid removal algorithms suffer either from low processing efficiency caused by a large amount of computation or from a limited information scale, so that the grid removal effect still needs to be improved.
The technical scheme adopted by the invention is as follows:
according to one aspect of the present invention, there is provided a grid removal method based on a deep residual network, the grid removal method of the present invention using a full convolution network based on deep residuals as a basic network, the grid removal method of the present invention comprising:
training a basic network by adopting a training image set to obtain a trained basic network;
and performing grid removal processing on the image to be de-gridded by using the trained basic network to obtain a de-gridded image.
Further, the basic network is built from meta-networks obtained by adding dilated (extended) convolutions to the full convolution network; the basic network comprises a series of meta-networks, and every two adjacent meta-networks are connected by a residual shortcut.
Furthermore, the meta-network comprises a first network architecture and a second network architecture, and any meta-network is one of the two network architectures;
the first network architecture comprises a first convolutional layer, a first ReLU nonlinear layer, a second convolutional layer and a second ReLU nonlinear layer which are connected in sequence;
the second network architecture comprises a hybrid convolutional layer, a concat layer and a third ReLU nonlinear layer which are connected in sequence, wherein the concat layer concatenates the outputs of the hybrid convolutional layer, and the hybrid convolutional layer is composed of a third convolutional layer and a fourth convolutional layer in a certain proportion.
Further, the first convolutional layer is a 2-dilated 3x3 convolution with dilation factor l = 2, the second convolutional layer is a conventional 3x3 convolutional layer, the third convolutional layer is a 2-dilated 3x3 convolution with l = 2, the fourth convolutional layer is a conventional 3x3 convolutional layer, the ratio of the third convolutional layer to the fourth convolutional layer is dilate_ratio, and the value range of dilate_ratio is [0, 0.5].
Further, before inputting the image to be de-gridded, the grid removal method of the present invention further comprises:
preprocessing the image to be de-gridded so that the size of the preprocessed image meets a preset size requirement.
Further, a penalty function is introduced in the step of training the basic network with the training image set to obtain the trained basic network; the penalty function is the Euclidean (L2) distance computed after the pixel-wise difference between the image of the preset size reconstructed by the network and the corresponding original image meeting the preset size requirement has been multiplied by the MASK matrix corresponding to the face region of the original image.
According to another aspect of the present invention, there is also provided a grid removal device based on a deep residual network, including:
a basic network unit, which uses a full convolution network based on deep residuals as a basic network;
the training unit is used for training the basic network by adopting the training image set to obtain a trained basic network;
and a grid removal unit, configured to perform grid removal processing on the image to be de-gridded by using the trained basic network to obtain a de-gridded image.
Further, the basic network is built from meta-networks obtained by adding dilated (extended) convolutions to the full convolution network; the basic network comprises a series of meta-networks, and every two adjacent meta-networks are connected by a residual shortcut.
According to another aspect of the present invention, there is also provided an image grid removal apparatus based on a deep residual network, including a processor configured to execute a program, wherein the program, when executed, performs the grid removal method based on the deep residual network according to the present invention.
According to another aspect of the present invention, there is also provided a storage medium, which includes a stored program, wherein the program, when executed, controls an apparatus on which the storage medium is located to perform the grid removal method based on the deep residual network according to the present invention.
The invention has the following beneficial effects:
the grid removing method, the device, the equipment and the storage medium based on the depth residual error network expand the experience field of convolution by adopting the full convolution network based on the depth residual error as the basic network, thereby effectively improving the effect of a deep learning grid removing algorithm by introducing multi-scale information (more high-frequency and low-frequency information of an image) in the grid removing method, avoiding the contradiction that the grid removing effect needs to be improved due to large calculation amount of the existing grid removing algorithm or limited information scale, enhancing the application of the deep learning algorithm in the field of image grid removing, and having wide popularization and application values.
In addition to the objects, features and advantages described above, other objects, features and advantages of the present invention are also provided. The present invention will be described in further detail below with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic diagram illustrating the steps of a grid removal method based on a deep residual network according to a preferred embodiment of the present invention;
FIG. 2 is a schematic diagram of the network structure of the de-gridding basic network in the preferred embodiment of the present invention;
fig. 3 is a schematic structural diagram of a first network structure corresponding to a meta-network in a preferred embodiment of the present invention;
fig. 4 is a schematic structural diagram of a second network structure corresponding to a meta network in the preferred embodiment of the present invention;
FIG. 5 is a diagram of the corresponding receptive fields of a conventional 3x3 convolution;
fig. 6 is a diagram of the corresponding receptive field of the 2-dilated 3x3 convolution (l = 2) in the preferred embodiment of the present invention;
FIG. 7 is a schematic illustration of an original image in a preferred embodiment of the present invention;
FIG. 8 is a schematic illustration of the image of FIG. 7 after pre-processing;
FIG. 9 is a diagram of MASK matrix corresponding to a face region in an image;
fig. 10 is a schematic block diagram of a grid removal device based on a deep residual network according to a preferred embodiment of the present invention.
Detailed Description
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
The preferred embodiment of the present invention provides a grid removal method based on a deep residual network. The grid removal method of this embodiment uses a full convolution network based on deep residuals as the basic network, and with reference to fig. 1, the grid removal method of this embodiment includes:
s100, training a basic network by adopting a training image set to obtain a trained basic network;
and S200, carrying out grid removing processing on the image to be subjected to grid removing by using the trained basic network to obtain a grid-removed image.
In this embodiment, the full convolution network based on deep residuals is used as the basic network, which enlarges the receptive field of the convolutions, so that multi-scale information (more of the high-frequency and low-frequency information of the image) is introduced into the grid removal method and the effect of the deep-learning grid removal algorithm is effectively improved. This avoids the dilemma of existing grid removal algorithms, whose grid removal effect is limited either by low processing efficiency caused by a large amount of computation or by a limited information scale, strengthens the application of deep learning algorithms in the field of image grid removal, and has broad value for popularization and application.
In this embodiment, the basic network is built from meta-networks obtained by adding dilated convolutions (also called atrous convolutions or convolution kernel dilation) to a full convolution network, which enlarges the receptive field of the convolutions and introduces multi-scale information to improve the effect of the deep-learning grid removal algorithm. Referring to fig. 2, in this embodiment the basic network comprises a series of meta-networks, and every two adjacent meta-networks are connected by a residual shortcut.
In this embodiment, the meta-network includes a first network architecture and a second network architecture, and any meta-network is one of the two network architectures.
In this embodiment, the first network architecture includes a first convolutional layer, a first ReLU nonlinear layer, a second convolutional layer, and a second ReLU nonlinear layer, which are connected in sequence. Preferably, referring to fig. 3, in this embodiment the first convolutional layer is a 2-dilated 3x3 convolutional layer with dilation factor l = 2 and the second convolutional layer is a conventional 3x3 convolutional layer: the input signal is first convolved by the 2-dilated 3x3 layer, whose receptive field is 7x7 per convolution, then passes through a ReLU nonlinear layer and a conventional 3x3 convolutional layer (receptive field 3x3), and finally passes through a ReLU nonlinear layer for output. In this embodiment, the meta-network is the minimum network structure unit, which makes it easy to change the depth and complexity of the network simply by changing the number of meta-networks when implementing a network structure, in the same spirit as the Inception modules in GoogLeNet.
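The following PyTorch sketch gives one possible reading of the first network architecture (meta-a). It is not the patented implementation: the channel count is an illustrative assumption, and the dilated 3x3 layer is built with a PyTorch dilation of 3 (two zeros inserted between kernel taps) so that its footprint matches the 7x7 receptive field stated above; this reading of the "l = 2" dilation is itself an assumption.

```python
import torch
import torch.nn as nn

class MetaA(nn.Module):
    """meta-a: dilated 3x3 conv -> ReLU -> conventional 3x3 conv -> ReLU."""
    def __init__(self, channels: int = 64, dilation: int = 3):
        super().__init__()
        # padding equal to the dilation keeps the spatial size unchanged for a
        # 3x3 kernel, so adjacent meta-networks can later be added residually
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=dilation, dilation=dilation),  # enlarged (7x7) receptive field
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),  # conventional 3x3
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.block(x)
```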
In this embodiment, the second network architecture includes a hybrid convolutional layer, a concat layer for concatenating the outputs of the hybrid convolutional layer, and a third ReLU nonlinear layer, which are connected in sequence, where the hybrid convolutional layer is composed of the third convolutional layer and the fourth convolutional layer in a certain proportion. Referring to fig. 4, in this embodiment the third convolutional layer is a 2-dilated 3x3 convolution with l = 2, the fourth convolutional layer is a conventional 3x3 convolutional layer, the ratio of the third convolutional layer to the fourth convolutional layer is dilate_ratio, and the value range of dilate_ratio is [0, 0.5]. Preferably, the deeper the network, the smaller the value of dilate_ratio. In this embodiment, dilate_ratio is the ratio of the number of feature maps output by the third convolutional layer to the number of feature maps output by the fourth convolutional layer. Preferably, meta-a (the first network architecture) is used in the shallower layers, so that the overall receptive field of the deeper network is larger, and meta-b (the second network architecture) is used in the deeper layers, with the deeper layers using a smaller proportion of dilated 3x3 convolutions, which helps to enhance the de-gridding effect.
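As a companion sketch (again an assumption-laden illustration rather than the patented code), the second network architecture (meta-b) can be written with the hybrid convolutional layer split between a dilated 3x3 branch and a conventional 3x3 branch, with dilate_ratio taken as the ratio of dilated-branch feature maps to plain-branch feature maps as described above; the outputs are joined by a concat layer and followed by the ReLU.

```python
import torch
import torch.nn as nn

class MetaB(nn.Module):
    """meta-b: hybrid (dilated 3x3 + conventional 3x3) conv -> concat -> ReLU."""
    def __init__(self, channels: int = 64, dilate_ratio: float = 0.25, dilation: int = 3):
        super().__init__()
        # the patent allows dilate_ratio in [0, 0.5]; 0 would leave no dilated maps
        assert 0.0 < dilate_ratio <= 0.5
        # dilate_ratio = (#feature maps of the dilated branch) / (#feature maps of the plain branch)
        dilated_out = max(1, round(channels * dilate_ratio / (1.0 + dilate_ratio)))
        plain_out = channels - dilated_out
        self.dilated = nn.Conv2d(channels, dilated_out, kernel_size=3,
                                 padding=dilation, dilation=dilation)
        self.plain = nn.Conv2d(channels, plain_out, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # concat layer: join the two branch outputs channel-wise
        mixed = torch.cat([self.dilated(x), self.plain(x)], dim=1)
        return self.relu(mixed)
```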
The grid removal method of this embodiment enlarges the receptive field of the convolutions by using dilated convolution, and introduces multi-scale information to improve the effect of the deep-learning grid removal algorithm. Fig. 5 shows the receptive field corresponding to a conventional 3x3 convolution, and fig. 6 shows the receptive field corresponding to the 2-dilated 3x3 convolution with l = 2 in the preferred embodiment of the present invention. Comparing fig. 5 and fig. 6, after dilation the receptive field of the convolution kernel grows from the original 3x3 to 7x7 without increasing the amount of computation beyond that of a 3x3 convolution.
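To show how these pieces could fit together, the sketch below (reusing the MetaA and MetaB modules above) stacks meta-a blocks in the shallower layers and meta-b blocks with a progressively smaller dilate_ratio in the deeper layers, and adds a residual shortcut around each meta-network as one reading of the shortcut connections between adjacent meta-networks shown in fig. 2. The block counts, channel width and the head/tail 3x3 convolutions are illustrative assumptions, not values fixed by the patent.

```python
import torch
import torch.nn as nn

class DeGridNet(nn.Module):
    """Full convolution basic network: head conv -> residual chain of meta-networks -> tail conv."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.head = nn.Conv2d(3, channels, kernel_size=3, padding=1)   # RGB image in
        # shallower layers: meta-a; deeper layers: meta-b with decreasing dilate_ratio
        self.blocks = nn.ModuleList(
            [MetaA(channels) for _ in range(3)] +
            [MetaB(channels, dilate_ratio=r) for r in (0.5, 0.375, 0.25)]
        )
        self.tail = nn.Conv2d(channels, 3, kernel_size=3, padding=1)   # reconstructed image out

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.head(x)
        for block in self.blocks:
            feat = feat + block(feat)  # residual shortcut between adjacent meta-networks
        return self.tail(feat)

# e.g. DeGridNet()(torch.randn(1, 3, 224, 224)) returns a 1x3x224x224 reconstruction
```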
Preferably, before inputting the image to be de-gridded, the grid removal method of this embodiment further includes:
preprocessing the image to be de-gridded so that the size of the preprocessed image meets the preset size requirement.
FIG. 7 shows a schematic diagram of an original image in a preferred embodiment of the present invention; FIG. 8 is a schematic illustration of the image of FIG. 7 after pre-processing.
This embodiment uses a full convolution network based on deep residuals as the basic network, and the input image size of the network is 224x224. The resolution of the grid-watermarked certificate photo is 178x220 or 96x118. During the addition of the grid, a strong ringing effect is produced around the grid lines because of picture compression and the like. In the grid removal process of this embodiment, the image is first expanded to 224x224 by black-border padding (see fig. 8), so that the input size of the network is unified. The corresponding face position in the preprocessed image is marked as a MASK matrix, see the white area shown in fig. 9.
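A minimal sketch of this preprocessing follows, assuming the certificate photo is centered on a black 224x224 canvas and the face region is supplied as a rectangle (in practice it would come from a face detector); the function and parameter names are hypothetical.

```python
import numpy as np

def pad_to_224(img: np.ndarray) -> tuple:
    """Pad an HxWx3 image (e.g. a 178x220 or 96x118 certificate photo) to 224x224 with black borders."""
    h, w = img.shape[:2]
    top, left = (224 - h) // 2, (224 - w) // 2
    canvas = np.zeros((224, 224, 3), dtype=img.dtype)
    canvas[top:top + h, left:left + w] = img
    return canvas, (top, left)  # keep the offset so the face box can be shifted accordingly

def face_mask(face_box) -> np.ndarray:
    """Build the 224x224 MASK matrix: 1 inside the (y, x, height, width) face box, 0 elsewhere."""
    y, x, fh, fw = face_box
    mask = np.zeros((224, 224), dtype=np.float32)
    mask[y:y + fh, x:x + fw] = 1.0
    return mask
```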
Preferably, in this embodiment, a penalty function is introduced in the step of training the basic network with the training image set to obtain the trained basic network. The penalty function is the Euclidean distance (L2 loss) computed after the pixel-wise difference between the image of the preset size reconstructed by the network and the corresponding original image meeting the preset size requirement has been multiplied by the MASK matrix corresponding to the face region of the original image. The penalty function in this embodiment is the objective function optimized during the training of step S100.
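A possible reading of this penalty function in PyTorch is sketched below; the normalization by the mask area is an assumption (the patent only specifies the MASK-weighted Euclidean distance), and the training-step usage at the end is likewise illustrative.

```python
import torch

def masked_l2_penalty(reconstruction: torch.Tensor,
                      original: torch.Tensor,
                      mask: torch.Tensor) -> torch.Tensor:
    """reconstruction, original: Nx3x224x224; mask: Nx1x224x224, 1 on the face region."""
    diff = (reconstruction - original) * mask             # multiply the pixel-wise difference by the MASK matrix
    return (diff ** 2).sum() / mask.sum().clamp(min=1.0)  # MASK-weighted squared Euclidean (L2) distance

# illustrative training step (network, optimizer and data loading omitted):
#   loss = masked_l2_penalty(net(grid_image), clean_image, mask)
#   loss.backward(); optimizer.step()
```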
The grid removal method of this embodiment is compared with a conventional FCN (full convolution network, using only plain 3x3 convolutions) grid removal method, with the PSNR (peak signal-to-noise ratio) between the test image and the reconstructed image as the judgment standard, as shown in the following table:
[Table: PSNR comparison between the grid removal method of this embodiment and the conventional FCN baseline; reproduced as an image in the original publication.]
as can be seen from the above table, the grid removal method of this embodiment has the highest PSNR and the best de-gridding effect. Although the evaluation of de-gridding quality is largely subjective, the PSNR between the de-gridded image obtained in this example and the original grid-free certificate photo is improved by about 3 dB (27 dB versus 24 dB) compared with the baseline method.
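PSNR, used above as the judgment standard, can be computed as follows for 8-bit images (a standard formula, not code from the patent; the table values are not reproduced).

```python
import numpy as np

def psnr(reference: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two images of the same shape."""
    mse = np.mean((reference.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```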
According to another aspect of the present invention, there is also provided a grid removal device based on a deep residual network. Referring to fig. 10, the grid removal device of this embodiment includes:
a basic network unit 100, which uses a full convolution network based on a depth residual as a basic network;
a training unit 200, configured to train a basic network by using a training image set to obtain a trained basic network;
and a grid removal unit 300, configured to perform grid removal processing on the image to be de-gridded by using the trained basic network to obtain a de-gridded image.
Preferably, the basic network of this embodiment is built from meta-networks obtained by adding dilated convolutions to a full convolution network; the basic network includes a series of meta-networks, and every two adjacent meta-networks are connected by a residual shortcut.
It should be noted that the grid removal device based on the deep residual network in this embodiment is used to execute the grid removal method of the foregoing embodiment; for the specific implementation process, refer to the description of the method in the foregoing embodiment.
According to another aspect of the present invention, there is also provided an image grid removal apparatus based on a deep residual network, including a processor, where the processor is configured to execute a program, and the program, when executed, performs the grid removal method based on the deep residual network according to an embodiment of the present invention.
According to another aspect of the present invention, a storage medium is further provided, where the storage medium includes a stored program, and the program, when executed, controls a device on which the storage medium is located to perform the grid removal method based on the deep residual network according to the embodiment of the present invention.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different than presented herein.
The functions described in the method of the present embodiment, if implemented in the form of software functional units and sold or used as independent products, may be stored in one or more storage media readable by a computing device. Based on such understanding, part of the contribution of the embodiments of the present invention to the prior art or part of the technical solution may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computing device (which may be a personal computer, a server, a mobile computing device, a network device, or the like) to execute all or part of the steps of the method described in the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (4)

1. A grid removal method based on a deep residual network, characterized in that the grid removal method uses a full convolution network based on deep residuals as a basic network and comprises the following steps:
training the basic network by adopting a training image set to obtain a trained basic network;
performing grid removal processing on the image to be de-gridded by using the trained basic network to obtain a de-gridded image;
the basic network is built from meta-networks obtained by adding dilated convolutions to a full convolution network; the basic network comprises a series of meta-networks, and every two adjacent meta-networks are connected by a residual shortcut;
the meta-network comprises a first network architecture and a second network architecture, wherein any meta-network is one of the two network architectures; the first network architecture comprises a first convolutional layer, a first ReLU nonlinear layer, a second convolutional layer and a second ReLU nonlinear layer which are connected in sequence, the first convolutional layer is a 2-dilated 3x3 convolution with dilation factor l = 2, and the second convolutional layer is a conventional 3x3 convolutional layer;
the second network architecture comprises a hybrid convolutional layer, a concat layer and a third ReLU nonlinear layer which are connected in sequence, wherein the concat layer concatenates the outputs of the hybrid convolutional layer, the hybrid convolutional layer is composed of a third convolutional layer and a fourth convolutional layer in a certain proportion, the third convolutional layer is a 2-dilated 3x3 convolution with l = 2, the fourth convolutional layer is a conventional 3x3 convolutional layer, the ratio of the third convolutional layer to the fourth convolutional layer is dilate_ratio, and the value range of dilate_ratio is [0, 0.5];
the meta-network uses the first network architecture in the shallow layers and the second network architecture in the deep layers, and the deeper the layer, the smaller the proportion of dilated 3x3 convolutions used;
before inputting the image to be de-gridded, the grid removal method further comprises:
preprocessing the image to be de-gridded so that the size of the preprocessed image meets a preset size requirement, wherein the input image size of the basic network is 224x224, the image is expanded to 224x224 by black-border padding, and the corresponding face position in the preprocessed image is marked as a MASK matrix;
and a penalty function is introduced in the step of training the basic network with the training image set to obtain the trained basic network, wherein the penalty function is the Euclidean distance computed after the pixel-wise difference between the image of the preset size reconstructed by the network and the corresponding original image meeting the preset size requirement has been multiplied by the MASK matrix corresponding to the face region of the original image.
2. A grid removal device based on a deep residual network, for performing the grid removal method of claim 1, comprising:
a basic network unit, which uses a full convolution network based on deep residuals as a basic network;
the training unit is used for training the basic network by adopting a training image set to obtain a trained basic network;
a grid removal unit, configured to perform grid removal processing on the image to be de-gridded by using the trained basic network to obtain a de-gridded image;
the basic network is built from meta-networks obtained by adding dilated convolutions to a full convolution network; the basic network comprises a series of meta-networks, and every two adjacent meta-networks are connected by a residual shortcut;
the meta-network comprises a first network architecture and a second network architecture, wherein any meta-network is one of the two network architectures; the first network architecture comprises a first convolutional layer, a first ReLU nonlinear layer, a second convolutional layer and a second ReLU nonlinear layer which are connected in sequence, the first convolutional layer is a 2-dilated 3x3 convolution with dilation factor l = 2, and the second convolutional layer is a conventional 3x3 convolutional layer;
the second network architecture comprises a hybrid convolutional layer, a concat layer and a third ReLU nonlinear layer which are connected in sequence, wherein the concat layer concatenates the outputs of the hybrid convolutional layer, the hybrid convolutional layer is composed of a third convolutional layer and a fourth convolutional layer in a certain proportion, the third convolutional layer is a 2-dilated 3x3 convolution with l = 2, the fourth convolutional layer is a conventional 3x3 convolutional layer, the ratio of the third convolutional layer to the fourth convolutional layer is dilate_ratio, and the value range of dilate_ratio is [0, 0.5];
the meta-network uses the first network architecture in the shallow layers and the second network architecture in the deep layers, and the deeper the layer, the smaller the proportion of dilated 3x3 convolutions used;
before inputting the image to be de-gridded, the image to be de-gridded is preprocessed so that the size of the preprocessed image meets a preset size requirement, wherein the input image size of the basic network is 224x224, the image is expanded to 224x224 by black-border padding, and the corresponding face position in the preprocessed image is marked as a MASK matrix;
the training unit introduces a penalty function, wherein the penalty function is the Euclidean distance computed after the pixel-wise difference between the image of the preset size reconstructed by the network and the corresponding original image meeting the preset size requirement has been multiplied by the MASK matrix corresponding to the face region of the original image.
3. A grid removal apparatus based on a deep residual network, comprising a processor for executing a program, wherein the program, when executed, performs the grid removal method based on the deep residual network of claim 1.
4. A storage medium comprising a stored program, wherein the program, when executed, controls a device on which the storage medium resides to perform the grid removal method based on the deep residual network of claim 1.
CN201711458971.0A 2017-12-28 2017-12-28 Grid removing method, device and equipment based on depth residual error network and storage medium Active CN108230269B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711458971.0A CN108230269B (en) 2017-12-28 2017-12-28 Grid removing method, device and equipment based on depth residual error network and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711458971.0A CN108230269B (en) 2017-12-28 2017-12-28 Grid removing method, device and equipment based on depth residual error network and storage medium

Publications (2)

Publication Number Publication Date
CN108230269A CN108230269A (en) 2018-06-29
CN108230269B true CN108230269B (en) 2021-02-09

Family

ID=62646535

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711458971.0A Active CN108230269B (en) 2017-12-28 2017-12-28 Grid removing method, device and equipment based on depth residual error network and storage medium

Country Status (1)

Country Link
CN (1) CN108230269B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109241982B (en) * 2018-09-06 2021-01-29 广西师范大学 Target detection method based on deep and shallow layer convolutional neural network
CN109472733A (en) * 2018-10-22 2019-03-15 天津大学 Image latent writing analysis method based on convolutional neural networks
CN111062854B (en) * 2019-12-26 2023-08-25 Oppo广东移动通信有限公司 Method, device, terminal and storage medium for detecting watermark
CN112884666B (en) * 2021-02-02 2024-03-19 杭州海康慧影科技有限公司 Image processing method, device and computer storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107424131A (en) * 2017-07-14 2017-12-01 北京智慧眼科技股份有限公司 Image based on deep learning removes grid method and device

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107424131A (en) * 2017-07-14 2017-12-01 北京智慧眼科技股份有限公司 Image based on deep learning removes grid method and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Dilated Residual Networks; Fisher Yu; 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2017-11-09; pp. 636-644 *
Multi-task ConvNet for Blind Face Inpainting with Application to Face Verification; Shu Zhang; 2016 International Conference on Biometrics (ICB); 2016-08-25 *

Also Published As

Publication number Publication date
CN108230269A (en) 2018-06-29

Similar Documents

Publication Publication Date Title
CN108230269B (en) Grid removing method, device and equipment based on depth residual error network and storage medium
CN112348783B (en) Image-based person identification method and device and computer-readable storage medium
US20220092882A1 (en) Living body detection method based on facial recognition, and electronic device and storage medium
Zhang et al. The application of visual saliency models in objective image quality assessment: A statistical evaluation
Jiang et al. Unsupervised decomposition and correction network for low-light image enhancement
US20200279358A1 (en) Method, device, and system for testing an image
CN111275034B (en) Method, device, equipment and storage medium for extracting text region from image
CN103186894B (en) A kind of multi-focus image fusing method of self-adaptation piecemeal
CN111507909A (en) Method and device for clearing fog image and storage medium
JP2015065654A (en) Color document image segmentation using automatic recovery and binarization
CN104794685A (en) Image denoising realization method and device
CN111951172A (en) Image optimization method, device, equipment and storage medium
CN113781510A (en) Edge detection method and device and electronic equipment
JP2020197915A (en) Image processing device, image processing method, and program
CN115131797A (en) Scene text detection method based on feature enhancement pyramid network
Arulkumar et al. Super resolution and demosaicing based self learning adaptive dictionary image denoising framework
CN111260655A (en) Image generation method and device based on deep neural network model
US10521918B2 (en) Method and device for filtering texture, using patch shift
CN114677722A (en) Multi-supervision human face in-vivo detection method integrating multi-scale features
Cheng et al. Image quality analysis of a novel histogram equalization method for image contrast enhancement
CN115965844B (en) Multi-focus image fusion method based on visual saliency priori knowledge
CN109509237B (en) Filter processing method and device and electronic equipment
CN116245765A (en) Image denoising method and system based on enhanced depth expansion convolutional neural network
CN115809966A (en) Low-illumination image enhancement method and system
CN114387315A (en) Image processing model training method, image processing device, image processing equipment and image processing medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100097 Beijing Haidian District Kunming Hunan Road 51 C block two floor 207.

Applicant after: BEIJING ATHENA EYES SCIENCE & TECHNOLOGY CO.,LTD.

Address before: 100193 4, 403, block A, 14 building, 10 East North Road, Haidian District, Beijing.

Applicant before: BEIJING ATHENA EYES SCIENCE & TECHNOLOGY CO.,LTD.

CB02 Change of applicant information

Address after: 410205 14 Changsha Zhongdian Software Park Phase I, 39 Jianshan Road, Changsha High-tech Development Zone, Yuelu District, Changsha City, Hunan Province

Applicant after: Wisdom Eye Technology Co.,Ltd.

Address before: 100097 2nd Floor 207, Block C, 51 Hunan Road, Kunming, Haidian District, Beijing

Applicant before: BEIJING ATHENA EYES SCIENCE & TECHNOLOGY CO.,LTD.

GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Method, device, equipment and storage medium for grid removal based on deep residual network

Effective date of registration: 20221205

Granted publication date: 20210209

Pledgee: Agricultural Bank of China Limited Hunan Xiangjiang New Area Branch

Pledgor: Wisdom Eye Technology Co.,Ltd.

Registration number: Y2022430000107

PC01 Cancellation of the registration of the contract for pledge of patent right

Date of cancellation: 20231220

Granted publication date: 20210209

Pledgee: Agricultural Bank of China Limited Hunan Xiangjiang New Area Branch

Pledgor: Wisdom Eye Technology Co.,Ltd.

Registration number: Y2022430000107

CP03 Change of name, title or address

Address after: No. 205, Building B1, Huigu Science and Technology Industrial Park, No. 336 Bachelor Road, Bachelor Street, Yuelu District, Changsha City, Hunan Province, 410000

Patentee after: Wisdom Eye Technology Co.,Ltd.

Address before: 410205 building 14, phase I, Changsha Zhongdian Software Park, No. 39, Jianshan Road, Changsha high tech Development Zone, Yuelu District, Changsha City, Hunan Province

Patentee before: Wisdom Eye Technology Co.,Ltd.