CN109313795B - Method and apparatus for super-resolution processing - Google Patents

Method and apparatus for super-resolution processing

Info

Publication number
CN109313795B
Authority
CN
China
Prior art date
Legal status
Active
Application number
CN201680084409.3A
Other languages
Chinese (zh)
Other versions
CN109313795A (en)
Inventor
汤晓鸥
朱施展
李�诚
吕健勤
Current Assignee
Sensetime Group Ltd
Original Assignee
Sensetime Group Ltd
Priority date
Filing date
Publication date
Application filed by Sensetime Group Ltd
Publication of CN109313795A
Application granted
Publication of CN109313795B

Classifications

    • G06T5/73
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration by the use of local operators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/169Holistic features and representations, i.e. based on the facial image taken as a whole
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Abstract

A method and apparatus for super-resolution processing are disclosed. According to one embodiment, a method for super-resolution processing includes: estimating a dense correspondence field based on a first image and a trained model; performing super-resolution processing over a bi-directional network based on the first image, the estimated dense correspondence field, and the trained model to obtain a second image; and updating the first image with the second image, wherein the steps of estimating, performing, and updating are repeated until the obtained second image has a desired resolution, or until these steps have been performed a predetermined number of times.

Description

Method and apparatus for super-resolution processing
Technical Field
The present disclosure relates to image processing, and in particular, to a method and apparatus for super-resolution processing (face hallucination).
Background
There is increasing interest in detecting small face images at low resolution, for example down to 10 pixels in height. At the same time, face analysis techniques such as face alignment and verification are advancing rapidly. However, most prior-art techniques perform poorly on low-resolution face images, since such images carry little information by themselves, and degradation from downsampling and blurring interferes with the face analysis process. Super-resolution processing aims to improve the resolution of face images and offers a practical means to improve low-resolution face processing and analysis, for example in person identification from surveillance video and in face image enhancement.
Disclosure of Invention
In one aspect of the present application, there is provided a method for super-resolution processing, including: estimating a dense correspondence field based on a first image and a trained model; performing super-resolution processing over a bi-directional network based on the first image, the estimated dense correspondence field, and the trained model to obtain a second image; and updating the first image with the second image, wherein the steps of estimating, performing, and updating are repeated until the obtained second image has a desired resolution, or until these steps have been performed a predetermined number of times.
According to another aspect of the present application, there is provided an apparatus for super-resolution processing, including: an estimation unit for estimating a dense correspondence field based on a first image and a trained model; and a super-resolution processing unit for performing super-resolution processing through a bi-directional network based on the first image, the estimated dense correspondence field, and the trained model to obtain a second image; wherein the first image is iteratively updated with the second image, and the estimation unit and the super-resolution processing unit iterate a predetermined number of times or until the obtained second image has the desired resolution.
In another aspect of the present application, there is provided an apparatus for super-resolution processing, comprising a processor and a memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the above-described method for super-resolution processing.
In another aspect of the present application, there is provided a non-transitory storage medium containing computer-readable instructions which, when executed by a processor, cause the processor to perform the above-described method for super-resolution processing.
Drawings
Fig. 1 is a flowchart of a method for super-resolution processing according to an embodiment of the present disclosure.
Fig. 2 illustrates an apparatus for super-resolution processing according to an embodiment of the present disclosure.
Fig. 3 shows a flow chart of a training process of an estimation unit according to an embodiment of the application.
Fig. 4 shows a flow chart of a test procedure of an estimation unit according to an embodiment of the application.
Fig. 5 shows a flowchart of a training process of a super-resolution processing unit according to an embodiment of the present application.
Fig. 6 shows a flowchart of a test procedure of a super-resolution processing unit according to an embodiment of the present application.
FIG. 7 is a schematic block diagram of an embodiment of a computer device provided by the present invention.
Detailed Description
According to one embodiment, a method for super-resolution processing is provided. Fig. 1 is a flowchart of a method 100 for super-resolution processing according to an embodiment of the present disclosure. According to another embodiment, an apparatus 200 for super-resolution processing is provided. Fig. 2 illustrates the apparatus 200 according to an embodiment of the present disclosure. As shown in fig. 2, the apparatus 200 may include an estimation unit 201 and a super-resolution processing unit 202.
As shown in fig. 1, in step S101, a dense correspondence field is estimated by the estimation unit 201 based on the input first image 10 and parameters from the trained model 20. The first image input to the estimation unit may be a face image of low resolution. The dense correspondence field indicates the correspondence, or mapping relationship, between the first image and the warped image, and represents the warping of each pixel from the first image to the warped image. The trained model contains various parameters that can be used to estimate the dense correspondence field.
In step S102, super-resolution processing is performed by the super-resolution processing unit 202 based on the first image 10 and the estimated dense correspondence field, to obtain the second image 30. The second image obtained by super-resolving the first image generally has a higher resolution than the first image. The super-resolution processing unit 202 is a bi-directional network comprising a first branch 2021, a normal branch for super-resolution processing, and a second branch 2022, a high-frequency branch. The processing in the normal branch is similar to conventional super-resolution processing. The high-frequency branch takes the estimated dense correspondence field and parameters from the trained model 20 into account in addition to the input image 10. The results from the two branches are combined by the gate network 2023 to obtain the second image 30.
In step S103, the first image is updated with the second image such that the second image serves as an input to the estimation unit 201. Next, steps S101 to S103 are repeatedly performed. For example, these steps may be performed iteratively until the second image obtained has the desired image resolution. Alternatively, these steps may be performed a predefined number of times.
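To make the control flow concrete, the sketch below expresses steps S101 to S103 as a loop. This is a minimal sketch, assuming two hypothetical callables, estimate_field (the estimation unit) and bi_network (the bi-directional super-resolution unit); the names, signatures, and stopping threshold are illustrative and not taken from the patent.

    import numpy as np

    def super_resolve(image: np.ndarray, estimate_field, bi_network,
                      target_height: int = 128, max_iters: int = 4) -> np.ndarray:
        """Alternate dense-field estimation (S101) and bi-directional
        super-resolution (S102), feeding each output back in (S103)."""
        p = None  # the estimation unit treats None as the zero vector p_0
        for k in range(1, max_iters + 1):
            field, p = estimate_field(image, p, k)   # step S101
            image = bi_network(image, field, k)      # step S102: higher-resolution image
            if image.shape[0] >= target_height:      # stop at the desired resolution,
                break                                # or after a fixed number of rounds
        return image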
For example, the face image may be represented as a matrix I, and each pixel in the image as x, with coordinates (x, y). The average face template may be denoted as M, comprising a plurality of pixels z. The dense correspondence field indicates the mapping of a pixel z in the average face template M to a pixel x in the face image I, which may be represented by a warping function W(z) as x = W(z). It should be noted that the pixels of both images are considered to lie in the 2D face domain. The warping function W(z) is determined by a deformation coefficient p and a warping basis B(z), and may be expressed as

W(z) = z + B(z)p    (1)

where p is the deformation coefficient vector and B(z) is the warping basis.
According to one embodiment, the warping basis B(z) is predefined and shared by all samples, so the warping function is in effect controlled by the per-sample deformation coefficient p. For the initial input image, p = 0, so W(z) = z, which indicates that the dense correspondence field starts as the average face template.
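As a tiny worked example of equation (1), take one template pixel z, an assumed 2x3 warping basis B(z), and a 3-dimensional deformation coefficient p; all numbers are illustrative, not from the patent.

    import numpy as np

    z = np.array([12.0, 30.0])           # a pixel of the average face template M
    B = np.array([[0.5, 0.0, 1.0],       # assumed warping basis B(z), shared by samples
                  [0.0, 0.5, -1.0]])
    p = np.array([2.0, 4.0, 0.1])        # per-sample deformation coefficient

    x = z + B @ p                        # W(z) = z + B(z)p  ->  [13.1, 31.9]
    assert np.allclose(z + B @ np.zeros(3), z)   # with p = 0, W(z) = z

With p = 0 the warp is the identity, matching the statement that the field starts as the average face template.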
Taking the number of iterations as K (k running from 1 to K), all symbols are denoted with an index k indicating the iteration. Among the symbols I_k, W_k, B_k and M_k, a larger k indicates a higher resolution, and the same k indicates the same resolution. The whole process starts from I_0 and p_0, where I_0 represents the input low-resolution face image and p_0 is a zero vector representing the deformation coefficient of the average face template. The final super-resolved face image output is I_K. At each iteration, the deformation coefficient p_k, the warping function W_k(z) and the second image I_k are updated. For example, the deformation coefficient p_k and the warping function W_k(z) are updated by:

p_k = p_{k-1} + f_k(I_{k-1}; p_{k-1}) = p_{k-1} + R_k(φ − φ̄),    W_k(z) = z + B_k p_k    (2)

where f_k is a Gauss-Newton descent regressor learned and stored in the trained model for predicting the dense correspondence field coefficients; f_k can equivalently be represented by a Gauss-Newton steepest descent regression matrix R_k obtained by training. In equation (2), φ is a shape-indexed feature that concatenates the local appearance around all L key feature points (landmarks), and φ̄ is its average over all training samples.
In one embodiment, the dense correspondence field coefficients are estimated based on every pixel in the image. Alternatively, according to another embodiment, the coefficients are estimated based on feature points in the image, since using a sparse set of facial key feature points is more robust and accurate at low resolution. In this case, a key feature point reference S_k(l) is also considered in the estimation. Specifically, two sets of deformation bases are obtained: the dense-field deformation basis B_k(z) and the key feature point reference S_k(l), where l is the feature point index. The dense-field and feature-point bases are related one-to-one, i.e., B_k(z) and S_k(l) share the same deformation coefficient p_k:

x_k(l) = s̄_k(l) + S_k(l) p_k    (3)

where x_k(l) denotes the coordinates of the l-th feature point and s̄_k(l) denotes its mean location.
For super-resolution processing, the normal branch conservatively recovers texture details that can be inferred directly from the low-resolution input, similar to typical super-resolution processing. In the current cascade, the high-frequency branch additionally super-resolves high-frequency detail on the face warped by the estimated dense correspondence field. With the help of the high-frequency prior, this branch can recover and synthesize texture details that are not visible in the low-resolution input image. The fusion of the results from the two branches is learned by a pixel-wise gate network.
According to one embodiment, the first image is upsampled and then input to the super-resolution processing unit. In particular, the upsampled image is fed to the normal branch and the high-frequency branch simultaneously. In the normal branch, the upsampled image is processed adaptively, for example by bi-cubic interpolation. In the high-frequency branch, the estimated dense correspondence field is an additional input, and the upsampled image is processed based on it. The results produced by the two branches are combined by the gate network to obtain the second image. It should be noted that the processing in the normal branch is not limited to bi-cubic interpolation, but may be any process suitable for super-resolution.
For example, for the k-th iteration, the image I_k is obtained by:

I_k = ↑I_{k-1} + g_k(↑I_{k-1}; W_k(z))    (4)

where g_k denotes the super-resolution bi-directional network learned and stored in the trained model. The coefficients g_k are obtained by training.
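The text describes the gated fusion but not its exact formula; a per-pixel soft gate blending the two branch outputs, sketched below under that assumption, is one consistent reading. The three callables (normal_branch, hf_branch, gate_net) are hypothetical stand-ins for the trained sub-networks.

    def bi_network_step(up_image, field, normal_branch, hf_branch, gate_net):
        """One pass of equation (4): upsampled input plus a gated residual."""
        r_normal = normal_branch(up_image)        # detail recoverable from the input alone
        r_hf = hf_branch(up_image, field)         # detail synthesized via the warped prior
        g = gate_net(up_image, r_normal, r_hf)    # per-pixel weights in [0, 1]
        return up_image + g * r_normal + (1.0 - g) * r_hf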
It should be noted that both the estimation unit and the super-resolution processing unit may have a test mode and a training mode. The method 100 in fig. 1 illustrates the operation of the estimation unit and the super-resolution processing unit in the test mode. When running in the training mode, the estimation unit and the super-resolution processing unit may perform a training process to obtain the parameters required in the test mode and store them in the trained model. Herein, as an example, an estimation unit and a super-resolution processing unit having both a test mode and a training mode are described. Alternatively, the training process and the test process may be performed by separate devices or separate units.
In the training process, two training sets are provided: a super-resolution training set and a correspondence field training set. Each of the two training sets contains a plurality of images, together with downsampled versions of each image at different scales. The ground-truth values of the deformation coefficient p are also included in the correspondence field training set. Compared with the test process described above, the images input during training have high resolution.
Fig. 3 shows a flow chart of a training process 300 of an estimation unit according to an embodiment of the application. As shown, in step S301, the dense basis B_k(z), the key feature point reference S_k(l), and the appearance feature vector Φ_k are obtained. These parameters are predefined for the subsequent steps. Meanwhile, the dense basis B_k(z) and the key feature point reference S_k(l) are stored in the trained model for subsequent use. In step S302, the average project-out Jacobian J_k is obtained, for example by minimizing the following loss:

J_k = argmin_J Σ_i ||(φ_i − φ̄) − J Δp_i||²    (5)

where Δp_i denotes the ground-truth coefficient increment of the i-th training sample, φ is the shape-indexed feature that concatenates the local appearance around all L feature points, and φ̄ is its average over all training samples.

In step S303, the Gauss-Newton steepest descent regression matrix R_k is calculated by the following formula:

R_k = (J_k^T J_k)^{-1} J_k^T    (6)

i.e., the project-out Hessian J_k^T J_k is constructed from the Jacobian J_k obtained above and inverted to yield the Gauss-Newton steepest descent regression matrix R_k.
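Under the construction in equation (6), R_k follows from J_k in a few lines of NumPy; the dimensions (512 feature dimensions, 20 coefficients) are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    J_k = rng.standard_normal((512, 20))   # average project-out Jacobian (F x N)
    H_k = J_k.T @ J_k                      # project-out Hessian (N x N)
    R_k = np.linalg.solve(H_k, J_k.T)      # R_k = (J_k^T J_k)^{-1} J_k^T, shape (N, F)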
Optionally, the process 300 may also include steps S304 and S305. At S304, the deformation coefficients for both the correspondence field training set and the super-resolution training set are updated. In step S305, the dense correspondence field for each position z of the super-resolution training set is calculated. The deformation coefficients and dense correspondence fields obtained at steps S304 and S305 may be used in the subsequent training process.
Fig. 4 shows a flow chart of a test procedure 400 of an estimation unit according to an embodiment of the application. As shown, in step S401, the position of each key feature point is obtained from the face image input to the estimation unit. In the first iteration, the input image is the original low-resolution image. In a subsequent iteration, e.g. the k-th, the inputs are the image and the deformation coefficient obtained in the (k-1)-th iteration. The location of each key feature point in the input image is obtained based on the key feature point reference stored in the trained model.

In step S402, SIFT features around each key feature point position are extracted; the SIFT feature serves as the shape-indexed feature described above. In step S403, the features from all key feature points are concatenated into the appearance feature vector. In step S404, the deformation coefficient is updated by regression according to equation (2). In step S405, the dense correspondence field for each position z is calculated.
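Steps S401 to S405 can be strung together as one test pass, sketched below. The helper sift_at and the dictionary layout of the trained model are hypothetical; landmark coordinates are kept as (x, y) rows for clarity.

    import numpy as np

    def estimation_test_pass(image, p_prev, model, k, sift_at):
        """S401-S405: landmarks -> SIFT -> appearance vector -> regress p_k -> field."""
        landmarks = (model["s_bar"][k] + model["S"][k] @ p_prev).reshape(-1, 2)  # S401
        feats = [sift_at(image, xy) for xy in landmarks]                         # S402
        phi = np.concatenate(feats)                                              # S403
        p_k = p_prev + model["R"][k] @ (phi - model["phi_bar"][k])               # S404, eq. (2)
        field = (model["z"] + model["B"][k] @ p_k).reshape(-1, 2)                # S405, W_k(z)
        return field, p_k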
Fig. 5 shows a flow chart of a training process 500 of a super-resolution processing unit according to an embodiment of the present application. As shown, in step S501, the images of the training set are upsampled by bi-cubic interpolation. In step S502, the warped high-frequency prior E_{W_k,H} is obtained from the dense correspondence field. In step S503, the deep bi-directional network is trained in three steps: pre-training the normal sub-network, pre-training the high-frequency sub-network, and tuning the entire bi-directional network end-to-end. In this step, the bi-directional network coefficients may be stored in the trained model. Next, in step S504, a forward pass through the bi-directional network computes predicted images for both the super-resolution training set and the estimation training set.
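The three-stage schedule of step S503 amounts to the skeleton below; pretrain and finetune_end_to_end are hypothetical helpers standing in for ordinary gradient-based training.

    def train_bi_network(normal_net, hf_net, gate_net, train_set,
                         pretrain, finetune_end_to_end):
        pretrain(normal_net, train_set)      # stage 1: pre-train the normal sub-network
        pretrain(hf_net, train_set)          # stage 2: pre-train the high-frequency sub-network
        finetune_end_to_end([normal_net, hf_net, gate_net], train_set)  # stage 3: end-to-end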
Fig. 6 shows a flowchart of a test procedure 600 of a super-resolution processing unit according to an embodiment of the present application. As shown, in step S601, the input image I_{k-1} is upsampled by bi-cubic interpolation to obtain the upsampled image ↑I_{k-1}. In step S602, the warped high-frequency prior E_{W_k,H} is obtained from the dense correspondence field. In step S603, the learned bi-directional network coefficients g_k are used to combine the two inputs ↑I_{k-1} and E_{W_k,H} in a forward pass through the deep bi-directional network to obtain the image I_k.
As can be understood from the above description, the two tasks, i.e., high-level face correspondence estimation and low-level super-resolution processing, are complementary and can be refined alternately under each other's guidance through a cascaded framework of task alternation. Experiments have been performed and improved results have been achieved.
Exemplary algorithms for training and testing according to the present application are listed below. Algorithm 1 is an exemplary training algorithm for learning parameters by a device according to embodiments of the present application. Algorithm 2 is an exemplary test algorithm for super-resolution processing of low-resolution faces according to an embodiment of the present application.
[Algorithm 1 (training) and Algorithm 2 (testing) are rendered as images in the original publication and are not reproduced here.]
FIG. 7 is a schematic block diagram of an embodiment of a computer device provided by the present invention.
Referring to fig. 7, a computer apparatus may be used to implement the super-resolution processing method provided in the above-described embodiments. Specifically, computer devices may vary widely in configuration or performance, and may include one or more processors (e.g., central processing units (CPUs)) 710 and memory 720. The memory 720 may be volatile or non-volatile memory. One or more programs may be stored in the memory 720, and each program may comprise a series of instruction operations for the computer device. The processor 710 may communicate with the memory 720 and execute the series of instruction operations in the memory 720 on the computer device. In addition, data for one or more operating systems, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc., may be stored in the memory 720. The computer device may also include one or more power supplies 730, one or more wired or wireless network interfaces 740, one or more input/output interfaces 750, and the like.
The methods and apparatuses according to the present invention described above may be implemented in hardware or firmware, or as software or computer code that may be stored in a recording medium (e.g., a CD, ROM, RAM, floppy disk, hard disk, or magneto-optical disk). They may also be implemented as computer code initially stored in a remote recording medium or a non-transitory machine-readable medium and downloaded over a network to be stored in a local recording medium, so that the methods described herein can be carried out by such software stored on a recording medium in a general-purpose computer, a special-purpose processor, or programmable or special-purpose hardware (e.g., an ASIC or FPGA). It is understood that a computer, processor, microprocessor, controller, or programmable hardware contains storage components (e.g., RAM, ROM, flash memory, etc.) capable of storing or receiving software or computer code, and that when the software or computer code is accessed and executed by the computer, processor, or hardware, the processing methods described herein are implemented. Further, when a general-purpose computer accesses code for implementing the processes shown herein, execution of the code transforms the general-purpose computer into a special-purpose computer for performing the processes shown herein.
The foregoing description covers only particular embodiments of the invention, and the scope of the invention is not limited thereto. Any change or substitution that can readily be conceived by one of ordinary skill in the art within the technical scope of the present disclosure shall fall within the protection scope of the present invention. Therefore, the protection scope of the invention shall be subject to the protection scope of the claims.

Claims (16)

1. A method for super-resolution processing, comprising:
estimating a dense correspondence field based on a first image and a trained model, wherein the dense correspondence field represents a warped mapping from an average face image to the first image, and the trained model stores parameters for estimating the dense correspondence field;
performing super-resolution processing over a bi-directional network based on the first image, the estimated dense correspondence field, and the trained model to obtain a second image, the second image having a resolution higher than a resolution of the first image; and
updating the first image with the second image,
wherein the steps of estimating, performing, and updating are repeated until the obtained second image has a desired resolution, or until the steps of estimating, performing, and updating are performed a predetermined number of times.
2. The method of claim 1, wherein the bidirectional network comprises a first branch and a second branch, and the performing step comprises:
performing super-resolution processing by the first branch based on the first image to obtain a first result;
performing super-resolution processing by the second branch based on the first image, the estimated dense correspondence field, and the trained model to obtain a second result; and
merging the first result and the second result to obtain the second image.
3. The method of claim 1, wherein the trained model stores a dense basis, a key feature point reference, a Gauss-Newton descent regressor used to estimate the dense correspondence field, and bi-directional network coefficients for super-resolution processing, wherein the dense basis and the key feature point reference are predefined, and the Gauss-Newton descent regressor and the bi-directional network coefficients are learned by training.
4. The method of claim 1, wherein the estimated dense correspondence field comprises a warping function W(z) and a deformation coefficient p for mapping a pixel z in an average face image to a pixel x in the first image, wherein x = W(z) = z + Bp, and B is a predefined dense basis.
5. The method of claim 4, wherein the deformation coefficient p and the warping function W(z) are updated iteratively while the steps of estimating, performing, and updating are repeated.
6. The method of claim 4, wherein, for the k-th iteration, the first image is represented as I_{k-1}, the warping function is represented as W_k(z), and the second image, represented as I_k, is obtained by the following formula:
I_k = ↑I_{k-1} + g_k(↑I_{k-1}; W_k(z))
wherein ↑I_{k-1} is the upsampled image I_{k-1}, and g_k denotes the bi-directional network coefficients for the k-th iteration obtained from the trained model.
7. The method of claim 4, wherein, for the k-th iteration, the first image is represented as I_{k-1}, the deformation coefficient is represented as p_k, and the warping function W_k(z) is obtained by:
p_k = p_{k-1} + f_k(I_{k-1}; p_{k-1})
W_k(z) = z + B_k p_k
wherein p_{k-1} is the deformation coefficient obtained in the previous iteration, f_k is a Gauss-Newton descent regressor obtained from the trained model, and B_k is a predefined dense basis for the k-th iteration obtained from the trained model.
8. An apparatus for super-resolution processing, comprising:
an estimation unit for estimating a dense correspondence field based on a first image and a trained model, wherein the dense correspondence field represents a warped mapping from an average face image to the first image, and the trained model stores parameters for estimating the dense correspondence field; and
a super-resolution processing unit for performing super-resolution processing through a bi-directional network based on the first image, the estimated dense correspondence field, and the trained model to obtain a second image having a resolution higher than that of the first image;
wherein the first image is iteratively updated with the second image, and the estimation unit and the super-resolution processing unit perform an iterative operation for a predetermined number of times or until the obtained second image has a desired resolution.
9. The apparatus of claim 8, wherein the super-resolution processing unit comprises:
a first branch for performing super-resolution processing based on the first image to obtain a first result;
a second branch for performing super-resolution processing based on the first image, the estimated dense correspondence field, and the trained model to obtain a second result; and
a gate network to merge the first result and the second result to obtain the second image.
10. The apparatus of claim 8, wherein the trained model stores a dense basis, a key feature point reference, a Gauss-Newton descent regressor for estimating the dense correspondence field, and bi-directional network coefficients for super-resolution processing, wherein the dense basis and the key feature point reference are predefined, and the Gauss-Newton descent regressor and the bi-directional network coefficients are learned by training.
11. The apparatus of claim 8, wherein the estimated dense correspondence field comprises a warping function W(z) and a deformation coefficient p for mapping a pixel z in an average face image to a pixel x in the first image, wherein x = W(z) = z + Bp, and B is a predefined dense basis.
12. The apparatus of claim 11, wherein the deformation coefficient p and the warping function W(z) are updated iteratively for each iteration.
13. The apparatus of claim 11, wherein, for the k-th iteration, the first image is represented as I_{k-1}, the warping function is represented as W_k(z), and the second image obtained by the super-resolution processing unit, represented as I_k, is obtained by the following formula:
I_k = ↑I_{k-1} + g_k(↑I_{k-1}; W_k(z))
wherein ↑I_{k-1} is the upsampled image I_{k-1}, and g_k denotes the bi-directional network coefficients for the k-th iteration obtained from the trained model.
14. The apparatus of claim 11, wherein, for the k-th iteration, the first image is represented as I_{k-1}, the deformation coefficient is represented as p_k, and the warping function W_k(z) is obtained by the estimation unit according to:
p_k = p_{k-1} + f_k(I_{k-1}; p_{k-1})
W_k(z) = z + B_k p_k
wherein p_{k-1} is the deformation coefficient obtained in the previous iteration, f_k is a Gauss-Newton descent regressor obtained from the trained model, and B_k is a predefined dense basis for the k-th iteration obtained from the trained model.
15. An apparatus for super-resolution processing, comprising:
a processor; and
a memory storing computer-readable instructions that,
wherein the computer readable instructions, when executed by the processor, perform the method of any of claims 1-7.
16. A non-transitory storage medium containing computer readable instructions, wherein the computer readable instructions, when executed by a processor, are configured to perform the method of any one of claims 1-7.
CN201680084409.3A 2016-04-11 2016-04-11 Method and apparatus for super-resolution processing Active CN109313795B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/078960 WO2017177363A1 (en) 2016-04-11 2016-04-11 Methods and apparatuses for face hallucination

Publications (2)

Publication Number Publication Date
CN109313795A CN109313795A (en) 2019-02-05
CN109313795B (en) 2022-03-29

Family

ID=60041336

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201680084409.3A Active CN109313795B (en) 2016-04-11 2016-04-11 Method and apparatus for super-resolution processing

Country Status (2)

Country Link
CN (1) CN109313795B (en)
WO (1) WO2017177363A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110008817B (en) * 2019-01-29 2021-12-28 北京奇艺世纪科技有限公司 Model training method, image processing method, device, electronic equipment and computer readable storage medium
CN112001861B (en) * 2020-08-18 2024-04-02 香港中文大学(深圳) Image processing method and device, computer equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103530863A (en) * 2013-10-30 2014-01-22 广东威创视讯科技股份有限公司 Multistage reconstruction image super resolution method
CN104091320A (en) * 2014-07-16 2014-10-08 武汉大学 Noise human face super-resolution reconstruction method based on data-driven local feature conversion
CN105405113A (en) * 2015-10-23 2016-03-16 广州高清视信数码科技股份有限公司 Image super-resolution reconstruction method based on multi-task Gaussian process regression

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070103595A1 (en) * 2005-10-27 2007-05-10 Yihong Gong Video super-resolution using personalized dictionary
TWI419059B (en) * 2010-06-14 2013-12-11 Ind Tech Res Inst Method and system for example-based face hallucination
CN103208109B (en) * 2013-04-25 2015-09-16 武汉大学 A kind of unreal structure method of face embedded based on local restriction iteration neighborhood
WO2015192316A1 (en) * 2014-06-17 2015-12-23 Beijing Kuangshi Technology Co., Ltd. Face hallucination using convolutional neural networks

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103530863A (en) * 2013-10-30 2014-01-22 广东威创视讯科技股份有限公司 Multistage reconstruction image super resolution method
CN104091320A (en) * 2014-07-16 2014-10-08 武汉大学 Noise human face super-resolution reconstruction method based on data-driven local feature conversion
CN105405113A (en) * 2015-10-23 2016-03-16 广州高清视信数码科技股份有限公司 Image super-resolution reconstruction method based on multi-task Gaussian process regression

Also Published As

Publication number Publication date
WO2017177363A1 (en) 2017-10-19
CN109313795A (en) 2019-02-05


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant