CN110580687B - Data preprocessing method for improving the hole filling quality of a generative adversarial network - Google Patents

Data preprocessing method for improving the hole filling quality of a generative adversarial network

Info

Publication number
CN110580687B
CN110580687B CN201910717564.XA
Authority
CN
China
Prior art keywords
hole
image
cavity
holes
edge
Prior art date
Legal status
Expired - Fee Related
Application number
CN201910717564.XA
Other languages
Chinese (zh)
Other versions
CN110580687A (en)
Inventor
刘然
郑杨婷
田逢春
钱君辉
刘亚琼
赵洋
陈希
崔珊珊
王斐斐
Current Assignee
Chongqing University
Original Assignee
Chongqing University
Priority date
Filing date
Publication date
Application filed by Chongqing University filed Critical Chongqing University
Priority to CN201910717564.XA priority Critical patent/CN110580687B/en
Publication of CN110580687A publication Critical patent/CN110580687A/en
Application granted granted Critical
Publication of CN110580687B publication Critical patent/CN110580687B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a data preprocessing method for improving the hole filling quality of a generative adversarial network, which comprises the following steps: binarize the depth map corresponding to the target image obtained by three-dimensional image transformation, then rotate and translate it to generate a hole image; locate the connected regions in the hole image and classify them as small holes, large holes and edge holes; translate each large hole as follows: when the large hole overlaps a foreground object at the corresponding position of the reference image, shift the large hole to the right if the target image generated by the three-dimensional image transformation is a right view, and to the left otherwise; when the large hole does not overlap a foreground object at the corresponding position of the reference image, shift the large hole to the left if the target image is a right view, and to the right otherwise. Starting from data preprocessing, the method generates a mask that simulates the holes in the target image and trains the generative adversarial network with this mask and the reference image, which significantly improves the hole filling quality of the generative adversarial network.

Description

Data preprocessing method for improving the hole filling quality of a generative adversarial network
Technical Field
The invention relates to the technical field of image data processing, and in particular to a data preprocessing method for improving the hole filling quality of a generative adversarial network (GAN).
Background
Depth-Image-Based Rendering (DIBR) is a widely used virtual viewpoint synthesis technique that generates a new virtual viewpoint image, i.e., a target image, from a reference image and the depth map corresponding to the reference image. The technology has received attention from numerous companies and research institutions because it is relatively simple to implement and relatively inexpensive. However, the most important and difficult problem in DIBR is that holes appear in the generated target image, which seriously degrade the quality of the virtual viewpoint image. These holes can be filled with a generative model based on a generative adversarial network: by training the generative adversarial network with reference images and masks, a generative model for filling the holes of the target image can be obtained. In the existing method, masks are applied randomly to the pictures in the training set as training input, but the inventors have found that a hole filling model trained in this way often fills foreground pixels into the background, so that the quality of the target image is poor.
Disclosure of Invention
In view of the technical problem that the existing method for filling target image holes trains with masks applied randomly to the training set pictures, so that the resulting hole filling model often fills foreground pixels into the background and the target image quality is poor, the invention provides a data preprocessing method for improving the hole filling quality of a generative adversarial network.
In order to solve the technical problems, the invention adopts the following technical scheme:
A data preprocessing method for improving the hole filling quality of a generative adversarial network, comprising the following steps:
S1, performing three-dimensional image transformation on the reference image I_R^t and its corresponding depth map Z_R^t to obtain the target image I_S^t to be subjected to hole filling and its corresponding depth map Z_S^t; then binarizing the depth map Z_S^t corresponding to the target image I_S^t, marking hole regions as 1 and non-hole regions as 0, and rotating and translating the binarized image to obtain a hole image M^t that is horizontally aligned with the reference image I_R^t, wherein t ∈ [0, N] and N is the number of frames;
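As an illustration of step S1, the following minimal Python/OpenCV sketch binarizes the warped depth map and then rotates and translates the resulting mask. The assumption that hole pixels carry a depth value of 0 after warping, and the names make_hole_image, angle_deg, shift_x and shift_y, are illustrative and not part of the patent.

```python
import cv2
import numpy as np

def make_hole_image(target_depth, angle_deg=0.0, shift_x=0, shift_y=0):
    """Binarize the warped depth map (hole = 1, non-hole = 0), then rotate and
    translate the binary mask so it is horizontally aligned with the reference image."""
    # Assumption: pixels that received no depth during warping are stored as 0.
    mask = (target_depth == 0).astype(np.uint8)

    h, w = mask.shape
    # Rotation about the image centre; OpenCV treats positive angles as counter-clockwise.
    m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, 1.0)
    m[0, 2] += shift_x
    m[1, 2] += shift_y
    aligned = cv2.warpAffine(mask, m, (w, h), flags=cv2.INTER_NEAREST)
    return aligned
```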
S2, finding the connected regions in the hole image, i.e., the hole regions, and classifying the holes in the hole image into three types, namely small holes, large holes and edge holes, as follows:
when the number of pixels of a connected region is smaller than the threshold hole_size, the connected region is defined as a small hole;
when a connected region contains pixels located at the edge of the image, it is defined as an edge hole;
when the number of pixels of a connected region is greater than or equal to hole_size and the region is not an edge hole, it is defined as a large hole;
it is the large holes and the edge holes that have the greatest influence on hole filling quality (a sketch of this classification step is given below);
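A hedged sketch of the classification in step S2, written in Python with OpenCV connected-component labelling, is given below. The threshold value of 200 is taken from the embodiment described later; the function name classify_holes and the tie-breaking rule that an edge-touching region is always an edge hole, regardless of its size, are assumptions of this sketch.

```python
import cv2
import numpy as np

HOLE_SIZE = 200  # hole_size threshold used in the embodiment

def classify_holes(hole_image, hole_size=HOLE_SIZE):
    """Label the connected hole regions and sort their labels into
    small holes, large holes and edge holes."""
    n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(
        hole_image.astype(np.uint8), connectivity=8)

    h, w = hole_image.shape
    small, large, edge = [], [], []
    for lbl in range(1, n_labels):              # label 0 is the non-hole background
        x, y, bw, bh, area = stats[lbl]
        touches_edge = (x == 0 or y == 0 or x + bw == w or y + bh == h)
        if touches_edge:
            edge.append(lbl)                    # edge hole
        elif area >= hole_size:
            large.append(lbl)                   # large hole
        else:
            small.append(lbl)                   # small hole
    return labels, small, large, edge
```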
S3, visiting the large holes in turn and processing each large hole as follows:
S31, finding the left and right edges of the current hole: for each row, defining the first hole point on the left in the horizontal direction as the left edge point p_l^i of the hole and the last hole point on the right in the horizontal direction as the right edge point p_r^i, and obtaining vector groups P_l = (p_l^0, ..., p_l^n) and P_r = (p_r^0, ..., p_r^n) that represent the left edge and the right edge of the hole respectively, wherein i ∈ [0, n] and n is the number of rows of the current hole;
S32, subtracting the value of the reference image depth map at the left edge from its value at the right edge to obtain a vector d, whose components are d_i = Z_R^t(p_r^i) - Z_R^t(p_l^i);
S33, if at least half of the values in the vector d are greater than the depth difference threshold diff, the large hole is judged to be in the first case, i.e., the large hole overlaps a foreground object at the corresponding position of the reference image; otherwise, the large hole is judged to be in the second case, i.e., it does not overlap a foreground object at the corresponding position of the reference image. For the first case, the large hole needs to be translated toward the background to a position where it just no longer overlaps the foreground object; for the second case, the large hole needs to be translated toward the foreground object to a position where it just does not overlap it (a sketch of steps S31 to S33 is given below).
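The following sketch illustrates steps S31 to S33 for a single large hole. It reuses the labels produced by the hypothetical classify_holes sketch above and the embodiment's threshold diff = 20; the function name and parameters are assumptions.

```python
import numpy as np

DIFF = 20  # depth difference threshold used in the embodiment

def hole_overlaps_foreground(labels, label, ref_depth, diff=DIFF):
    """Steps S31-S33 for one large hole: build the per-row left/right edge points,
    compute d_i = Z_R(p_r^i) - Z_R(p_l^i), and decide whether the hole overlaps a
    foreground object in the reference image (first case) or not (second case)."""
    ys, xs = np.where(labels == label)
    rows = np.unique(ys)

    left_edge, right_edge, d = [], [], []
    for y in rows:
        row_xs = xs[ys == y]
        xl, xr = row_xs.min(), row_xs.max()   # left and right edge points of this row
        left_edge.append((y, xl))
        right_edge.append((y, xr))
        d.append(int(ref_depth[y, xr]) - int(ref_depth[y, xl]))

    d = np.asarray(d)
    # First case if at least half of the depth differences exceed the threshold.
    overlaps = np.count_nonzero(d > diff) >= len(d) / 2
    return overlaps, left_edge, right_edge
```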
Further, for the first case in step S33: if the target image I_S^t generated by the three-dimensional image transformation is a right view, the large hole needs to be translated to the right; if the target image I_S^t is a left view, the large hole needs to be translated to the left.
Further, the distance of the translation is computed from x_near^i, where x_near^i denotes the x coordinate of the background pixel nearest to p_r^i on the right or to p_l^i on the left.
Further, for the second case in step S33: if the target image I_S^t generated by the three-dimensional image transformation is a right view, the large hole needs to be translated to the left; if the target image I_S^t is a left view, the large hole needs to be translated to the right.
Further, the distance of the translation is computed from x_near^i, where x_near^i denotes the x coordinate of the foreground pixel nearest to p_l^i on the left or to p_r^i on the right.
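The exact translation-distance formulas are given in the patent only as figures, so the sketch below encodes just what the text states: the direction rule for the two cases and the mechanics of shifting a labelled large hole horizontally inside the hole image. The function names and the convention that +1 denotes a shift to the right are assumptions; the shift magnitude would be derived from x_near^i as described above.

```python
import numpy as np

def translation_direction(overlaps_foreground, target_is_right_view):
    """Direction rule of the two cases: toward the background when the large hole
    overlaps the foreground object (first case), toward the foreground otherwise
    (second case). +1 means shift right, -1 means shift left."""
    if overlaps_foreground:                        # first case
        return 1 if target_is_right_view else -1   # right view -> shift right
    return -1 if target_is_right_view else 1       # second case: right view -> shift left

def shift_large_hole(hole_image, labels, label, shift):
    """Erase one labelled large hole and redraw it `shift` pixels to the side."""
    ys, xs = np.where(labels == label)
    hole_image[ys, xs] = 0
    new_xs = np.clip(xs + shift, 0, hole_image.shape[1] - 1)
    hole_image[ys, new_xs] = 1
    return hole_image
```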
Compared with the prior art, the data preprocessing method for improving the hole filling quality of a generative adversarial network starts from data preprocessing: it generates a mask that simulates the holes in the target image and uses this mask together with the reference image to train the generative adversarial network. The network thus sees hole patterns similar to those of the target image containing holes and can learn a model for filling holes of this pattern during training, so the hole filling quality of the generative adversarial network is significantly improved, providing a new data preprocessing method of practical value for the application of DIBR view synthesis in three-dimensional television.
Drawings
FIG. 1 is a schematic flow chart of the data preprocessing method for improving the hole filling quality of a generative adversarial network according to the present invention.
Fig. 2 is a schematic diagram of a three-dimensional image transformation process according to an embodiment of the present invention.
FIG. 3 is a schematic diagram of connected regions in a hole image according to an embodiment of the present invention.
FIG. 4a is a schematic diagram of the large holes applied to the reference image before translation, according to an embodiment of the present invention.
FIG. 4b is a schematic diagram of the large holes applied to the reference image after translation, according to an embodiment of the present invention.
Fig. 5a is a schematic diagram of a target image containing holes according to an embodiment of the present invention.
FIG. 5b is a schematic diagram of a target image filling result after training with an existing random mask according to an embodiment of the present invention.
FIG. 5c is a schematic diagram of the target image filling result after training with the mask generated by the present invention, according to an embodiment of the present invention.
Detailed Description
In order to make the technical means, characteristics, objectives and effects of the invention easy to understand, the invention is further explained below with reference to the specific drawings.
Referring to FIG. 1, a specific embodiment of the present invention takes the "Ballet" video sequence as the experimental object, with the target image I_S^t generated by three-dimensional image transformation being a right view (the cam5 view generates the cam4 view; that is, cam5 of the Ballet video sequence is used as the reference image sequence to generate the target image sequence cam4, and the PSNR values between the images of the generated cam4 sequence and the images of the corresponding frames of the real cam4 sequence are calculated; the larger the PSNR, the closer the two images are). The hole size threshold hole_size is set to 200 and the depth difference threshold diff to 20. Specifically, the invention provides a data preprocessing method for improving the hole filling quality of a generative adversarial network, which comprises the following steps:
S1, performing three-dimensional image transformation on the reference image I_R^t of the video sequence and its corresponding depth map Z_R^t to obtain the target image I_S^t containing holes, which is to be subjected to hole filling, and its corresponding depth map Z_S^t; then binarizing the depth map Z_S^t corresponding to the target image I_S^t, marking hole regions as 1 and non-hole regions as 0, and rotating the binarized image clockwise by 2 degrees to obtain a hole image M^t that is horizontally aligned with the reference image I_R^t, as shown in FIG. 2. For the Ballet video sequence of this embodiment, the hole image is horizontally aligned with the reference image after the 2-degree rotation, so the translation amount is 0; for some other sequences the hole image needs an additional translation after rotation to align it with the reference image. Here t ∈ [0, N] and N is the number of frames. The three-dimensional image transformation process is prior art well known to those skilled in the art; reference may be made to the method in "Liu R, Tan W, Wu Y, et al. Journal of Electronic Imaging, 2013, 22(3): 033031";
S2, finding the connected regions in the hole image, i.e., the hole regions, and classifying the holes in the hole image into three types, namely small holes, large holes and edge holes. A schematic diagram of the connected regions in the hole image is shown in FIG. 3; in this example, different connected regions can be marked in different colors for visualization. The three types of holes are defined as follows:
when the number of pixels of a connected region is smaller than the threshold hole_size, the connected region is defined as a small hole;
when a connected region contains pixels located at the edge of the image, it is defined as an edge hole, as shown by regions E, F and G in FIG. 3;
when the number of pixels of a connected region is greater than or equal to hole_size and the region is not an edge hole, it is defined as a large hole, as shown by regions A, B, C and D in FIG. 3;
among them, the large holes and the edge holes have the greatest influence on hole filling quality.
S3, visiting the large holes in turn and processing each large hole as follows:
S31, finding the left and right edges of the current hole: for each row, defining the first hole point on the left in the horizontal direction as the left edge point p_l^i of the hole and the last hole point on the right in the horizontal direction as the right edge point p_r^i, and obtaining vector groups P_l = (p_l^0, ..., p_l^n) and P_r = (p_r^0, ..., p_r^n) that represent the left edge and the right edge of the hole respectively, wherein i ∈ [0, n] and n is the number of rows of the current hole;
S32, subtracting the value of the reference image depth map at the left edge from its value at the right edge to obtain a vector d, whose components are d_i = Z_R^t(p_r^i) - Z_R^t(p_l^i);
S33, if at least half of the values in the vector d are greater than the depth difference threshold diff, the large hole is judged to be in the first case, i.e., the large hole overlaps a foreground object at the corresponding position of the reference image; otherwise, the large hole is judged to be in the second case, i.e., it does not overlap a foreground object at the corresponding position of the reference image. For the first case, the large hole needs to be translated toward the background to a position where it just no longer overlaps the foreground object; for the second case, the large hole needs to be translated toward the foreground object to a position where it just does not overlap it.
As a specific embodiment, for the first case in step S33, if the three-dimensional image is transformed into the generated target image
Figure BDA00021559746400000614
The large cavity needs to be translated to the right when the view is a right view; if the three-dimensional image is transformedGenerated target image
Figure BDA00021559746400000613
The large cavity needs to be translated to the left in a left view; specifically, the translation distance is as follows:
Figure BDA0002155974640000066
Figure BDA0002155974640000067
wherein the content of the first and second substances,
Figure BDA0002155974640000068
indicating distance
Figure BDA0002155974640000069
On the right or
Figure BDA00021559746400000610
The x coordinate of the nearest background pixel to the left.
As a specific embodiment, for the second case in step S33, if the three-dimensional image is transformed into the generated target image
Figure BDA00021559746400000611
The large cavity needs to be translated to the left in a right view; if the three-dimensional image is transformed to generate a target image
Figure BDA00021559746400000612
The large cavity needs to be translated to the right when the view is a left view; specifically, the translation distance is as follows:
Figure BDA0002155974640000071
Figure BDA0002155974640000072
wherein the content of the first and second substances,
Figure BDA0002155974640000073
indicating distance
Figure BDA0002155974640000074
Left side or
Figure BDA0002155974640000075
And the x coordinate of the nearest foreground pixel point on the right.
Step S3 above applies specifically to large holes. Edge holes remain at the edge of the image after rotation, so they need no further processing once the image has been rotated and are kept as they are.
Specifically, for this embodiment the large holes fall into the two cases described above: either the large hole overlaps a foreground object at the corresponding position of the reference image, or it does not. For the first case, the large hole needs to be translated to the right to a position where it just no longer overlaps the foreground object; for the second case, it needs to be translated to the left to a position where it just does not overlap the foreground object; the specific translation distances follow the translation rules for the two cases given above. In this embodiment, schematic diagrams of the large holes applied to the reference image before and after translation are shown in FIG. 4a and FIG. 4b. As can be seen from FIG. 4b, the large hole regions A, B and C belong to the second case, and the large hole region D belongs to the first case.
The hole image processed in this way is applied to the reference image to obtain an image similar to the hole pattern of the target image; that is, the reference image and the processed hole image (mask) are used as the input for training the generative adversarial network, and a generative model for filling the holes of the target image is obtained. In the present application, taking the EdgeConnect network as an example (i.e., the generative adversarial network in the document "Nazeri K, Ng E, Joseph T, et al. EdgeConnect: Generative Image Inpainting with Adversarial Edge Learning [J]. 2019."), the PSNR (Peak Signal to Noise Ratio) after target image filling is compared when models are trained with different masks. The larger the PSNR, the closer the target image is to the real image and the better the restoration quality is considered to be. The PSNR value obtained with the currently used random masks and the reference images is 31.90, whereas the PSNR value obtained with the masks generated by the present invention and the reference images is 32.41. FIG. 5a, FIG. 5b and FIG. 5c show an example of the target image filling results; as can be seen from the circled regions, the model trained with the masks generated by the present invention clearly improves the hole filling quality. A hedged sketch of the masking and PSNR computation follows.
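The sketch below covers the two operations described above: applying the processed hole image to the reference image to form a training input, and computing the PSNR used to compare the filling results. Whether masked pixels are zeroed out or handled by the network's own mask channel (as EdgeConnect does) is an implementation detail the patent does not fix; the function names are assumptions.

```python
import numpy as np

def apply_mask_to_reference(reference_image, hole_image):
    """Black out the reference-image pixels covered by the processed hole image,
    yielding a training sample whose holes mimic those of the target image."""
    masked = reference_image.copy()
    masked[hole_image.astype(bool)] = 0
    return masked

def psnr(filled_image, real_image, max_val=255.0):
    """Peak Signal to Noise Ratio between the filled target image and the real
    image of the corresponding frame; a larger PSNR means the two are closer."""
    diff = filled_image.astype(np.float64) - real_image.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(max_val ** 2 / mse)
```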
Compared with the prior art, the data preprocessing method for improving the hole filling quality of a generative adversarial network starts from data preprocessing: it generates a mask that simulates the holes in the target image and uses this mask together with the reference image to train the generative adversarial network. The network thus sees hole patterns similar to those of the target image containing holes and can learn a model for filling holes of this pattern during training, so the hole filling quality of the generative adversarial network is significantly improved, providing a new data preprocessing method of practical value for the application of DIBR view synthesis in three-dimensional television.
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit them. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention, and all such modifications should be covered by the claims of the present invention.

Claims (5)

1. A data preprocessing method for improving the hole filling quality of a generative adversarial network, characterized by comprising the following steps:
S1, performing three-dimensional image transformation on a reference image I_R^t and its corresponding depth map Z_R^t to obtain a target image I_S^t to be subjected to hole filling and its corresponding depth map Z_S^t; then binarizing the depth map Z_S^t corresponding to the target image I_S^t, marking hole regions as 1 and non-hole regions as 0, and then rotating and translating the binarized depth map Z_S^t to obtain a hole image M^t that is horizontally aligned with the reference image I_R^t, wherein t ∈ [0, N] and N is the number of frames;
S2, finding the connected regions in the hole image, i.e., the hole regions, and classifying the holes in the hole image into three types, namely small holes, large holes and edge holes, as follows:
when the number of pixels of a connected region is smaller than the threshold hole_size, the connected region is defined as a small hole;
when a connected region contains pixels located at the edge of the image, it is defined as an edge hole;
when the number of pixels of a connected region is greater than or equal to hole_size and the region is not an edge hole, it is defined as a large hole;
wherein the large holes and the edge holes have the greatest influence on hole filling quality;
S3, visiting the large holes in turn and processing each large hole as follows:
S31, finding the left and right edges of the current hole: for each row, defining the first hole point on the left in the horizontal direction as the left edge point p_l^i of the hole and the last hole point on the right in the horizontal direction as the right edge point p_r^i, and obtaining vector groups P_l = (p_l^0, ..., p_l^n) and P_r = (p_r^0, ..., p_r^n) that represent the left edge and the right edge of the hole respectively, wherein i ∈ [0, n] and n is the number of rows of the current hole;
S32, subtracting the value of the reference image depth map at the left edge from its value at the right edge to obtain a vector d, whose components are d_i = Z_R^t(p_r^i) - Z_R^t(p_l^i);
S33, if at least half of the values in the vector d are greater than the depth difference threshold diff, judging the large hole to be in the first case, i.e., the large hole overlaps a foreground object at the corresponding position of the reference image; otherwise, judging the large hole to be in the second case, i.e., it does not overlap a foreground object at the corresponding position of the reference image; for the first case, translating the large hole toward the background to a position where it just no longer overlaps the foreground object; for the second case, translating the large hole toward the foreground object to a position where it just does not overlap it.
2. The data preprocessing method for improving the hole filling quality of a generative adversarial network according to claim 1, characterized in that, for the first case in step S33, if the target image I_S^t generated by the three-dimensional image transformation is a right view, the large hole is translated to the right; if the target image I_S^t is a left view, the large hole is translated to the left.
3. The data preprocessing method for improving the hole filling quality of a generative adversarial network according to claim 2, characterized in that the distance of the translation is computed from x_near^i, where x_near^i denotes the x coordinate of the background pixel nearest to p_r^i on the right or to p_l^i on the left.
4. The data preprocessing method for improving the hole filling quality of a generative adversarial network according to claim 1, characterized in that, for the second case in step S33, if the target image I_S^t generated by the three-dimensional image transformation is a right view, the large hole is translated to the left; if the target image I_S^t is a left view, the large hole is translated to the right.
5. The data preprocessing method for improving the hole filling quality of a generative adversarial network according to claim 4, characterized in that the distance of the translation is computed from x_near^i, where x_near^i denotes the x coordinate of the foreground pixel nearest to p_l^i on the left or to p_r^i on the right.
CN201910717564.XA 2019-08-05 2019-08-05 Data preprocessing method for improving the hole filling quality of a generative adversarial network Expired - Fee Related CN110580687B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910717564.XA CN110580687B (en) 2019-08-05 2019-08-05 Data preprocessing method for improving the hole filling quality of a generative adversarial network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910717564.XA CN110580687B (en) 2019-08-05 2019-08-05 Data preprocessing method for improving the hole filling quality of a generative adversarial network

Publications (2)

Publication Number Publication Date
CN110580687A CN110580687A (en) 2019-12-17
CN110580687B true CN110580687B (en) 2021-02-02

Family

ID=68810914

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910717564.XA Expired - Fee Related CN110580687B (en) 2019-08-05 2019-08-05 Data preprocessing method for improving the hole filling quality of a generative adversarial network

Country Status (1)

Country Link
CN (1) CN110580687B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102551097B1 (en) * 2021-10-08 2023-07-04 주식회사 쓰리아이 Hole filling method for virtual 3 dimensional model and computing device therefor

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102307312A (en) * 2011-08-31 2012-01-04 四川虹微技术有限公司 Method for performing hole filling on destination image generated by depth-image-based rendering (DIBR) technology
CN104683788A (en) * 2015-03-16 2015-06-03 四川虹微技术有限公司 Cavity filling method based on image reprojection
CN109462747A (en) * 2018-12-11 2019-03-12 成都美律科技有限公司 Based on the DIBR system gap filling method for generating confrontation network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9094660B2 (en) * 2010-11-11 2015-07-28 Georgia Tech Research Corporation Hierarchical hole-filling for depth-based view synthesis in FTV and 3D video

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102307312A (en) * 2011-08-31 2012-01-04 四川虹微技术有限公司 Method for performing hole filling on destination image generated by depth-image-based rendering (DIBR) technology
CN104683788A (en) * 2015-03-16 2015-06-03 四川虹微技术有限公司 Cavity filling method based on image reprojection
CN109462747A (en) * 2018-12-11 2019-03-12 成都美律科技有限公司 Based on the DIBR system gap filling method for generating confrontation network

Also Published As

Publication number Publication date
CN110580687A (en) 2019-12-17

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210202