CN110580687B - Data preprocessing method for improving hole-filling quality of a generative adversarial network - Google Patents
Data preprocessing method for improving hole-filling quality of a generative adversarial network
- Publication number
- CN110580687B (application CN201910717564.XA)
- Authority
- CN
- China
- Prior art keywords
- hole
- image
- cavity
- holes
- edge
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/77—Retouching; Inpainting; Scratch removal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
Abstract
The invention provides a data preprocessing method for improving the hole-filling quality of a generative adversarial network, comprising the following steps: binarize the depth map corresponding to the target image produced by 3D image warping, then rotate and translate it to generate a hole image; locate the connected regions in the hole image and classify them as small holes, large holes, or edge holes; translate each large hole: when the large hole overlaps a foreground object at the corresponding position of the reference image, shift it to the right if the target image generated by the 3D warp is a right view, otherwise shift it to the left; when the large hole does not overlap a foreground object at the corresponding position of the reference image, shift it to the left if the target image is a right view, otherwise shift it to the right. Starting from data preprocessing, the method generates a mask that simulates the holes in the target image, trains the generative adversarial network on the mask and the reference image, and can markedly improve the network's hole-filling quality.
Description
Technical Field
The invention relates to the technical field of image data processing, and in particular to a data preprocessing method for improving the hole-filling quality of a generative adversarial network.
Background
Depth-Image-Based Rendering (DIBR) is a widely used virtual-viewpoint synthesis technique that generates a new virtual-viewpoint image, i.e., a target image, from a reference image and its corresponding depth map. Because it is relatively simple to implement and comparatively inexpensive, the technique has attracted attention from many companies and research institutions. The most important and difficult problem in DIBR, however, is that the generated target image contains holes, which seriously degrade the quality of the virtual-viewpoint image. These holes can be filled with a generative model obtained from a generative adversarial network (GAN): training the GAN on reference images and masks yields a model that fills the holes in the target image. Existing methods apply random masks to the training-set pictures as training input, but the inventors found that a hole-filling model trained this way often fills foreground pixels into the background, so that the target image quality is poor.
Disclosure of Invention
To address the technical problem that existing methods train on randomly masked training-set pictures, yet the resulting hole-filling models often fill foreground pixels into the background and degrade the target image, the invention provides a data preprocessing method for improving the hole-filling quality of a generative adversarial network.
In order to solve the technical problems, the invention adopts the following technical scheme:
A data preprocessing method for improving the hole-filling quality of a generative adversarial network, comprising the following steps:
S1. Perform 3D image warping on the reference image and its corresponding depth map to obtain the target image to be hole-filled and its corresponding depth map. Binarize the target image's depth map, marking hole areas as 1 and non-hole areas as 0, then rotate and translate the binarized image to obtain a hole image that is horizontally aligned with the reference image, where the frame index t ∈ [0, N] and N is the number of frames;
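Step S1 can be sketched in Python. This is a hypothetical sketch, not the patent's implementation: it assumes holes are marked by a zero depth value after warping, and the function name and the use of SciPy's nearest-neighbor rotation are the author's choices.

```python
import numpy as np
from scipy.ndimage import rotate

def make_hole_image(target_depth, angle_deg=0.0, shift=(0, 0)):
    """Binarize a warped depth map (hole area -> 1, non-hole -> 0), then
    rotate and translate the binary mask so it lines up horizontally with
    the reference image. Assumption: holes carry depth value 0 after warping."""
    mask = (target_depth == 0).astype(np.uint8)             # hole area = 1
    mask = rotate(mask, angle_deg, reshape=False, order=0)  # order=0 keeps it binary
    mask = np.roll(mask, shift, axis=(0, 1))                # translate into alignment
    return mask
```

For the Ballet embodiment described later, `angle_deg` would be 2 and `shift` (0, 0); other sequences may need a nonzero shift.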
S2. Find the connected regions in the hole image, i.e., the hole regions, and classify the holes into three types: small holes, large holes, and edge holes, defined as follows:
When the number of pixels in a connected region is smaller than the threshold hole_size, the region is defined as a small hole;
When a connected region contains pixels on the image edge, it is defined as an edge hole;
When the number of pixels in a connected region is greater than or equal to hole_size and the region is not an edge hole, it is defined as a large hole;
The large holes and edge holes have the greatest impact on hole-filling quality.
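The classification in step S2 can be sketched with SciPy's connected-component labeling. A hypothetical sketch: the function name `classify_holes` and the default 4-connectivity are assumptions; `hole_size` is the threshold named in the method.

```python
import numpy as np
from scipy.ndimage import label

def classify_holes(hole_img, hole_size=200):
    """Label connected hole regions and sort them into small, large, and
    edge holes per step S2. Edge membership is checked first, matching the
    method's definition that a large hole must not be an edge hole."""
    labels, n = label(hole_img)
    h, w = hole_img.shape
    small, large, edge = [], [], []
    for k in range(1, n + 1):
        ys, xs = np.nonzero(labels == k)
        touches_edge = (ys.min() == 0 or xs.min() == 0 or
                        ys.max() == h - 1 or xs.max() == w - 1)
        if touches_edge:
            edge.append(k)
        elif ys.size < hole_size:
            small.append(k)
        else:
            large.append(k)
    return labels, small, large, edge
```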
S3. Visit the large holes in turn and process each as follows:
S31. Find the left and right edges of the current hole: in each row, the first hole point on the left is the hole's left edge point, and the last hole point on the right is its right edge point; collecting these gives two vectors representing the left and right edges of the hole, where the row index i ∈ [0, n] and n is the number of rows spanned by the hole;
S32. Subtract the reference-image depth value at each left edge point from the depth value at the corresponding right edge point to obtain a vector d;
S33. If more than half of the values in d exceed the depth-difference threshold diff, the large hole is judged to be in the first case: it overlaps a foreground object at the corresponding position of the reference image; otherwise it is in the second case: it does not overlap a foreground object there. In the first case, translate the large hole toward the background until it just no longer overlaps the foreground object; in the second case, translate it toward the foreground object until it just no longer overlaps it.
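Steps S31 to S33 can be sketched as follows. A hypothetical sketch: sampling the reference depth map one pixel outside the hole on each side is an assumption, since the patent's exact edge-point formulas are not reproduced in this text.

```python
import numpy as np

def decide_overlap(labels, k, ref_depth, diff=20):
    """For one large hole k: collect the per-row left/right edge points,
    form d_i = depth(right_i) - depth(left_i) in the reference depth map
    (step S32), and judge the hole to overlap a foreground object when more
    than half of the d_i exceed the threshold diff (step S33)."""
    rows = np.unique(np.nonzero(labels == k)[0])
    d = []
    for y in rows:
        xs = np.nonzero(labels[y] == k)[0]
        # sample just outside the hole on each side, clipped to the image
        lx = max(xs.min() - 1, 0)
        rx = min(xs.max() + 1, labels.shape[1] - 1)
        d.append(int(ref_depth[y, rx]) - int(ref_depth[y, lx]))
    d = np.asarray(d)
    return np.count_nonzero(d > diff) > d.size / 2  # True: first case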
Further, for the first case in step S33: if the target image generated by the 3D warp is a right view, the large hole must be translated to the right; if it is a left view, the large hole must be translated to the left.
Further, the translation distance is determined by the x coordinate of the background pixel nearest to the hole, to the right of its right edge or to the left of its left edge respectively.
Further, for the second case in step S33: if the target image generated by the 3D warp is a right view, the large hole must be translated to the left; if it is a left view, the large hole must be translated to the right.
Further, the translation distance is determined by the x coordinate of the foreground pixel nearest to the hole, to the left of its left edge or to the right of its right edge respectively.
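The translation itself, moving one large hole horizontally by a computed distance, can be sketched as below. A hypothetical helper: the patent's distance formulas are not reproduced in this text, so `dx` is taken as an already-computed signed offset (positive = right).

```python
import numpy as np

def translate_hole(mask, labels, k, dx):
    """Shift connected hole region k horizontally by dx pixels in the hole
    image, leaving all other holes in place (the per-hole translation of S3)."""
    region = labels == k
    out = mask.copy()
    out[region] = 0                        # erase the hole at its old position
    shifted = np.roll(region, dx, axis=1)  # slide the region left or right
    out[shifted] = 1                       # redraw it at the new position
    return out
```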
Compared with the prior art, this data preprocessing method generates, at the preprocessing stage, a mask that simulates the holes in the target image. Training the generative adversarial network on this mask and the reference image exposes it to patterns resembling a target image with holes, so the network learns a model for filling exactly those patterns. This markedly improves the network's hole-filling quality and provides a practical new data preprocessing method for applying DIBR view synthesis in 3D television.
Drawings
FIG. 1 is a schematic flow chart of the data preprocessing method for improving the hole-filling quality of a generative adversarial network according to the present invention.
FIG. 2 is a schematic diagram of the 3D image warping process according to an embodiment of the present invention.
FIG. 3 is a schematic diagram of the connected regions in a hole image according to an embodiment of the present invention.
FIG. 4a is a schematic diagram of a large hole applied to the reference image before translation according to an embodiment of the present invention.
FIG. 4b is a schematic diagram of a large hole applied to the reference image after translation according to an embodiment of the present invention.
FIG. 5a is a schematic diagram of a target image with holes according to an embodiment of the present invention.
FIG. 5b is a schematic diagram of the target-image filling result after training with an existing random mask according to an embodiment of the present invention.
FIG. 5c is a schematic diagram of the target-image filling result after training with the mask generated by the present invention according to an embodiment of the present invention.
Detailed Description
To make the technical means, creative features, objectives, and effects of the invention easy to understand, the invention is further explained below with reference to the drawings.
Referring to FIG. 1, a specific embodiment of the present invention takes the "Ballet" video sequence as the experimental object, with the target image generated by 3D image warping being a right view: the cam5 view generates the cam4 view, i.e., cam5 of the Ballet sequence serves as the reference image sequence and generates the target image sequence cam4. PSNR values are computed between the images of the generated cam4 sequence and the corresponding frames of the real cam4 sequence; the larger the PSNR, the closer the two images. The hole-size threshold hole_size is 200 and the depth-difference threshold diff is 20. Specifically, the invention provides a data preprocessing method for improving the hole-filling quality of a generative adversarial network, comprising the following steps:
S1. Perform 3D image warping on the reference image sequence and its corresponding depth maps to obtain the target images to be hole-filled and their corresponding depth maps. Binarize each target depth map, marking hole areas as 1 and non-hole areas as 0, then rotate the binarized image clockwise by 2 degrees to obtain a hole image horizontally aligned with the reference image, as shown in FIG. 2. For the Ballet sequence in this embodiment, the hole image is aligned with the reference image after the 2-degree rotation, so the translation amount is 0; for some other sequences the hole image must additionally be translated after rotation to achieve alignment. Here the frame index t ∈ [0, N], N being the number of frames. The 3D image warping process is prior art well known to those skilled in the art; see, e.g., Liu R, Tan W, Wu Y, et al., Journal of Electronic Imaging, 2013, 22(3): 033031;
S2. Find the connected regions in the hole image, i.e., the hole regions, and classify the holes into three types: small holes, large holes, and edge holes. A schematic of the connected regions is shown in FIG. 3; in this example, different connected regions can be marked in different colors for visualization. The three types are defined as follows:
When the number of pixels in a connected region is smaller than the threshold hole_size, the region is defined as a small hole;
When a connected region contains pixels on the image edge, it is defined as an edge hole, as shown by regions E, F, and G in FIG. 3;
When the number of pixels in a connected region is greater than or equal to hole_size and the region is not an edge hole, it is defined as a large hole, as shown by regions A, B, C, and D in FIG. 3;
Among these, the large holes and edge holes have the greatest impact on hole-filling quality.
S3. Visit the large holes in turn and process each as follows:
S31. Find the left and right edges of the current hole: in each row, the first hole point on the left is the hole's left edge point, and the last hole point on the right is its right edge point; collecting these gives two vectors representing the left and right edges of the hole, where the row index i ∈ [0, n] and n is the number of rows spanned by the hole;
S32. Subtract the reference-image depth value at each left edge point from the depth value at the corresponding right edge point to obtain a vector d;
S33. If more than half of the values in d exceed the depth-difference threshold diff, the large hole is judged to be in the first case: it overlaps a foreground object at the corresponding position of the reference image; otherwise it is in the second case: it does not overlap a foreground object there. In the first case, translate the large hole toward the background until it just no longer overlaps the foreground object; in the second case, translate it toward the foreground object until it just no longer overlaps it.
As a specific embodiment, for the first case in step S33: if the target image generated by the 3D warp is a right view, the large hole must be translated to the right; if it is a left view, the large hole must be translated to the left. Specifically, the translation distance is determined by the x coordinate of the background pixel nearest to the hole, to the right of its right edge or to the left of its left edge respectively.
As a specific embodiment, for the second case in step S33: if the target image generated by the 3D warp is a right view, the large hole must be translated to the left; if it is a left view, the large hole must be translated to the right. Specifically, the translation distance is determined by the x coordinate of the foreground pixel nearest to the hole, to the left of its left edge or to the right of its right edge respectively.
Step S3 is the specific processing for large holes. Edge holes remain at the image edge after rotation, so they need no processing after the image is rotated and are handled in the conventional way.
Specifically, for this embodiment there are two situations: a large hole either overlaps a foreground object at the corresponding position of the reference image, or it does not. In the first case the large hole must be translated to the right until it just no longer overlaps the foreground object; in the second case it must be translated to the left until it just no longer overlaps it. The specific translation distances follow the distance formulas for the two cases. FIGS. 4a and 4b show a large hole applied to the reference image before and after translation; as FIG. 4b shows, the large-hole regions A, B, and C belong to the second case, and the large-hole region D belongs to the first case.
Applying the hole image processed in this way to a reference image yields an image similar to the hole pattern of the target image; the reference image and the processed hole image (mask) then serve as training input for the generative adversarial network, producing a generative model that fills the holes in the target image. In the present application, taking the EdgeConnect network (the generative adversarial network in "Nazeri K, Ng E, Joseph T, et al. EdgeConnect: Generative Image Inpainting with Adversarial Edge Learning. 2019") as an example, the PSNR (Peak Signal-to-Noise Ratio) of the filled target image is compared when different masks are used to train the model. The larger the PSNR, the closer the target image is to the real image and the better the restoration quality. The model trained with the currently used random masks and reference images achieves a PSNR of 31.90, whereas the model trained with the masks generated by the present invention and reference images achieves 32.41. Referring to FIGS. 5a, 5b, and 5c as an example of the filling results, the circled areas show that the model trained with the masks generated by the present invention clearly improves hole-filling quality.
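The PSNR figure used for this comparison (31.90 vs. 32.41) is the standard peak signal-to-noise ratio, which can be computed as follows (a minimal sketch for 8-bit images, peak value 255):

```python
import numpy as np

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio between two images; larger means closer
    to the reference (infinite for identical images)."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```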
Finally, the above embodiments only illustrate the technical solutions of the invention and do not limit them. Although the invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solutions without departing from their spirit and scope, and all such changes should be covered by the claims of the present invention.
Claims (5)
1. A data preprocessing method for improving the hole-filling quality of a generative adversarial network, characterized by comprising the following steps:
S1. Perform 3D image warping on the reference image and its corresponding depth map to obtain the target image to be hole-filled and its corresponding depth map; binarize the target image's depth map, marking hole areas as 1 and non-hole areas as 0; then rotate and translate the binarized image to obtain a hole image horizontally aligned with the reference image, where the frame index t ∈ [0, N] and N is the number of frames;
S2. Find the connected regions in the hole image, i.e., the hole regions, and classify the holes into three types: small holes, large holes, and edge holes, defined as follows:
when the number of pixels in a connected region is smaller than the threshold hole_size, the region is defined as a small hole;
when a connected region contains pixels on the image edge, it is defined as an edge hole;
when the number of pixels in a connected region is greater than or equal to hole_size and the region is not an edge hole, it is defined as a large hole;
the large holes and edge holes have the greatest impact on hole-filling quality;
S3. Visit the large holes in turn and process each as follows:
S31. Find the left and right edges of the current hole: in each row, the first hole point on the left is the hole's left edge point, and the last hole point on the right is its right edge point; collecting these gives two vectors representing the left and right edges of the hole, where the row index i ∈ [0, n] and n is the number of rows spanned by the hole;
S32. Subtract the reference-image depth value at each left edge point from the depth value at the corresponding right edge point to obtain a vector d;
S33. If more than half of the values in d exceed the depth-difference threshold diff, the large hole is judged to be in the first case: it overlaps a foreground object at the corresponding position of the reference image; otherwise it is in the second case: it does not overlap a foreground object there. In the first case, translate the large hole toward the background until it just no longer overlaps the foreground object; in the second case, translate it toward the foreground object until it just no longer overlaps it.
2. The data preprocessing method for improving the hole-filling quality of a generative adversarial network according to claim 1, characterized in that, for the first case in step S33, if the target image generated by the 3D warp is a right view, the large hole must be translated to the right; if it is a left view, the large hole must be translated to the left.
3. The data preprocessing method for improving the hole-filling quality of a generative adversarial network according to claim 2, characterized in that the translation distance is determined by the x coordinate of the background pixel nearest to the hole edge.
4. The data preprocessing method for improving the hole-filling quality of a generative adversarial network according to claim 1, characterized in that, for the second case in step S33, if the target image generated by the 3D warp is a right view, the large hole must be translated to the left; if it is a left view, the large hole must be translated to the right.
5. The data preprocessing method for improving the hole-filling quality of a generative adversarial network according to claim 4, characterized in that the translation distance is determined by the x coordinate of the foreground pixel nearest to the hole edge.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910717564.XA CN110580687B (en) | 2019-08-05 | 2019-08-05 | Data preprocessing method for improving filling quality of generated countermeasure network cavity |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910717564.XA CN110580687B (en) | 2019-08-05 | 2019-08-05 | Data preprocessing method for improving filling quality of generated countermeasure network cavity |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110580687A CN110580687A (en) | 2019-12-17 |
CN110580687B true CN110580687B (en) | 2021-02-02 |
Family
ID=68810914
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910717564.XA Expired - Fee Related CN110580687B (en) | 2019-08-05 | 2019-08-05 | Data preprocessing method for improving filling quality of generated countermeasure network cavity |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110580687B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102551097B1 (en) * | 2021-10-08 | 2023-07-04 | 주식회사 쓰리아이 | Hole filling method for virtual 3 dimensional model and computing device therefor |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102307312A (en) * | 2011-08-31 | 2012-01-04 | 四川虹微技术有限公司 | Method for performing hole filling on destination image generated by depth-image-based rendering (DIBR) technology |
CN104683788A (en) * | 2015-03-16 | 2015-06-03 | 四川虹微技术有限公司 | Cavity filling method based on image reprojection |
CN109462747A (en) * | 2018-12-11 | 2019-03-12 | 成都美律科技有限公司 | Based on the DIBR system gap filling method for generating confrontation network |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9094660B2 (en) * | 2010-11-11 | 2015-07-28 | Georgia Tech Research Corporation | Hierarchical hole-filling for depth-based view synthesis in FTV and 3D video |
-
2019
- 2019-08-05 CN CN201910717564.XA patent/CN110580687B/en not_active Expired - Fee Related
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102307312A (en) * | 2011-08-31 | 2012-01-04 | 四川虹微技术有限公司 | Method for performing hole filling on destination image generated by depth-image-based rendering (DIBR) technology |
CN104683788A (en) * | 2015-03-16 | 2015-06-03 | 四川虹微技术有限公司 | Cavity filling method based on image reprojection |
CN109462747A (en) * | 2018-12-11 | 2019-03-12 | 成都美律科技有限公司 | Based on the DIBR system gap filling method for generating confrontation network |
Also Published As
Publication number | Publication date |
---|---|
CN110580687A (en) | 2019-12-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10630956B2 (en) | Image processing method and apparatus | |
Gu et al. | Model-based referenceless quality metric of 3D synthesized images using local image description | |
CN110084757B (en) | Infrared depth image enhancement method based on generation countermeasure network | |
US20180300937A1 (en) | System and a method of restoring an occluded background region | |
CN109462747B (en) | DIBR system cavity filling method based on generation countermeasure network | |
CN109753971B (en) | Correction method and device for distorted text lines, character recognition method and device | |
Rahaman et al. | Virtual view synthesis for free viewpoint video and multiview video compression using Gaussian mixture modelling | |
US9390511B2 (en) | Temporally coherent segmentation of RGBt volumes with aid of noisy or incomplete auxiliary data | |
CN111325693B (en) | Large-scale panoramic viewpoint synthesis method based on single viewpoint RGB-D image | |
CN111563908B (en) | Image processing method and related device | |
WO2002091302A3 (en) | Image sequence enhancement system and method | |
CN111179195B (en) | Depth image cavity filling method and device, electronic equipment and storage medium thereof | |
Lie et al. | 2D to 3D video conversion with key-frame depth propagation and trilateral filtering | |
Wang et al. | View generation with DIBR for 3D display system | |
CN104378619B (en) | A kind of hole-filling algorithm rapidly and efficiently based on front and back's scape gradient transition | |
US20210056668A1 (en) | Image inpainting with geometric and photometric transformations | |
CN110660131A (en) | Virtual viewpoint hole filling method based on depth background modeling | |
CN110807738A (en) | Fuzzy image non-blind restoration method based on edge image block sharpening | |
CN110580687B (en) | Data preprocessing method for improving filling quality of generated countermeasure network cavity | |
CN105791795A (en) | Three-dimensional image processing method and device and three-dimensional video display device | |
US20240161388A1 (en) | Hair rendering system based on deep neural network | |
Qiao et al. | Color correction and depth-based hierarchical hole filling in free viewpoint generation | |
Sun et al. | Seamless view synthesis through texture optimization | |
CN107103321B (en) | The generation method and generation system of road binary image | |
EP1081654A3 (en) | Method and apparatus for providing depth blur effects within a 3d videographics system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20210202 |