CN112819726A - Light field rendering artifact removing method - Google Patents
- Publication number
- CN112819726A CN112819726A CN202110176430.9A CN202110176430A CN112819726A CN 112819726 A CN112819726 A CN 112819726A CN 202110176430 A CN202110176430 A CN 202110176430A CN 112819726 A CN112819726 A CN 112819726A
- Authority
- CN
- China
- Prior art keywords
- image
- parametric
- artifact
- light field
- parametric image
- Prior art date
- Legal status (assumed, not a legal conclusion)
- Pending
Classifications
- G06T5/73: Deblurring; Sharpening
- G06T15/205: Image-based rendering
- G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T5/80: Geometric correction
- G06T7/13: Edge detection
- G06T2207/10052: Images from lightfield camera
- G06T2207/20221: Image fusion; Image merging
Abstract
A light field rendering artifact removal method comprises the following steps: generating a corresponding parametric image from the light field source image and the parameterized mesh; marking artifact regions on the parametric image; obtaining an artifact-free image corresponding to the parametric image; fusing the parametric image with its artifact-free image; and modifying the light field source image based on the fused parametric image. By modifying the light field source image so that it matches the scene geometry exactly, the method removes artifacts from light field rendering and presents a more realistic result to the user. It also shortens processing time and avoids the model-inference overhead of learning-based approaches during rendering.
Description
Technical Field
The invention relates to the technical field of three-dimensional computer graphics, in particular to a light field rendering artifact removing method based on edge estimation.
Background
In real-time rendering for three-dimensional computer graphics, light field rendering can produce photorealistic results and can represent scenes that are difficult for other rendering methods, such as anisotropic materials, participating media, and global illumination. It can also exploit the rasterization and texture-mapping capabilities of existing graphics hardware, and is widely used in film and entertainment, cultural communication, cultural heritage preservation, smart cities, and other fields.
Existing light field rendering methods fall into two categories: those that require no geometry and those that require geometry.
Geometry-free light field rendering must sample the light field at extremely high density, so for the same amount of data its renderable viewing range is much smaller than that of geometry-based light field rendering.
Geometry-based light field rendering can be further divided into methods that require precise geometry and methods that require only fuzzy (approximate) geometry.
Methods requiring precise geometry are suitable for synthetic scenes whose geometry is known. When applied to real scenes, where precise real-world geometry is difficult to obtain, they produce obvious rendering artifacts in regions where the geometry is wrong, giving a poor user experience.
Methods requiring only fuzzy geometry first render an image with artifacts and then generate an artifact-free image using machine learning. This approach requires training a separate machine learning model for each kind of scene, collecting a large number of images as training data, and a long training process (hours to days). Model inference at render time also takes considerable time (tens to hundreds of milliseconds), and there is no guarantee that the model always produces artifact-free images.
Disclosure of Invention
To overcome these defects in the prior art, the invention aims to provide a light field rendering artifact removal method that removes artifacts from light field rendering and presents a more realistic result to the user by modifying the light field source image so that it matches the scene geometry exactly.
In order to achieve the above object, the present invention provides a light field rendering artifact removing method, which includes the following steps:
generating a corresponding parametric image according to the light field source image and the parametric grid;
marking artifact regions on the parametric image;
acquiring an artifact-free image corresponding to the parametric image;
fusing the parametric image and the artifact-free image thereof;
modifying the light field source image based on the fused parametric image.
Further, the step of generating a corresponding parametric image from the light field source image and the parametric mesh further comprises,
converting the fuzzy geometry of the scene into a mesh representation;
and parameterizing the grids, and calculating a corresponding parameterized image for each light field source image.
Further, the step of marking, for each parametric image, the regions where artifacts occur on the parametric image further comprises,
marking the region near a geometric self-occlusion edge that belongs to the background on the parametric image;
marking the geometric edge region on the parametric image.
Further, the step of obtaining the artifact-free image corresponding to the parametric image further comprises selecting a plurality of reference parametric images for each parametric image;
and calculating the artifact-free image corresponding to the parametric image according to the color value of the reference parametric image and the artifact-free area.
Further, the step of calculating the artifact-free image corresponding to the parametric image further includes setting a weight of an area where the artifact occurs in each reference parametric image to 0 and a weight of an area where the artifact does not occur to 1, and calculating a weighted average image of the reference parametric image as the artifact-free image corresponding to the parametric image.
Further, the step of fusing the parametric image and the artifact-free image further comprises,
calculating the divergence of the gradient of the corresponding artifact-free image of the parametric image in the artifact-appearing region;
constructing a linear system by taking the pixel color of the parameterized image in the area where the artifact occurs as an unknown number, and enabling the divergence of each pixel gradient to be equal to the divergence of the corresponding pixel gradient of the artifact-free image;
and solving a least square solution of the linear system, and modifying the color of the corresponding pixel on the parametric image according to the value of the least square solution.
Further, the step of modifying the light field source image based on the fused parametric image further comprises performing mesh parametric inverse transformation on the fused parametric image to modify the light field source image.
Furthermore, the step of performing the inverse mesh parameterization transformation on the fused parametric image to modify the light field source image further comprises, for each light field source image, marking the region corresponding to the artifact region on its parametric image, and replacing the color values in that region of the light field source image with the color values at the corresponding positions of the parametric image.
To achieve the above object, the present invention further provides an electronic device, including a memory and a processor, where the memory stores a computer program running on the processor, and the processor executes the computer program to perform the steps of the light field rendering artifact removing method as described above.
To achieve the above object, the present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed, performs the steps of the light field rendering artifact removal method as described above.
The light field rendering artifact removing method, the electronic device and the computer readable storage medium have the following beneficial effects:
1) By directly modifying the light field source images using the fuzzy geometry so that the images exactly match the geometry, a light field rendering method that requires precise geometry can be used for real-time rendering, removing artifacts while reducing the required light field sampling density.
2) Artifact removal is achieved without model training: the light field data can be processed directly, shortening processing time and eliminating model inference overhead during rendering.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a flow chart of a light field rendering artifact removal method according to the present invention;
FIG. 2 is a schematic diagram of marking, on the parametric image, the region near a geometric self-occlusion edge and belonging to the background where artifacts may appear, according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of marking the geometric edge region of the parametric image where artifacts may appear, according to an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
Fig. 1 is a flowchart of a light field rendering artifact removing method according to the present invention, and the light field rendering artifact removing method according to the present invention will be described in detail with reference to fig. 1.
First, in step 101, a parameterization of a mesh representation of a scene is computed, and a corresponding parameterized image is generated from a light field source image and the parameterized mesh.
Preferably, the step of generating a parametric image is specifically executable to: converting the fuzzy geometry of the scene into a mesh representation; parameterizing the grid; and after the parameterization is finished, calculating a corresponding parameterized image for each light field source image.
In step 102, for each parametric image, the regions where artifacts may occur on the parametric image are marked, such as the region near a geometric self-occlusion edge and belonging to the background (as shown in FIG. 2) and the geometric edge region (as shown in FIG. 3).
Preferably, the step of marking regions of the parametric image where artifacts may occur may be specifically performed as: marking a region near the geometric self-occlusion edge and belonging to the background on the parametric image; marking a geometric edge region on the parametric image.
In step 103, color information of regions of the parametric image at each viewing angle where no artifact is likely to occur is fused, and for each parametric image, an artifact-free image corresponding to the parametric image is calculated.
Preferably, the step of calculating the artifact-free image corresponding to the parametric image may be specifically performed as: selecting a plurality of reference parametric images for each parametric image; and calculating the artifact-free image corresponding to the parametric image from the color values of the reference parametric images in their regions where artifacts are not likely to occur.
In step 104, the parametric image and its corresponding artifact-free image are fused, so that the fused image retains the color information unique to this view angle while removing possible artifacts, with a natural and seamless transition.
Preferably, the step of fusing the parametric image and the artifact-free image may be specifically performed as: calculating the divergence of the gradient of the corresponding artifact-free image of the parametric image in the area where the artifact is likely to occur; constructing a linear system by taking the pixel color of the parametric image in the area where the artifact possibly occurs as an unknown number, and enabling the divergence of the gradient of each pixel to be equal to the divergence of the gradient of the corresponding pixel of the artifact-free image; and solving a least square solution of the linear system, and modifying the color of the corresponding pixel on the parametric image according to the value of the solution.
In step 105, the light field source image is modified by inverse mesh parametric transformation based on the fused parametric image.
Preferably, the step of modifying the light field source image may be specifically performed as: for each light field source image, marking a region corresponding to a region of the parametric image where the artifact is likely to occur, and replacing the color value in the region of the light field source image with the color value at the corresponding position of the parametric image.
The light field rendering artifact removing method of the present invention is further described with reference to a specific embodiment.
(1) Convert the three-dimensional point cloud representation of the scene into a mesh representation using Poisson surface reconstruction, compute a parameterization of the mesh using Microsoft's open-source UVAtlas tool, and generate the corresponding parametric images;
in this embodiment, the step of generating the parametric image may be specifically executed as: a light-field source image is selected, a viewport resolution, e.g., 1024x1024, is selected, and the parameterized mesh is rendered using OpenGL, with vertex input including vertex position and texture coordinates. In the vertex shader, the xy component of the gl _ Position is set to 2 x vertex texture coordinate-1, the z component is set to 0, the w component is set to 1, and the vertex three-dimensional Position is output at the same time. And calculating the position of the projected vertex corresponding to the fragment in the fragment shader according to the visual matrix and the projection matrix of the corresponding visual angle of the light field source image, outputting the color obtained by sampling the light field source image at the corresponding position, and obtaining a drawing result image which is a parameterized image corresponding to the light field source image.
(2) Marking a region which is near a geometric self-shielding edge and belongs to the background on the parametric image and a geometric edge region, wherein the regions jointly form a region which can generate an artifact on the parametric image;
in this embodiment, the step of marking the region near the geometric self-occlusion edge and belonging to the background on the parametric image may be specifically performed as: and selecting a visual angle and a corresponding parameterized image, and drawing a depth image of the grid under the visual angle by using OpenGL. A depth difference threshold is selected, such as 1% of the maximum depth of the scene. If the depth of the position corresponding to the fragment in the view angle is calculated in a fragment shader, the depth of the position corresponding to the fragment is projected to the position behind the depth image, if the depth of the position corresponding to the fragment is subtracted from the position corresponding to the fragment and is greater than a threshold value, 1 is output, otherwise 0 is output, and on a drawing result image, an edge is extracted by using a Canny edge extraction algorithm, and the extracted edge is called as a shielding edge. When the occlusion edge is extracted, a smaller depth difference threshold and a larger depth difference threshold (for example, twice the smaller threshold) are used for extraction twice, and pixels extracted as the occlusion edge in the two extraction processes are recorded, and the pixels are called as suspicious region seed points. Selecting a distance threshold, such as conservative estimation of geometric accuracy, and calculating the minimum distance between the three-dimensional position corresponding to the pixel on the parametric image and the seed point of the suspicious region, wherein all pixels with the minimum distance smaller than the distance threshold form a region which is near the geometric self-occlusion edge and belongs to the background.
In this embodiment, the step of marking the geometric edge region on the parametric image may be specifically implemented as follows: draw the mesh under the view angle corresponding to the parametric image using OpenGL, setting gl_Position to the vertex position after the view and projection transformations, and output 1 in the fragment shader. On the resulting image, extract edges with the Canny edge detector; the extracted edges are called silhouette edges. Select a pixel-count threshold, such as the projected diameter, at the average scene depth, of a sphere whose diameter is a conservative estimate of the geometric accuracy, and dilate the silhouette-edge image with a dilation kernel of that size; the dilated image is called the edge-region image. Then draw the parameterized mesh as in the parametric image generation step; in the fragment shader, project the fragment's position into the edge-region image and output the sampled value. The pixels whose color values are non-zero on the resulting image form the geometric edge region.
(3) Selecting a plurality of reference parametric images for each parametric image, and calculating a weighted average image of the reference parametric images as an artifact-free image corresponding to the parametric image;
in this embodiment, the step of selecting the reference parameterized image may be specifically executed as: for each reference parametric image, a quantity threshold, e.g. 10, is selected, and 10 nearest neighbor views of its corresponding view are calculated, and the parametric image corresponding to these neighbor views is used as the reference parametric image of the parametric image.
In this embodiment, the step of calculating a weighted average image of the reference parametric image may be specifically performed as: and setting the weight of the area where the artifact is likely to appear in each reference parametric image to be 0 and the weight of the area where the artifact is not likely to appear to be 1, and calculating a weighted average image of the reference parametric images.
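The masked weighted average above can be sketched as follows, assuming the reference parametric images and their artifact masks are stacked numpy arrays (names and shapes are illustrative).

```python
import numpy as np

def artifact_free_image(ref_images, artifact_masks, eps=1e-8):
    """Weighted average of reference parametric images: weight 0 where a
    reference may contain artifacts, weight 1 elsewhere.

    ref_images:     (N, H, W, 3) float array of reference images
    artifact_masks: (N, H, W) bool array, True where artifacts may occur
    """
    weights = (~artifact_masks).astype(float)[..., None]  # (N, H, W, 1)
    total = weights.sum(axis=0)
    # eps guards pixels masked out in every reference image
    return (ref_images * weights).sum(axis=0) / np.maximum(total, eps)

# Two 1x1 references: the second is masked out, so only the first counts.
refs = np.array([[[[0.2, 0.2, 0.2]]], [[[0.8, 0.8, 0.8]]]])
masks = np.array([[[False]], [[True]]])
fused = artifact_free_image(refs, masks)
```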
(4) Constructing a linear system of pixel colors of the parametric image in a region where the artifact possibly occurs, and solving the fused parametric image;
in this embodiment, the step of constructing a linear system of pixel colors of the parametric image in the region where the artifact may occur may be specifically performed as: and calculating the divergence of the gradient of the artifact-free image corresponding to the parametric image. For each pixel in the area where the artifact may appear, the divergence of the gradient is equal to the divergence of the gradient of the corresponding pixel of the image without the artifact, and a linear equation is obtained. The linear equations corresponding to all pixels in the region where the artifact may occur are connected, it should be noted that some invalid pixels exist on the parametric image, which are not located in the region where the artifact may occur or in the region where the artifact may not occur, and all the linear coefficients corresponding to these pixels are set to 0, so as to obtain the linear system.
In this embodiment, the step of solving the fused parametric image may be specifically executed as follows: solve the least squares solution of the above linear system using the conjugate gradient method, and set the colors of the pixels in the region where artifacts may occur on the parametric image to the values of their corresponding unknowns.
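The linear system and its conjugate gradient solution can be sketched as follows. This is a minimal Poisson-fusion sketch on numpy arrays; the function name is illustrative, and the boundary handling (taking Dirichlet values from the parametric image just outside the artifact region) is a standard choice for this kind of system, not spelled out in the patent.

```python
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import cg

OFFSETS = ((-1, 0), (1, 0), (0, -1), (0, 1))  # 4-connected neighbors

def poisson_fuse(target, guide, mask):
    """Inside `mask`, solve Laplacian(result) = Laplacian(guide), with
    boundary values taken from `target`; outside `mask`, keep `target`."""
    h, w = mask.shape
    idx = -np.ones((h, w), dtype=int)
    ys, xs = np.nonzero(mask)
    idx[ys, xs] = np.arange(len(ys))
    n = len(ys)

    # System matrix: one row per unknown pixel (shared by all channels).
    A = lil_matrix((n, n))
    for k, (y, x) in enumerate(zip(ys, xs)):
        deg = 0
        for dy, dx in OFFSETS:
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                deg += 1
                if mask[ny, nx]:
                    A[k, idx[ny, nx]] = -1.0
        A[k, k] = float(deg)
    A = csr_matrix(A)

    result = target.astype(float).copy()
    for c in range(target.shape[2]):
        g = guide[..., c].astype(float)
        t = target[..., c].astype(float)
        b = np.zeros(n)
        for k, (y, x) in enumerate(zip(ys, xs)):
            for dy, dx in OFFSETS:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    b[k] += g[y, x] - g[ny, nx]   # Laplacian of guide
                    if not mask[ny, nx]:
                        b[k] += t[ny, nx]         # known boundary value
        sol, info = cg(A, b)                      # conjugate gradient
        result[ys, xs, c] = sol
    return result
```

With a constant guide (zero Laplacian) the solution is the harmonic interpolation of the boundary colors, which is why the fused result transitions seamlessly into the surrounding image.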
(5) Marking a region of the light field source image corresponding to a region of the parametric image where the artifact is likely to occur, and replacing color values in the region of the light field source image with color values at corresponding locations of the parametric image.
In this embodiment, the step of replacing, in the light field source image, the pixel color values corresponding to the regions where artifacts may appear on the parametric image may be specifically performed as follows: set the color values of pixels in the region of the parametric image where artifacts may occur to 1 and the color values of the other pixels to 0; the resulting image is called the suspicious mask image. Draw the mesh under the view angle corresponding to the parametric image using OpenGL, with vertex inputs including vertex positions and texture coordinates. In the vertex shader, set gl_Position to the vertex position after the view and projection transformations, and also output the vertex texture coordinates. In the fragment shader, sample the suspicious mask image with the texture coordinates; if the sampled value is greater than 0, output the color obtained by sampling the parametric image with the texture coordinates, otherwise output the color of the light field source image at the fragment. The resulting image is the modified light field source image.
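The mask-driven replacement above reduces to a per-pixel select once the parametric colors have been resampled into source-image space. A small numpy sketch (names are illustrative; the resampling itself is assumed done by the rendering pass described above):

```python
import numpy as np

def replace_masked(source_image, parametric_colors, suspicious_mask):
    """Where the suspicious mask is set, take the fused parametric
    colors; elsewhere keep the light field source image."""
    return np.where(suspicious_mask[..., None],
                    parametric_colors, source_image)

# 2x2 toy images: masked pixels take the parametric color (1.0).
src = np.zeros((2, 2, 3))
par = np.ones((2, 2, 3))
mask = np.array([[True, False], [False, True]])
out = replace_masked(src, par, mask)
```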
To address the defect that light field rendering readily produces artifacts when the scene geometry is inaccurate, the invention provides a light field rendering artifact removal method based on edge estimation.
In an embodiment of the present invention, there is also provided an electronic device, including a memory and a processor, the memory having stored thereon a computer program running on the processor, the processor executing the steps of the light field rendering artifact removal method as described above when executing the computer program.
In an embodiment of the present invention, there is also provided a computer readable storage medium having stored thereon a computer program which when run performs the steps of the light field rendering artifact removal method as described above.
Those of ordinary skill in the art will understand that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (10)
1. A light field rendering artifact removing method is characterized by comprising the following steps:
generating a corresponding parametric image according to the light field source image and the parametric grid;
marking artifact regions on the parametric image;
acquiring an artifact-free image corresponding to the parametric image;
fusing the parametric image and the artifact-free image thereof;
modifying the light field source image based on the fused parametric image.
2. The light field rendering artifact removal method as defined in claim 1, wherein the step of generating a corresponding parametric image from the light field source image and the parametric grid further comprises,
converting the fuzzy geometry of the scene into a mesh representation;
and parameterizing the grids, and calculating a corresponding parameterized image for each light field source image.
3. The light field rendering artifact removal method as claimed in claim 1, wherein said step of marking, for each parametric image, the regions of the parametric image where artifacts occur further comprises,
marking the region near a geometric self-occlusion edge that belongs to the background on the parametric image;
marking the geometric edge region on the parametric image.
4. The light field rendering artifact removal method of claim 1, wherein the step of obtaining artifact-free images corresponding to the parametric images further comprises,
selecting a plurality of reference parametric images for each parametric image;
and calculating the artifact-free image corresponding to the parametric image from the color values of the reference parametric images in their artifact-free regions.
5. The light field rendering artifact removal method according to claim 4, wherein the step of calculating the artifact-free images corresponding to the parametric images further comprises setting the weight of the area where the artifact occurs in each reference parametric image to 0 and the weight of the area where the artifact does not occur to 1, and calculating a weighted average image of the reference parametric images as the artifact-free images corresponding to the parametric images.
6. The light field rendering artifact removal method of claim 1, wherein the step of fusing the parametric image and the artifact-free image thereof further comprises,
calculating the divergence of the gradient of the corresponding artifact-free image of the parametric image in the artifact-appearing region;
constructing a linear system by taking the pixel color of the parameterized image in the area where the artifact occurs as an unknown number, and enabling the divergence of each pixel gradient to be equal to the divergence of the corresponding pixel gradient of the artifact-free image;
and solving a least square solution of the linear system, and modifying the color of the corresponding pixel on the parametric image according to the value of the least square solution.
7. The light field rendering artifact removal method as claimed in claim 1, wherein said step of modifying the light field source image based on the fused parametric image further comprises performing a mesh parametric inverse transformation on the fused parametric image to modify the light field source image.
8. The light field rendering artifact removal method as claimed in claim 7, wherein the step of performing the inverse mesh parameterization transformation on the fused parametric image to modify the light field source image further comprises, for each light field source image, marking the region where artifacts occur on the corresponding parametric image, and replacing the color values in that region of the light field source image with the color values at the corresponding positions of the parametric image.
9. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program for execution on the processor, the processor executing the steps of the light field rendering artifact removal method of any of claims 1 to 8 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when running, performs the steps of the light field rendering artifact removal method of any of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110176430.9A CN112819726A (en) | 2021-02-09 | 2021-02-09 | Light field rendering artifact removing method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112819726A true CN112819726A (en) | 2021-05-18 |
Family
ID=75864542
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110176430.9A Pending CN112819726A (en) | 2021-02-09 | 2021-02-09 | Light field rendering artifact removing method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112819726A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140079336A1 (en) * | 2012-09-14 | 2014-03-20 | Pelican Imaging Corporation | Systems and methods for correcting user identified artifacts in light field images |
CN106373105A (en) * | 2016-09-12 | 2017-02-01 | 广东顺德中山大学卡内基梅隆大学国际联合研究院 | Multi-exposure image deghosting integration method based on low-rank matrix recovery |
CN106651986A (en) * | 2016-01-21 | 2017-05-10 | 上海联影医疗科技有限公司 | Computed tomography artifact correction method |
CN109816742A (en) * | 2018-12-14 | 2019-05-28 | 中国人民解放军战略支援部队信息工程大学 | Cone-Beam CT geometry artifact minimizing technology based on full connection convolutional neural networks |
CN111815730A (en) * | 2020-07-15 | 2020-10-23 | 大连东软教育科技集团有限公司 | Method, device and storage medium for generating CT image containing motion artifact |
- 2021-02-09: Application CN202110176430.9A filed in China; published as CN112819726A; status: Pending
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113283543A (en) * | 2021-06-24 | 2021-08-20 | 北京优锘科技有限公司 | WebGL-based image projection fusion method, device, storage medium and equipment |
CN113436325A (en) * | 2021-07-30 | 2021-09-24 | 北京达佳互联信息技术有限公司 | Image processing method and device, electronic equipment and storage medium |
CN113470154A (en) * | 2021-07-30 | 2021-10-01 | 北京达佳互联信息技术有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN113436325B (en) * | 2021-07-30 | 2023-07-28 | 北京达佳互联信息技术有限公司 | Image processing method and device, electronic equipment and storage medium |
CN113470154B (en) * | 2021-07-30 | 2023-11-28 | 北京达佳互联信息技术有限公司 | Image processing method, device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112819726A (en) | Light field rendering artifact removing method | |
CN108648269B (en) | Method and system for singulating three-dimensional building models | |
US9984498B2 (en) | Sparse GPU voxelization for 3D surface reconstruction | |
CN104331918B (en) | Based on earth's surface occlusion culling and accelerated method outside depth map real-time rendering room | |
CN111243071A (en) | Texture rendering method, system, chip, device and medium for real-time three-dimensional human body reconstruction | |
JP5005090B2 (en) | Cutting simulation display device, cutting simulation display method, and cutting simulation display program | |
JP7390497B2 (en) | Image processing methods, apparatus, computer programs, and electronic devices | |
CN106296790A (en) | For computer graphic image being carried out the method and apparatus of shadowed and veining | |
US10217259B2 (en) | Method of and apparatus for graphics processing | |
CN112652046B (en) | Game picture generation method, device, equipment and storage medium | |
CN111524100A (en) | Defect image sample generation method and device and panel defect detection method | |
EP1634248A1 (en) | Adaptive image interpolation for volume rendering | |
CN113781621A (en) | Three-dimensional reconstruction processing method, device, equipment and storage medium | |
CN115428027A (en) | Neural opaque point cloud | |
CN108197555B (en) | Real-time face fusion method based on face tracking | |
CN113240790B (en) | Rail defect image generation method based on 3D model and point cloud processing | |
JP5295044B2 (en) | Method and program for extracting mask image and method and program for constructing voxel data | |
CN117501313A (en) | Hair rendering system based on deep neural network | |
CN111882498A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN112002019B (en) | Method for simulating character shadow based on MR mixed reality | |
CN116012449A (en) | Image rendering method and device based on depth information | |
CN113838188A (en) | Tree modeling method based on single image, tree modeling device and equipment | |
CN117541755B (en) | RGB-D three-dimensional reconstruction-based rigid object virtual-real shielding method | |
CN116452459B (en) | Shadow mask generation method, shadow removal method and device | |
CN111626912B (en) | Watermark removing method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
TA01 | Transfer of patent application right | Effective date of registration: 2023-07-05. Applicant after: AOBEN WEILAI (BEIJING) TECHNOLOGY Co.,Ltd., 1203 and 1205, 12th Floor, No. B6 Chaowai Street, Chaoyang District, Beijing, 100020. Applicants before: Jiaxing Fengniao Technology Co.,Ltd., Room 205, Building 3, 1156 Gaoqiao Avenue, Gaoqiao Street, Tongxiang City, Jiaxing City, Zhejiang Province, 314500; AOBEN WEILAI (BEIJING) TECHNOLOGY Co.,Ltd. |