CN115439587A - 2.5D rendering method based on object visual range - Google Patents

2.5D rendering method based on object visual range

Info

Publication number
CN115439587A
Authority
CN
China
Prior art keywords
camera
visual
area
rendering
range
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211388886.2A
Other languages
Chinese (zh)
Other versions
CN115439587B (en)
Inventor
王炜
谢超平
姚仕元
张琪浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Sobey Digital Technology Co Ltd
Original Assignee
Chengdu Sobey Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Sobey Digital Technology Co Ltd filed Critical Chengdu Sobey Digital Technology Co Ltd
Priority to CN202211388886.2A
Publication of CN115439587A
Application granted
Publication of CN115439587B
Active legal status
Anticipated expiration legal status

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/005 - General purpose rendering architectures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Abstract

The invention discloses a 2.5D rendering method based on an object visual range, belonging to the field of computer graphics and comprising the following steps: S1, calibrating the camera and determining its shooting range angle; S2, calibrating the valid area and the invalid area of each object; S3, defining the visible areas; S4, solving the common visible range of all visible areas and the minimum included angle; S5, designing the camera position to shoot the visible areas of the objects, adjusting the visible areas if necessary; S6, rendering and outputting the shooting view from the camera viewpoint. The method does not need to model the whole scene and all objects in the scene; it only needs to form a 2D image at the viewer's viewing angle, which reduces the large amount of information acquisition caused by multi-object rendering, such as complex light and shadow, reflection and multiple viewing angles, and greatly reduces the computational rendering resources.

Description

2.5D rendering method based on object visual range
Technical Field
The invention relates to the field of computer graphics, and in particular to a 2.5D rendering method based on an object visual range.
Background
Emerging video technologies such as free-viewpoint video, interactive video and immersive video have become hot topics. Scene object reconstruction generally models the whole scene and all objects in the scene for rendering, and because the scene objects are numerous and complex, light and shadow, reflection and the like require a large amount of resources. For stage performances or certain specific viewing angles, scene reconstruction does not need the complete, all-around information of the whole scene and all objects, so a flexible, highly compatible scene representation that reduces computational rendering resources is desired; this is the technical problem that those skilled in the art need to solve.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a 2.5D rendering method based on an object visual range which does not need to model the whole scene and all objects in the scene and only needs to form a 2D image at the viewer's viewing angle, thereby reducing the large amount of information acquisition caused by multi-object rendering, such as complex light and shadow, reflection and multiple viewing angles, and greatly reducing the computational rendering resources.
The purpose of the invention is achieved by the following scheme:
A 2.5D rendering method based on an object visual range, comprising the following steps:
S1, firstly calibrating the camera, calculating the intrinsic and extrinsic parameters of the camera, determining the focal length and imaging size of the camera model, and determining the shooting range angle φ of the camera;
S2, calibrating the valid area α1 and the invalid area α2 of each object according to the camera model in step S1; for one object, α1 is the visible part of the object, α2 is the unknown part of the object, and the imaging range of the camera does not include the invalid area α2;
S3, defining, under the same coordinate system, the set Ω of visible areas of each object 1, 2, 3, 4, ..., n: (U1, U2, U3, U4, ..., Un); within a visible area the virtual camera does not photograph the invalid area; Un is the visible area of object n, a user-defined three-dimensional space whose shape is not fixed, but which guarantees that a camera inside Un cannot photograph the invalid area α2 of object n;
S4, solving the common visible range of all visible areas U1, U2, U3, U4, ..., Un and the minimum included angle; the minimum included angle is the common effective viewing angle θ;
S5, when designing the scheme for the camera position to shoot the visible areas of the objects, ensuring that θ is greater than ∠SON while requiring ∠SON to be less than φ; if this condition is not met, the visible areas of the objects are adjusted; the area enclosed by the included angle θ is the minimum visible area, O is the starting point of the minimum visible area, S is the camera viewpoint, and N is the central axis of the minimum visible area;
S6, rendering and outputting the shooting view from the camera viewpoint S; each object does not need to be fully rendered in 3D, and according to the camera position and imaging size, the original five-dimensional expression of camera imaging (x, y, z, θ, φ) is reduced to three dimensions (x, y, z) through the visible interval.
Further, in step S1, camera calibration is performed in a modeling software environment.
Further, in step S2, the invalid area α2 cannot be inferred from the shape of the valid area α1 and belongs to an unknown region.
Further, in step S3, the same coordinate system is the world coordinate system of the virtual space.
Further, in step S4, camera positions can be arbitrarily arranged within the common visible range area without photographing the invalid area α2 of any object.
Further, in step S5, the visible area of the object is adjusted in the manner described in step S3.
Further, in step S6, the rendering output includes a rendering output of a single frame image.
Further, in step S6, as long as the camera is within the common visible interval, whatever it shoots is a valid area regardless of how the camera shoots, and the camera angle does not need to be considered.
Further, the modeling software environment is 3ds Max software.
Further, the modeling software environment is Maya software or UE software.
The beneficial effects of the invention include:
The method does not need to model the whole scene and all objects in the scene; it only needs to form a 2D image at the viewer's viewing angle, which reduces the large amount of information acquisition caused by multi-object rendering, such as complex light and shadow, reflection and multiple viewing angles, and greatly reduces the computational rendering resources.
The method can greatly reduce the calculation required by the five-dimensional expression of camera imaging (x, y, z, θ, φ); camera parameter confirmation can be completed only by calculating the minimum viewing angle. Meanwhile, the method supports various heterogeneous objects such as mesh modeling, voxels, point clouds and NeRF deep learning, without redesigning or remodeling the objects to adapt to the method; it supports the various heterogeneous objects existing in the prior art and has good compatibility and usability.
The method provides a way to confirm the shooting range of the camera, ensuring that all valid areas in all visible area sets can be photographed simultaneously; within the common visible interval, whatever the camera shoots is a valid area, and the camera angle does not need to be considered.
Based on the method, various heterogeneous objects can be designed, collected, reconstructed, represented and rendered by different methods, such as surface, voxel, point cloud and deep learning; under the pose and illumination requirements of a unified scene, each object is rendered by its own representation method and outputs a 2D image with channels.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic diagram of a focal length and a photographing angle of a camera according to an embodiment of the invention;
FIG. 2a is a first schematic diagram of an invalid area in an embodiment of the present invention;
FIG. 2b is a second schematic diagram of an invalid area in the embodiment of the present invention;
FIG. 3a is a first schematic view of a visible area in an embodiment of the present invention;
FIG. 3b is a second schematic view of a visible area in an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating an arrangement of camera positions within a visible area according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of designing the camera position for shooting the visible areas of objects in the embodiment of the invention.
Detailed Description
All features disclosed in all embodiments of the present specification, or all methods or process steps implicitly disclosed, may be combined and/or expanded, or substituted, in any way, except for mutually exclusive features and/or steps.
In seeking to solve the problems described in the background, the inventors found that current object modeling methods mainly include camera arrays, mesh modeling, point clouds, neural rendering (NeRF) and the like, each with advantages and disadvantages in different scenes. Scene object reconstruction generally models the entire scene and all objects in it using rendering techniques, which typically costs a lot of resources because the objects are numerous and complex and light, shadow, reflection and the like must be handled. In some scenes, such as stage performances or scene reconstruction at specific viewing angles, full information about the whole scene or all objects is usually not needed; the objects only need to be visible at the given specific shooting angles. The invention therefore discloses a flexible and highly compatible scene representation that can present the most realistic and vivid combined virtual-real picture while keeping the production cost low enough.
In the specific implementation, the invention defines a view range whose expressive content spans [2D, 3D], where 2D is a single view and 3D is an arbitrary view that can rotate freely through 360 degrees. 2.5D then lies in between, within the common effective viewing angle of all objects.
As shown in FIG. 1, in a modeling software environment such as 3ds Max, Maya or UE, the camera is first calibrated: the camera intrinsic and extrinsic parameters are calculated, the focal length and imaging size of the camera model are determined, and the camera shooting range angle φ is determined.
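For illustration only (the patent itself does not prescribe any formula here), a pinhole-model shooting range angle can be derived from the focal length and imaging size obtained during calibration. The following Python sketch, with assumed parameter names, shows one common way to do this:

```python
import math

def shooting_range_angle(focal_length: float, imaging_width: float) -> float:
    """Horizontal shooting range angle (phi) of a pinhole camera model.

    phi = 2 * atan(imaging_width / (2 * focal_length)); both arguments
    must use the same length unit (e.g. millimetres).
    """
    return 2.0 * math.atan(imaging_width / (2.0 * focal_length))

# Example: a 36 mm-wide imaging plane with a 50 mm focal length
phi = shooting_range_angle(focal_length=50.0, imaging_width=36.0)
print(f"phi = {math.degrees(phi):.1f} degrees")  # roughly 39.6 degrees
```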
As shown in FIG. 2a and FIG. 2b, the valid area α1 and the invalid area α2 of each object are calibrated according to the actual model design. For one object, α1 is the visible part of the object and the invalid area α2 is the unknown part of the object; the invalid area cannot be inferred from the shape of the valid area and belongs to an unknown region, and in the design of the invention the imaging range of the camera does not include the invalid area.
The set Ω of visible areas of each object 1, 2, 3, 4, ..., n is defined: (U1, U2, U3, U4, ..., Un), where a visible area U is a three-dimensional space, behind the object, formed by rays emitted from a viewpoint determined by the camera calibration. Un is the visible area of object n, a user-defined three-dimensional space whose shape is not fixed, but a camera inside Un cannot photograph the α2 area of object n; within a visible area the virtual camera does not photograph the invalid area. FIG. 3a is a first schematic view of a visible area in an embodiment of the present invention, and FIG. 3b is a second schematic view of the visible area in the embodiment of the present invention.
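The patent leaves the exact shape of each visible area Un open. As a sketch only, one simple way to verify that a candidate camera pose inside Un cannot photograph the invalid area α2 of object n is to sample α2 and test the samples against the camera's viewing cone; the cone approximation, the sampling, and all names below are illustrative assumptions rather than the claimed definition:

```python
import numpy as np

def sees_point(cam_pos, cam_axis, phi, point) -> bool:
    """True if `point` lies inside the viewing cone of half-angle phi/2
    around the unit optical axis `cam_axis` rooted at `cam_pos`."""
    v = np.asarray(point, float) - np.asarray(cam_pos, float)
    v = v / np.linalg.norm(v)
    return float(np.dot(v, cam_axis)) >= np.cos(phi / 2.0)

def pose_respects_visible_area(cam_pos, cam_axis, phi, invalid_samples) -> bool:
    """A camera pose placed inside a visible area Un is acceptable only if
    none of the sampled points of the invalid area alpha2 is visible."""
    return not any(sees_point(cam_pos, cam_axis, phi, p) for p in invalid_samples)

# Example: camera at the origin looking along +X with a 60-degree cone
ok = pose_respects_visible_area(
    cam_pos=[0.0, 0.0, 0.0],
    cam_axis=np.array([1.0, 0.0, 0.0]),
    phi=np.deg2rad(60.0),
    invalid_samples=[[2.0, 3.0, 0.0], [1.0, 2.5, 0.5]],
)
```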
As shown in FIG. 4, the common visible range and the minimum included angle of all visible areas U1, U2, U3, U4, ..., Un are obtained; camera positions within the common visible range can be arranged arbitrarily without photographing the invalid area α2 of any object. In FIG. 4, the minimum included angle is the common effective viewing angle θ of the method of the invention, the area enclosed by the included angle θ is the minimum visible area, O is the starting point of the minimum visible area, S is the camera viewpoint, N is the central axis of the minimum visible area, and L is the central axis of the camera shooting range.
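How the common visible range and its minimum included angle are computed depends on how the visible areas are represented, which the patent does not fix. As a sketch, under the assumption that each Un is reduced to an angular interval measured around the starting point O, the common effective viewing angle θ is simply the width of the intersection of those intervals:

```python
from typing import List, Tuple

def common_effective_angle(intervals: List[Tuple[float, float]]) -> Tuple[float, float, float]:
    """Intersect angular intervals (start, end), in radians, all measured
    around the same starting point O, and return (start, end, theta).

    theta is the common effective viewing angle; a ValueError means the
    visible areas share no common range and must be adjusted (step S3)."""
    lo = max(start for start, _ in intervals)
    hi = min(end for _, end in intervals)
    if hi <= lo:
        raise ValueError("no common visible range; adjust the visible areas")
    return lo, hi, hi - lo

# U1..U4 as assumed angular intervals (radians); theta = 0.9 here
start, end, theta = common_effective_angle(
    [(0.20, 1.40), (0.30, 1.20), (0.10, 1.30), (0.25, 1.35)]
)
```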
As shown in FIG. 5, when designing the scheme for the camera position to shoot the visible areas of the objects, the method of the invention requires that the minimum included angle θ is greater than ∠SON and that ∠SON is less than φ; if this condition is not met, the visible areas of the objects need to be adjusted (as defined in step S3). In FIG. 5, the angle θ is greater than ∠SON and ∠SON is less than φ, so within the common valid area the camera can shoot the valid areas of U1, U2, U3 and U4. As also shown in FIG. 5, if the camera is at S', then θ is less than ∠S'ON, or ∠S'ON is greater than φ, which may cause the camera to photograph an invalid area.
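As a sketch of the step S5 condition only, ∠SON can be read as the angle at O between the direction towards the camera viewpoint S and the central axis N of the minimum visible area; this geometric reading and the names below are assumptions, since FIG. 5 is not reproduced here:

```python
import numpy as np

def angle_son(S, O, axis_n) -> float:
    """Angle at O between the ray O->S (towards the camera viewpoint) and the
    central axis N of the minimum visible area, both taken as directions."""
    v = np.asarray(S, float) - np.asarray(O, float)
    v = v / np.linalg.norm(v)
    n = np.asarray(axis_n, float) / np.linalg.norm(axis_n)
    return float(np.arccos(np.clip(np.dot(v, n), -1.0, 1.0)))

def placement_ok(theta: float, phi: float, S, O, axis_n) -> bool:
    """Step S5: the camera position is acceptable when theta > angle SON
    and angle SON < phi; otherwise the visible areas must be adjusted."""
    son = angle_son(S, O, axis_n)
    return theta > son and son < phi

# Example with assumed values: theta = 50 degrees, phi = 40 degrees
ok = placement_ok(np.deg2rad(50.0), np.deg2rad(40.0),
                  S=[3.0, 1.0, 0.0], O=[0.0, 0.0, 0.0], axis_n=[1.0, 0.0, 0.0])
```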
The shooting view from the camera viewpoint S is then rendered and output as a single-frame image. Each object does not need to be fully rendered in 3D; according to the camera position and imaging size, the original five-dimensional expression of camera imaging (x, y, z, θ, φ) is reduced to three dimensions (x, y, z) through the visible interval. Within the common visible interval that satisfies the method of the invention, whatever the camera shoots is a valid area, and the camera angle does not need to be considered.
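The patent only states that each object outputs a 2D image with channels from viewpoint S and that these form the final single frame; the back-to-front "over" compositing below is a standard technique assumed purely to illustrate how such per-object layers could be merged, and is not taken from the patent text:

```python
import numpy as np

def composite_layers(layers):
    """Merge per-object RGBA layers (arrays of shape (H, W, 4), values in [0, 1])
    rendered from the same viewpoint S, ordered from farthest to nearest."""
    out = np.zeros_like(np.asarray(layers[0], dtype=float))
    for rgba in layers:
        rgba = np.asarray(rgba, dtype=float)
        a = rgba[..., 3:4]
        out[..., :3] = rgba[..., :3] * a + out[..., :3] * (1.0 - a)
        out[..., 3:4] = a + out[..., 3:4] * (1.0 - a)
    return out

# Example: two assumed 2x2 layers, far layer first
far = np.zeros((2, 2, 4)); far[..., 0] = 1.0; far[..., 3] = 1.0    # opaque red
near = np.zeros((2, 2, 4)); near[..., 2] = 1.0; near[..., 3] = 0.5  # half-transparent blue
frame = composite_layers([far, near])
```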
Example 1
A 2.5D rendering method based on an object visual range, comprising the following steps:
S1, firstly calibrating the camera, calculating the intrinsic and extrinsic parameters of the camera, determining the focal length and imaging size of the camera model, and determining the shooting range angle φ of the camera;
S2, calibrating the valid area α1 and the invalid area α2 of each object according to the camera model in step S1; for one object, α1 is the visible part of the object, α2 is the unknown part of the object, and the imaging range of the camera does not include the invalid area α2;
S3, defining, under the same coordinate system, the set Ω of visible areas of each object 1, 2, 3, 4, ..., n: (U1, U2, U3, U4, ..., Un); within a visible area the virtual camera does not photograph the invalid area; Un is the visible area of object n, a user-defined three-dimensional space whose shape is not fixed, but which guarantees that a camera inside Un cannot photograph the invalid area α2 of object n;
S4, solving the common visible range of all visible areas U1, U2, U3, U4, ..., Un and the minimum included angle; the minimum included angle is the common effective viewing angle θ;
S5, when designing the scheme for the camera position to shoot the visible areas of the objects, ensuring that θ is greater than ∠SON while requiring ∠SON to be less than φ; if this condition is not met, the visible areas of the objects are adjusted; the area enclosed by the included angle θ is the minimum visible area, O is the starting point of the minimum visible area, S is the camera viewpoint, and N is the central axis of the minimum visible area;
S6, rendering and outputting the shooting view from the camera viewpoint S; each object does not need to be fully rendered in 3D, and according to the camera position and imaging size, the original five-dimensional expression of camera imaging (x, y, z, θ, φ) is reduced to three dimensions (x, y, z) through the visible interval.
Example 2
On the basis of embodiment 1, in step S1, camera calibration is performed in a modeling software environment.
Example 3
On the basis of embodiment 1, in step S2, the invalid area α2 cannot be inferred from the shape of the valid area α1 and belongs to an unknown region.
Example 4
On the basis of embodiment 1, in step S3, the same coordinate system is the world coordinate system of the virtual space.
Example 5
On the basis of embodiment 1, in step S4, camera positions can be arbitrarily arranged within the common visible range area without photographing the invalid area α2 of any object.
Example 6
On the basis of embodiment 1, in step S5, the visible area of the object is adjusted in the manner described in step S3.
Example 7
On the basis of embodiment 1, in step S6, the rendering output includes a rendering output of a single frame image.
Example 8
On the basis of embodiment 1, in step S6, as long as the camera is within the common visible interval, whatever it shoots is a valid area regardless of how the camera shoots, and the camera angle does not need to be considered.
Example 9
On the basis of embodiment 2, the modeling software environment is 3ds Max software.
Example 10
On the basis of embodiment 2, the modeling software environment is Maya software or UE software.
The units described in the embodiments of the present invention may be implemented by software or by hardware, and the described units may also be disposed in a processor. The names of the units do not in any way limit the units themselves.
According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions, the computer instructions being stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method provided in the various alternative implementations described above.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments; or may be separate and not incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method described in the above embodiments.
The parts not involved in the present invention are the same as or can be implemented using the prior art.
The above-described embodiment is only one embodiment of the present invention, and it will be apparent to those skilled in the art that various modifications and variations can be easily made based on the application and principle of the present invention disclosed in the present application, and the present invention is not limited to the method described in the above-described embodiment of the present invention, so that the above-described embodiment is only preferred, and not restrictive.
In addition to the foregoing examples, those skilled in the art, having the benefit of this disclosure, may derive other embodiments from the teachings of the foregoing disclosure or from modifications and variations utilizing knowledge or skill of the related art, which may be interchanged or substituted for features of various embodiments, and such modifications and variations may be made by those skilled in the art without departing from the spirit and scope of the present invention as set forth in the following claims.

Claims (10)

1. A 2.5D rendering method based on an object visual range, characterized by comprising the following steps:
S1, firstly calibrating the camera, calculating the intrinsic and extrinsic parameters of the camera, determining the focal length and imaging size of the camera model, and determining the shooting range angle φ of the camera;
S2, calibrating the valid area α1 and the invalid area α2 of each object according to the camera model in step S1; for one object, α1 is the visible part of the object, α2 is the unknown part of the object, and the imaging range of the camera does not include the invalid area α2;
S3, defining, under the same coordinate system, the set Ω of visible areas of each object 1, 2, 3, 4, ..., n: (U1, U2, U3, U4, ..., Un); within a visible area the virtual camera does not photograph the invalid area; Un is the visible area of object n, a user-defined three-dimensional space whose shape is not fixed, but which guarantees that a camera inside Un cannot photograph the invalid area α2 of object n;
S4, solving the common visible range of all visible areas U1, U2, U3, U4, ..., Un and the minimum included angle; the minimum included angle is the common effective viewing angle θ;
S5, when designing the scheme for the camera position to shoot the visible areas of the objects, ensuring that θ is greater than ∠SON while requiring ∠SON to be less than φ; if this condition is not met, the visible areas of the objects are adjusted; the area enclosed by the included angle θ is the minimum visible area, O is the starting point of the minimum visible area, S is the camera viewpoint, and N is the central axis of the minimum visible area;
S6, rendering and outputting the shooting view from the camera viewpoint S; each object does not need to be fully rendered in 3D, and according to the camera position and imaging size, the original five-dimensional expression of camera imaging (x, y, z, θ, φ) is reduced to three dimensions (x, y, z) through the visible interval.
2. The object visual range-based 2.5D rendering method according to claim 1, wherein in step S1, camera calibration is performed in a modeling software environment.
3. The object visual range-based 2.5D rendering method according to claim 1, wherein in step S2, the invalid area α2 cannot be inferred from the shape of the valid area α1 and belongs to an unknown region.
4. The object visual range-based 2.5D rendering method according to claim 1, wherein in step S3, the same coordinate system is the world coordinate system of the virtual space.
5. The object visual range-based 2.5D rendering method according to claim 1, wherein in step S4, camera positions can be arbitrarily arranged within the common visible range area without photographing the invalid area α2 of any object.
6. The object visual range-based 2.5D rendering method according to claim 1, wherein in step S5, the visible area of the object is adjusted in the manner described in step S3.
7. The object visual range-based 2.5D rendering method according to claim 1, wherein in step S6, the rendering output includes a rendering output of a single frame image.
8. The object visual range-based 2.5D rendering method according to claim 1, wherein in step S6, as long as the camera is within the common visible interval, whatever it shoots is a valid area regardless of how the camera shoots, and the camera angle does not need to be considered.
9. The object visual range-based 2.5D rendering method according to claim 2, wherein the modeling software environment is 3ds Max software.
10. The object visual range-based 2.5D rendering method according to claim 2, wherein the modeling software environment is Maya software or UE software.
CN202211388886.2A 2022-11-08 2022-11-08 2.5D rendering method based on object visual range Active CN115439587B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211388886.2A CN115439587B (en) 2022-11-08 2022-11-08 2.5D rendering method based on object visual range

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211388886.2A CN115439587B (en) 2022-11-08 2022-11-08 2.5D rendering method based on object visual range

Publications (2)

Publication Number Publication Date
CN115439587A true CN115439587A (en) 2022-12-06
CN115439587B CN115439587B (en) 2023-02-14

Family

ID=84253040

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211388886.2A Active CN115439587B (en) 2022-11-08 2022-11-08 2.5D rendering method based on object visual range

Country Status (1)

Country Link
CN (1) CN115439587B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10339716B1 (en) * 2016-09-19 2019-07-02 Occipital, Inc. System and method for dense, large scale scene reconstruction
CN111726640A (en) * 2020-07-03 2020-09-29 中图云创智能科技(北京)有限公司 Live broadcast method with 0-360 degree dynamic viewing angle
CN113808247A (en) * 2021-11-19 2021-12-17 武汉方拓数字科技有限公司 Method and system for rendering and optimizing three-dimensional model of massive three-dimensional scene
CN114529647A (en) * 2022-02-18 2022-05-24 北京市商汤科技开发有限公司 Object rendering method, device and apparatus, electronic device and storage medium
CN115209172A (en) * 2022-07-13 2022-10-18 成都索贝数码科技股份有限公司 XR-based remote interactive performance method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HOSHI, A. et al.: "Simulating of mid-air images using combination of physically based rendering and images processing", Optical Review *
ZHU, Guangyang: "Fast viewshed analysis technology for complex three-dimensional scenes", China Master's Theses Full-text Database (Information Science and Technology) *

Also Published As

Publication number Publication date
CN115439587B (en) 2023-02-14

Similar Documents

Publication Publication Date Title
CN110111262B (en) Projector projection distortion correction method and device and projector
CN109658365B (en) Image processing method, device, system and storage medium
US11354840B2 (en) Three dimensional acquisition and rendering
CN107993276B (en) Panoramic image generation method and device
CN109801374B (en) Method, medium, and system for reconstructing three-dimensional model through multi-angle image set
CN111243071A (en) Texture rendering method, system, chip, device and medium for real-time three-dimensional human body reconstruction
WO2019049421A1 (en) Calibration device, calibration system, and calibration method
CN103942754B (en) Panoramic picture complementing method and device
JP2006053694A (en) Space simulator, space simulation method, space simulation program and recording medium
JP2016537901A (en) Light field processing method
WO2019219014A1 (en) Three-dimensional geometry and eigencomponent reconstruction method and device based on light and shadow optimization
CN110191326A (en) A kind of optical projection system resolution extension method, apparatus and optical projection system
JPH11175762A (en) Light environment measuring instrument and device and method for shading virtual image using same
CN115439616B (en) Heterogeneous object characterization method based on multi-object image alpha superposition
US11810248B2 (en) Method for processing image data to provide for soft shadow effects using shadow depth information
CN111047709A (en) Binocular vision naked eye 3D image generation method
WO2023207452A1 (en) Virtual reality-based video generation method and apparatus, device, and medium
TW201707437A (en) Image processing device and image processing method
CN113643414A (en) Three-dimensional image generation method and device, electronic equipment and storage medium
CN111896032A (en) Calibration system and method for monocular speckle projector position
CN114004935A (en) Method and device for three-dimensional modeling through three-dimensional modeling system
WO2019042028A1 (en) All-around spherical light field rendering method
CN115439587B (en) 2.5D rendering method based on object visual range
CN116801115A (en) Sparse array camera deployment method
JP3387856B2 (en) Image processing method, image processing device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant