CN113382227A - Naked eye 3D panoramic video rendering device and method based on smart phone - Google Patents
Info
- Publication number
- CN113382227A (application number CN202110617804.6A)
- Authority
- CN
- China
- Prior art keywords
- field
- depth
- picture
- mobile phone
- viewer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/302—Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/7243—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
- H04M1/72439—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72448—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
- H04M1/72454—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/327—Calibration thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/366—Image reproducers using viewer tracking
Abstract
The invention discloses a naked eye 3D panoramic video rendering device based on a smartphone, comprising an eye position capturing module, a depth-of-field picture stripping processing module, a depth-of-field and gaze point prediction module, and a picture correcting and assembling module. The eye position capturing module acquires the spatial position of the current viewer's eyes relative to the mobile phone screen. The depth-of-field picture stripping processing module processes the shot video pictures with different depths of field, turning a finite number of depth-of-field video pictures into an unlimited number of video pictures with different depths of field. The depth-of-field and gaze point prediction module predicts the orientation of the eyes and the position of the visual gaze point while the current viewer watches the panoramic 3D video on a smartphone, and pre-assembles and pre-renders the picture with the prediction result. The picture correcting and assembling module calculates the picture depth layer and the picture rendering area to be presented according to the viewing distance of the viewer and the attitude angle of the mobile phone, and assembles them into the final picture.
Description
Technical Field
The invention belongs to the technical fields of naked eye 3D display and panoramic video rendering, and particularly relates to a naked eye 3D panoramic video rendering device and method based on a smartphone.
Background
At present, naked eye 3D technology is mainly represented by parallax barrier, lenticular lens, and distributed optical matrix technologies, but all of these require a dedicated 3D display device or an external film attached to the screen to achieve a naked eye 3D video effect. Smartphones are used more and more in modern society, but 3D smartphones have not yet reached the market, so the user experience and the practical high-end needs of consumers cannot be met; moreover, most existing 3D technology is applied in commercial scenes such as advertising and is unsuited to the daily entertainment of ordinary users.
Disclosure of Invention
To solve the above technical problems in the prior art, the invention provides a naked eye 3D panoramic video rendering device based on a smartphone, comprising an eye position capturing module, a depth-of-field picture stripping processing module, a depth-of-field and gaze point prediction module, and a picture correcting and assembling module, wherein:
the eye position capturing module is used for acquiring the spatial position of the eyes of the current viewer relative to the mobile phone screen;
the depth-of-field picture stripping processing module is used for processing the shot video pictures with different depths of field, turning a finite number of depth-of-field video pictures into an unlimited number of video pictures with different depths of field;
the depth-of-field and gaze point prediction module is used for predicting the orientation of the eyes and the position of the visual gaze point while the current viewer watches the panoramic 3D video on a smartphone, and for pre-assembling and pre-rendering the 3D video picture with the prediction result;
and the picture correcting and assembling module is used for calculating the picture depth layer and the picture rendering area to be presented according to the viewing distance of the viewer and the attitude angle of the mobile phone, and for assembling them into the final picture for presentation.
Further, the eye position capturing module captures the spatial position of the viewer's eyes through the front camera of the mobile phone to obtain the straight-line distance between the viewer's eyes and the center of the mobile phone screen.
Further, a three-dimensional coordinate system is established with the front camera of the mobile phone as the coordinate origin and the mobile phone screen as the XOY plane, and the spatial orientation of the viewer's eyes, i.e. the coordinates (x, y, z), is captured with the front camera; with the coordinates of the center of the mobile phone screen known to be (Δx, Δy, 0), the straight-line distance d between the viewer's eyes and the center of the screen is:

d = sqrt((x − Δx)² + (y − Δy)² + z²)
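Here d is the Euclidean distance between the eye coordinates (x, y, z) and the screen center (Δx, Δy, 0). A minimal sketch in Python (function name and sample values are illustrative, not from the patent):

```python
import math

def eye_screen_distance(eye, screen_center):
    """Straight-line distance between the viewer's eyes at (x, y, z) and the
    screen center at (dx, dy, 0) in the camera-origin coordinate system."""
    x, y, z = eye
    dx, dy = screen_center
    return math.sqrt((x - dx) ** 2 + (y - dy) ** 2 + z ** 2)

# Eyes 30 cm in front of the camera, slightly off-center; screen center at (0, -7).
d = eye_screen_distance((2.0, -5.0, 30.0), (0.0, -7.0))
```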
further, the depth of field image stripping processing module acquires the distance between eyes of a viewer and a mobile phone screen as the depth of field through a camera, and different layers of images are sequentially superposed according to different depths of field watched by the viewer.
Further, the depth-of-field picture stripping processing module uses a bilinear interpolation algorithm to fill in values for picture pixels at different depths of field, finally obtaining video pictures with different depths of field.
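The patent names bilinear interpolation but gives no implementation details; a generic sketch of filling one pixel value in a grayscale layer might look like this (the function name and the list-of-rows image representation are assumptions):

```python
def bilerp(img, x, y):
    """Bilinearly interpolate a grayscale image (list of rows) at the
    fractional position (x, y), clamping at the right/bottom edge; this is
    the kind of value filling used to synthesize intermediate depth layers."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bottom = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bottom * fy
```

In practice this would run over every pixel of each synthesized layer, or be delegated to a GPU texture sampler.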
Further, the depth-of-field and gaze point prediction module takes the current position of the viewer's eyes and the orientation of the mobile phone screen as its basic data and predicts the depth of field and the gaze point with a prediction algorithm.
Further, the prediction algorithm samples data with the front camera and the gyroscope of the mobile phone, acquiring the current spatial orientation (x, y, z) of the viewer's eyes and the gaze point in the virtual reality video, i.e. the pitch angle pitch and the yaw angle yaw; the distance d between the eyes and the mobile phone screen is added to the fixed depth-of-field value h set in the video player to obtain the depth-of-field value l watched by the viewer. With the prediction period set to a ms and the sampling period to b ms, n groups of sampling results l1, l2, l3, …, ln are obtained within one prediction period; the depth of field in a single prediction period is then the average l̄ = (l1 + l2 + … + ln)/n, and the gaze point in a single prediction period is likewise the average of the sampled pitch and yaw angles.
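The prediction formulas appear only as images in the source, so averaging the samples collected in one period is an interpretation; under that reading, a sketch in Python (the (depth, pitch, yaw) sample representation is an assumption):

```python
def predict_period(samples):
    """Collapse the n (depth, pitch, yaw) samples gathered in one prediction
    period (a ms long, sampled every b ms) into a single predicted depth of
    field and gaze point by simple averaging."""
    n = len(samples)
    depth = sum(s[0] for s in samples) / n
    pitch = sum(s[1] for s in samples) / n
    yaw = sum(s[2] for s in samples) / n
    return depth, (pitch, yaw)
```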
further, the picture is corrected and assembled. The method comprises the steps of obtaining the gesture data of the mobile phone, converting the gesture data of the mobile phone into the direction angles, namely the pitch angle pitch and the yaw angle yaw, watched by a viewer to obtain the fixation point of the viewer, obtaining the final watching depth of field value by using the distance value obtained by calculation in the eye position capture module, and carrying out parallax processing and assembling on a presented picture according to the depth of field value and the fixation point.
Further, according to the set size of the viewing picture area, a depth-of-field picture stripped in advance whose depth-of-field value equals the viewing depth-of-field value is taken, according to the current position of the user's gaze point, to render the video picture.
The invention also provides a naked eye 3D panoramic video rendering method based on the smart phone, which comprises the following steps:
the method comprises the following steps of firstly, acquiring the spatial position of eyes of a current viewer relative to a mobile phone screen, wherein the spatial position comprises the viewing distance of the viewer and the posture angle of the mobile phone;
processing the shot video pictures with different depths of field, and processing the limited number of depth of field video pictures to form an unlimited number of video pictures with different depths of field;
predicting the direction of eyes and the position of a visual fixation point when a current viewer watches the panoramic 3D video by using a smart phone, and pre-assembling and pre-rendering a 3D video picture by using a prediction result;
and step four, calculating the deep layer of the picture scene and the picture rendering area to be presented according to the viewing distance of the viewer and the posture angle of the mobile phone, and assembling into a final picture for presentation.
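As a rough sketch, the four steps can be wired together per frame like this (all callables are placeholders standing in for the four modules, not APIs defined by the patent):

```python
def render_frame(capture_eyes, strip_layers, predict_gaze, assemble):
    """One pass of the four-step method: capture the viewer's position,
    prepare depth layers, predict the gaze for pre-rendering, then assemble
    the final picture."""
    eye_pos, phone_angle = capture_eyes()           # step 1
    layers = strip_layers()                         # step 2
    predicted = predict_gaze(eye_pos, phone_angle)  # step 3
    return assemble(layers, eye_pos, phone_angle, predicted)  # step 4
```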
Based on separately rendering the identified parallax information of the left and right eyes, the method strips the video picture into layered pictures of different depths of field, and newly adds, via the front camera and the attitude sensors of the smartphone, the capture of the viewer's viewing direction and posture, picture angle correction, and prediction of the subsequent viewing direction and angle. The viewer can thus watch the 3D panoramic video on a smartphone from a free angle and distance; the limitations of 3D video in viewing angle and viewing distance are effectively reduced, the rendering speed of the 3D video picture is raised, and the user experience is improved.
Based on the viewer's smartphone, the invention adopts depth-of-field layered picture processing and the capture of the viewer's eye position and gaze point, and through calculation applies 3D parallax processing and assembly to pictures at different depth-of-field levels, realizing a naked eye 3D video effect on the smartphone screen.
The method is also suitable for predicting the gaze point in a virtual reality scene, places low demands on hardware computing power, and, by pre-rendering the picture from the calculated result, improves the loading speed of the video picture and the user experience.
Drawings
FIG. 1 is a schematic diagram of the coordinates of the viewer and the center of the mobile phone screen;
FIG. 2 is a schematic diagram of layered picture stripping with different depths of field for a video picture, for a viewer at different gaze points and distances;
FIG. 3 is a schematic view of the gaze point's pitch angle pitch and yaw angle yaw.
Detailed Description
The invention will be further explained with reference to the drawings.
The invention discloses a naked eye 3D panoramic video rendering device based on a smartphone, comprising an eye position capturing module, a depth-of-field picture stripping processing module, a depth-of-field and gaze point prediction module, and a picture correcting and assembling module.
The eye position capturing module is used for acquiring the spatial position of the current viewer's eyes relative to the mobile phone screen. First, the spatial orientation of the viewer's eyes, captured by the front camera of the mobile phone, yields the straight-line distance between the eyes and the center of the screen.
As shown in fig. 1, a three-dimensional coordinate system is established with the front camera of the mobile phone as the coordinate origin and the mobile phone screen as the XOY plane, and the spatial orientation of the viewer's eyes, i.e. the coordinates (x, y, z), is captured with the front camera; with the coordinates of the center of the mobile phone screen known to be (Δx, Δy, 0), the straight-line distance d between the viewer's eyes and the center of the screen is:

d = sqrt((x − Δx)² + (y − Δy)² + z²)
the depth of field image stripping processing module is used for processing shot different depth of field video images, processing a limited number of depth of field video images through a bilinear interpolation algorithm to form an unlimited number of different depth of field video images, acquiring the distance between eyes of a viewer and a mobile phone screen as the depth of field through a camera, and sequentially overlapping different layers of images according to different depths of field watched by the viewer so as to achieve different effects of different images at the viewing position.
The bilinear interpolation algorithm fills in values for picture pixels at different depths of field, finally yielding video pictures with different depths of field.
As shown in fig. 2, the depth-of-field and gaze point prediction module is configured to predict the orientation of the eyes and the position of the visual gaze point while the current viewer watches the panoramic 3D video on a smartphone. Taking the current position of the viewer's eyes and the orientation of the mobile phone screen as the basic data, it predicts the depth of field and the gaze point with a prediction algorithm, and pre-assembles and pre-renders the 3D video picture with the prediction result.
The prediction algorithm samples data with the front camera and the gyroscope of the mobile phone, acquiring the current spatial orientation (x, y, z) of the viewer's eyes and the gaze point (pitch angle pitch and yaw angle yaw) in the virtual reality video, as shown in fig. 3; the distance d between the eyes and the mobile phone screen is added to the fixed depth-of-field value h set in the video player to obtain the depth-of-field value l watched by the viewer. With the prediction period set to a ms and the sampling period to b ms, n groups of sampling results l1, l2, l3, …, ln are obtained within one prediction period; the depth of field in a single prediction period is then the average l̄ = (l1 + l2 + … + ln)/n, and the gaze point in a single prediction period is likewise the average of the sampled pitch and yaw angles.
the picture correcting and assembling module is a module used for calculating and assembling picture scene deep layers and picture rendering areas to be presented according to the viewing distance of a viewer and the posture angle of the mobile phone into a final picture presentation. The method comprises the steps of obtaining a fixation point of a viewer by converting the acquisition of the posture data of the mobile phone into a pointing angle watched by the viewer, namely (a pitch angle pitch, a yaw angle yaw), obtaining a final watching depth of field value by using a distance value calculated in an eye position capture module, and carrying out parallax processing and assembling on a presented picture according to the depth of field value and the fixation point.
According to the set size of the viewing picture area, a depth-of-field picture stripped in advance whose depth-of-field value equals the viewing depth-of-field value is taken, according to the current position of the user's gaze point, to render the video picture. As the viewer's eyes watch in different directions the depth of field differs, and the visual effect of the picture seen differs as well, producing the 3D viewing effect.
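Choosing which pre-stripped layer to render could be as simple as a nearest-depth lookup; a sketch (the dictionary-of-layers representation is an assumption, not the patent's data structure):

```python
def pick_layer(layers, viewing_depth):
    """Given pre-stripped layers keyed by their depth-of-field value, return
    the key of the layer closest to the viewer's current viewing depth."""
    return min(layers, key=lambda depth: abs(depth - viewing_depth))
```

The selected layer, offset per eye for parallax, then feeds the picture correcting and assembling step.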
The invention discloses a naked eye 3D panoramic video rendering method based on a smart phone, which comprises the following steps:
Step 1, acquiring the spatial position of the current viewer's eyes relative to the mobile phone screen, including the viewing distance of the viewer and the attitude angle of the mobile phone;
Step 2, processing the shot video pictures with different depths of field, turning a finite number of depth-of-field video pictures into an unlimited number of video pictures with different depths of field;
Step 3, predicting the orientation of the eyes and the position of the visual gaze point while the current viewer watches the panoramic 3D video on a smartphone, and pre-assembling and pre-rendering the 3D video picture with the prediction result;
Step 4, calculating the picture depth layer and the picture rendering area to be presented according to the viewing distance of the viewer and the attitude angle of the mobile phone, and assembling them into the final picture for presentation.
Claims (10)
1. A naked eye 3D panoramic video rendering device based on a smartphone, comprising an eye position capturing module, a depth-of-field picture stripping processing module, a depth-of-field and gaze point prediction module and a picture correcting and assembling module, characterized in that:
the eye position capturing module is used for acquiring the spatial position of the current viewer's eyes relative to the mobile phone screen;
the depth-of-field picture stripping processing module is used for processing the shot video pictures with different depths of field, turning a finite number of depth-of-field video pictures into an unlimited number of video pictures with different depths of field;
the depth-of-field and gaze point prediction module is used for predicting the orientation of the eyes and the position of the visual gaze point while the current viewer watches the panoramic 3D video on a smartphone, and for pre-assembling and pre-rendering the 3D video picture with the prediction result;
and the picture correcting and assembling module is used for calculating the picture depth layer and the picture rendering area to be presented according to the viewing distance of the viewer and the attitude angle of the mobile phone, and for assembling them into the final picture for presentation.
2. The smartphone-based naked-eye 3D panoramic video rendering apparatus of claim 1, wherein:
all the eye position capturing modules capture the spatial positions of the eyes of the viewer through the front camera of the mobile phone to obtain the linear distance between the eyes of the viewer and the center of the screen of the mobile phone.
3. The smartphone-based naked eye 3D panoramic video rendering apparatus of claim 2, wherein:
a three-dimensional coordinate system is established with the front camera of the mobile phone as the coordinate origin and the mobile phone screen as the XOY plane, and the spatial orientation of the viewer's eyes, i.e. the coordinates (x, y, z), is captured with the front camera; with the coordinates of the center of the mobile phone screen known to be (Δx, Δy, 0), the straight-line distance d between the viewer's eyes and the center of the screen is obtained as:

d = sqrt((x − Δx)² + (y − Δy)² + z²)
4. the smartphone-based naked-eye 3D panoramic video rendering apparatus of claim 1, wherein:
the depth of field image stripping processing module acquires the distance between eyes of a viewer and a mobile phone screen as depth of field through a camera, and different layers of images are sequentially superposed according to different depths of field watched by the viewer.
5. The smartphone-based naked eye 3D panoramic video rendering apparatus of claim 4, wherein:
and the depth-of-field picture stripping processing module uses the bilinear interpolation algorithm to perform value filling drawing on picture pixel points with different depths of field, and finally obtains video pictures with different depths of field.
6. The smartphone-based naked-eye 3D panoramic video rendering apparatus of claim 1, wherein:
the depth of field and gaze point prediction module uses the current position of the eyes of the viewer and the mobile phone screen orientation as basic data bases and uses a prediction algorithm to predict the depth of field and gaze point.
7. The smartphone-based naked eye 3D panoramic video rendering apparatus of claim 6, wherein:
the prediction algorithm uses a mobile phone front camera and a mobile phone gyroscope to sample data, and acquires the space orientation (x, y, z) of the current eyes of a viewer and the fixation point (pitch angle pitch and yaw angle yaw) in a virtual reality video, and the distance d between the eyes and a mobile phone screen is added with a fixed depth of field value h set in a video player to obtain a depth of field value l watched by the viewer; setting a prediction period as a ms and a sampling period as b ms, and obtaining n groups of sampling results l in the prediction period1、l2、l3…lnCan obtain
the point of regard in a single prediction cycle is:
8. the smartphone-based naked-eye 3D panoramic video rendering apparatus of claim 1, wherein:
and the picture correction assembly module. The method comprises the steps of obtaining the gesture data of the mobile phone, converting the gesture data of the mobile phone into the direction angles, namely the pitch angle pitch and the yaw angle yaw, watched by a viewer to obtain the fixation point of the viewer, obtaining the final watching depth of field value by using the distance value obtained by calculation in the eye position capture module, and carrying out parallax processing and assembling on a presented picture according to the depth of field value and the fixation point.
9. The smartphone-based naked-eye 3D panoramic video rendering apparatus of claim 8, wherein:
and according to the set size of the watching picture area, taking a depth-of-field picture with a depth-of-field value which is stripped in advance as the watching depth-of-field value according to the position of the current user watching point to perform video picture rendering.
10. A naked eye 3D panoramic video rendering method based on a smartphone, comprising the following steps:
Step 1, acquiring the spatial position of the current viewer's eyes relative to the mobile phone screen, including the viewing distance of the viewer and the attitude angle of the mobile phone;
Step 2, processing the shot video pictures with different depths of field, turning a finite number of depth-of-field video pictures into an unlimited number of video pictures with different depths of field;
Step 3, predicting the orientation of the eyes and the position of the visual gaze point while the current viewer watches the panoramic 3D video on a smartphone, and pre-assembling and pre-rendering the 3D video picture with the prediction result;
Step 4, calculating the picture depth layer and the picture rendering area to be presented according to the viewing distance of the viewer and the attitude angle of the mobile phone, and assembling them into the final picture for presentation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110617804.6A CN113382227A (en) | 2021-06-03 | 2021-06-03 | Naked eye 3D panoramic video rendering device and method based on smart phone |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110617804.6A CN113382227A (en) | 2021-06-03 | 2021-06-03 | Naked eye 3D panoramic video rendering device and method based on smart phone |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113382227A true CN113382227A (en) | 2021-09-10 |
Family
ID=77575633
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110617804.6A Pending CN113382227A (en) | 2021-06-03 | 2021-06-03 | Naked eye 3D panoramic video rendering device and method based on smart phone |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113382227A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114327343A (en) * | 2021-12-31 | 2022-04-12 | 珠海豹趣科技有限公司 | Naked eye 3D effect display optimization method and device, electronic equipment and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020175994A1 (en) * | 2001-05-25 | 2002-11-28 | Kuniteru Sakakibara | Image pickup system |
US20120274745A1 (en) * | 2011-04-29 | 2012-11-01 | Austin Russell | Three-dimensional imager and projection device |
CN103369467A (en) * | 2012-04-09 | 2013-10-23 | 英特尔公司 | Signal transmission of three-dimensional video information in communication networks |
US20130314406A1 (en) * | 2012-05-23 | 2013-11-28 | National Taiwan University | Method for creating a naked-eye 3d effect |
CN109982064A (en) * | 2019-03-18 | 2019-07-05 | 深圳岚锋创视网络科技有限公司 | A kind of virtual visual point image generating method and portable terminal of naked eye 3D |
- 2021-06-03: application CN202110617804.6A filed in China; publication CN113382227A, status Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020175994A1 (en) * | 2001-05-25 | 2002-11-28 | Kuniteru Sakakibara | Image pickup system |
US20120274745A1 (en) * | 2011-04-29 | 2012-11-01 | Austin Russell | Three-dimensional imager and projection device |
CN103369467A (en) * | 2012-04-09 | 2013-10-23 | 英特尔公司 | Signal transmission of three-dimensional video information in communication networks |
US20130314406A1 (en) * | 2012-05-23 | 2013-11-28 | National Taiwan University | Method for creating a naked-eye 3d effect |
CN109982064A (en) * | 2019-03-18 | 2019-07-05 | 深圳岚锋创视网络科技有限公司 | A kind of virtual visual point image generating method and portable terminal of naked eye 3D |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114327343A (en) * | 2021-12-31 | 2022-04-12 | 珠海豹趣科技有限公司 | Naked eye 3D effect display optimization method and device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106101741B (en) | Method and system for watching panoramic video on network video live broadcast platform | |
US20150358539A1 (en) | Mobile Virtual Reality Camera, Method, And System | |
CN110798673B (en) | Free viewpoint video generation and interaction method based on deep convolutional neural network | |
US20110216160A1 (en) | System and method for creating pseudo holographic displays on viewer position aware devices | |
CN107341832B (en) | Multi-view switching shooting system and method based on infrared positioning system | |
US9961334B2 (en) | Simulated 3D image display method and display device | |
CN113574863A (en) | Method and system for rendering 3D image using depth information | |
CN110636276B (en) | Video shooting method and device, storage medium and electronic equipment | |
CN101631257A (en) | Method and device for realizing three-dimensional playing of two-dimensional video code stream | |
US20100302234A1 (en) | Method of establishing dof data of 3d image and system thereof | |
CN104599317A (en) | Mobile terminal and method for achieving 3D (three-dimensional) scanning modeling function | |
CN108616733B (en) | Panoramic video image splicing method and panoramic camera | |
CN107197135B (en) | Video generation method and video generation device | |
US20180249075A1 (en) | Display method and electronic device | |
KR101704362B1 (en) | System for real time making of panoramic video base on lookup table and Method for using the same | |
CN110870304B (en) | Method and apparatus for providing information to a user for viewing multi-view content | |
CN113382227A (en) | Naked eye 3D panoramic video rendering device and method based on smart phone | |
CN108564654B (en) | Picture entering mode of three-dimensional large scene | |
JP2018033107A (en) | Video distribution device and distribution method | |
US20190037200A1 (en) | Method and apparatus for processing video information | |
CN115861514A (en) | Rendering method, device and equipment of virtual panorama and storage medium | |
CN111629194B (en) | Method and system for converting panoramic video into 6DOF video based on neural network | |
CN115002442A (en) | Image display method and device, electronic equipment and storage medium | |
CN114793276A (en) | 3D panoramic shooting method for simulation reality meta-universe platform | |
CN114040184A (en) | Image display method, system, storage medium and computer program product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20210910 |