CN110913198A - VR image transmission method - Google Patents

VR image transmission method

Info

Publication number: CN110913198A
Application number: CN201811076200.XA
Authority: CN (China)
Prior art keywords: image, area, edge, setting, depth
Legal status: Granted; Active
Priority/filing date: 2018-09-14
Other languages: Chinese (zh)
Other versions: CN110913198B
Inventors: 孟宪民, 李小波, 赵德贤
Current Assignee: BEIJING HENGXIN CAIHONG INFORMATION TECHNOLOGY Co Ltd
Original Assignee: BEIJING HENGXIN CAIHONG INFORMATION TECHNOLOGY Co Ltd
Application filed by BEIJING HENGXIN CAIHONG INFORMATION TECHNOLOGY Co Ltd


Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The application provides a VR image transmission method comprising the following steps: after a VR computer sends a first plane image of a movie to a VR helmet, the VR computer obtains a first visual area that the wearer of the VR helmet can see on the first plane image; a second visual area containing the first visual area is set, the second position of the second visual area on the first plane image is recorded, and the area of the first plane image outside the second visual area is set as a background area; the edge of the second visual area is set as a first edge and the edge of a high-definition area as a second edge, with the image blurring gradually from the second edge to the first edge; the first plane image is compressed without changing its image size to obtain a first compressed image; the image in the second visual area is overlaid on the first compressed image at the second position, and the two are combined into a second composite image. The method better optimizes the picture of the film and improves the viewing experience of the user.

Description

VR image transmission method
Technical Field
The application relates to the field of image transmission, in particular to a VR image transmission method.
Background
In the prior art, a planar image is generally converted into a stereoscopic image so that a viewer can watch a three-dimensional film through a VR helmet. However, film images are usually panoramic, with resolutions up to 4K, and transmitting such large volumes of data is prone to stuttering. A common remedy is to let the viewer see the area being watched clearly while reducing the amount of image data transmitted: the image inside the viewer's visible area stays sharp while the image in the non-visible area is blurred. After such an image is transmitted to the VR helmet, however, an abrupt break may appear between the clear image and the blurred image produced by compression. The viewing experience therefore needs to be improved by optimizing the picture more reasonably.
Disclosure of Invention
In order to solve the above problem, the present application provides a VR image transmission method, including the steps of:
S1, after the VR computer sends a first plane image of the movie to the VR helmet, the VR computer obtains, according to the change of the VR helmet's position, a first visual area that the wearer of the VR helmet can see on the first plane image;
S2, recording a first position of the first visual area on the first plane image; setting a second visual area containing the first visual area, recording a second position of the second visual area on the first plane image, and setting the area of the first plane image outside the second visual area as a background area; setting the edge of the second visual area as a first edge and the edge of the high-definition area as a second edge, the image blurring gradually from the second edge to the first edge;
S3, compressing the first plane image without changing its image size to obtain a first compressed image; the first plane image has M pixels, and the first compressed image obtained after compression has M/Q pixels (Q > 1);
S4, overlaying the image in the second visual area on the first compressed image at the second position, combining to form a second composite image, and outputting the second composite image to the VR helmet.
Preferably, in step S1, when the area visible to the wearer of the VR headset on the first planar image is an irregular quadrilateral, the first visual area is set as a rectangular area with four sides parallel to four sides of the first planar image, wherein the first visual area includes the irregular quadrilateral area.
Preferably, step S2 further includes:
S21, setting a high-definition area inside the first visual area, the first visual area containing the high-definition area;
S22, setting the area between the first edge and the second edge as an edge area, and dividing the edge area from the first edge to the second edge into P edge transition areas; the pixels of the image inside the nth edge transition region are then:
pixel(n) = M/Q + ((2n − 1)/(2P)) · (M − M/Q),  n = 1, 2, …, P (counted from the first edge toward the second edge)
Preferably, the first visual area is rectangular, the high-definition area is elliptical, and the second visual area is elliptical; the four sides of the first visual area are tangent to the second edge, and the first edge passes through the four vertices of the first visual area.
The present application also provides another VR image transmission method, including:
S1, after the VR computer sends a first plane image of the movie to the VR helmet, the VR computer obtains, according to the change of the VR helmet's position, the position of a first visual area that the wearer of the VR helmet can see on the first plane image;
S2, recording a first position of the first visual area on the first plane image; shooting the first plane image with a depth-of-field setting to obtain a first depth-of-field image:
when the first depth-of-field image is shot, the specific settings are as follows: the first visual area is set at the first position of the first depth-of-field image, a depth-of-field area is set inside the first visual area, and a depth-of-field value m is obtained from the depth-of-field area;
a third visual area containing the first visual area is set, the position of the third visual area on the first depth-of-field image is set as a fifth position, the edge of the third visual area is set as a transition edge, and the edge of the depth-of-field area as a depth-of-field edge; when shooting is set, the blur of the image increases gradually from the depth-of-field edge to the transition edge, and the parameters and formulas to be used in shooting are set as follows:
the distance from the focal point of the lens to the near allowable circle of confusion is set as the front depth of field ΔL1, and the distance from the focal point to the far allowable circle of confusion as the rear depth of field ΔL2; the focal length of the lens is f, the shooting aperture value of the lens is F, the focusing distance is L, and the diameter of the allowable circle of confusion is δ;
then there is
ΔL1 = F·δ·L² / (f² + F·δ·L)
ΔL2 = F·δ·L² / (f² − F·δ·L)
ΔL = ΔL1 + ΔL2 = 2·f²·F·δ·L² / (f⁴ − F²·δ²·L²)
Shooting with this setting obtains the image in the third visual area on the first depth-of-field image;
S3, compressing the first plane image without changing its image size to obtain a first compressed image;
S4, overlaying the image in the third visual area on the first compressed image at the fifth position, combining to form a third composite image, and outputting the third composite image to the VR helmet.
Preferably, when the pixel value at the transition edge is J, the pixel value of the second compressed image is set to J, and when J is larger than the set threshold pixel value, parameters such as the focal length and aperture are adjusted to reduce the pixel value J to or below the threshold without changing the depth of field.
In this application, the image inside the viewer's visual area is kept at high pixel density while the image in the non-visible area is set to low pixel density, reducing the data volume of VR image transmission and keeping video playback smooth. At the same time, a gradual edge transition is used between the clear area and the blurred area of the image, which makes the overall display more comfortable to look at, avoids an abrupt break between the clear and blurred areas, makes the movie picture more reasonable, and improves the user's viewing experience.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments described in the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a VR headset corresponding to x, y, and z axes in a three-dimensional coordinate system in a virtual space.
Fig. 2 is a schematic diagram of x, y and z axes of a virtual camera in a three-dimensional coordinate system.
Fig. 3 is a schematic diagram of a viewing cone region that can be seen by a VR headset in virtual space.
Fig. 4 is a schematic diagram of UV texture coordinates on a planar image.
Fig. 5 is a schematic diagram of forming a second composite picture according to the second embodiment.
Fig. 6 is a schematic diagram of forming a third composite picture according to the third embodiment.
Detailed Description
The technical solutions in the embodiments of the present application are clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Example one
In order to solve the above problems, the present application provides a VR image transmission system including a VR computer, a VR helmet, and a positioning sensor. The VR computer is used for establishing a virtual space, and it stores a reference position of the VR helmet together with the reference position of the viewing cone area that a person wearing the helmet can observe in the virtual space when the helmet is at that reference position. The positioning sensor is fixed on the VR helmet and senses the helmet's spatial position; the VR computer is in communication connection with the positioning sensor, so that it can acquire changes in the helmet's position through the sensor, and it is also in communication connection with the VR helmet. The VR helmet has a field angle, which is the included angle formed by the two sight lines of the widest range a single eye can see through the helmet. The field angle comprises a horizontal field angle and a vertical field angle: the included angle formed by the two widest sight lines of a single eye in a plane parallel to the ground the person stands on is set as the horizontal field angle, and the included angle formed by the two widest sight lines of a single eye in a plane perpendicular to that ground is set as the vertical field angle. The VR computer stores a movie composed of a plurality of plane images, and the area that can be seen after the VR helmet rotates in the virtual space can be obtained from the helmet's position and field angle in the virtual space.
In this embodiment, the VR helmet can be a mobile VR box or an external helmet.
To allow the wearer to see a stereoscopic image, the VR computer sends images to the screens of the left and right eyes of the VR headset, respectively.
As shown in fig. 3, in the virtual space of the VR computer the VR helmet is represented by a virtual camera. In order for the wearer to view a three-dimensional stereoscopic image of virtual reality from the VR helmet, the movie image in the virtual space is set on a three-dimensional sphere with the virtual camera as the center of the sphere and radius R0; the field angle of the VR helmet is the same as the virtual camera's, i.e., the image seen by the wearer is equivalent to the image projected by the virtual camera onto the sphere.
Images of the movie are stored in the VR computer in the form of plane images, and when a three-dimensional sphere is constructed, the plane images are attached to the three-dimensional sphere by means of UV mapping, thereby forming a three-dimensional sphere image of the movie.
In the process of UV mapping, a grid is arranged on the plane image, dividing it into a plurality of squares. Each grid vertex has a U value in the horizontal direction and a V value in the vertical direction, and these (U, V) values serve as the plane image's texture coordinates. After the plane image is attached to the three-dimensional sphere, the (U, V) value of a vertex on the plane image corresponds to that vertex's three-dimensional coordinates on the sphere.
UV texture coordinates are shown in FIG. 4: the (U, V) value at the top left corner of the plane image is (0, 0) and at the lower right corner is (1, 1), with 0 ≤ U ≤ 1 and 0 ≤ V ≤ 1;
therefore, in the virtual space, a three-dimensional coordinate system is established with the virtual camera as an origin, and x, y and z axes of the three-dimensional coordinate system in the direction of the virtual camera and the direction of the real VR helmet are respectively shown in fig. 1 and 2; an area that a wearer sees when watching a movie while wearing the VR headset is referred to as a viewing cone area, and as can be seen from fig. 3 in this embodiment, the viewing cone area is an area formed by the virtual camera projecting to the surface of the sphere in the three-dimensional coordinate system, where a field angle of the virtual camera is a horizontal field angle FOV of the VR headset.
As shown in fig. 3, a reference coordinate system {A} of the three-dimensional sphere in the virtual space is set. In the reference coordinate system {A}, the viewing cone region intersects the sphere at four points V1, V2, V3 and V4; the distance between V2 and V3 is set as the viewing cone width d, and the distance between V1 and V2 as the viewing cone height h. The radius of the sphere is known to be R0, and the height-to-width ratio of the VR helmet's screen is f, i.e., h/d = f.
Then there is
d = 2·R0·sin(FOV/2)
h = d·f;
After the VR helmet rotates, the virtual camera also rotates in the three-dimensional coordinate system, forming a first coordinate system {B}. The posture of {B} relative to the reference coordinate system {A} is described as a rotation of {B} about {A}: starting with the two coordinate systems coincident, {B} is first rotated by R3° about the X axis of {A}, then by R2° about the Y axis of {A}, and finally by R1° about the Z axis of {A}, arriving at the current posture;
wherein R1°, R2°, R3° are the Euler angles of rotation of the first coordinate system about the X-Y-Z axes of the reference coordinate system, with R2° corresponding to pitch, R1° to yaw, and R3° to roll;
in the reference coordinate system {A}, R1° = 0, R2° = 0, R3° = 0;
the rotation of the first coordinate system relative to the reference coordinate system is:
R1°=α,R2°=β,R3°=γ;
the rotation matrix used to rotate from the reference coordinate system to the first coordinate system is:
Mxyz = Rz(α) · Ry(β) · Rx(γ), where Rx, Ry, Rz are the elementary rotation matrices about the X, Y and Z axes:
Rx(γ) = [[1, 0, 0], [0, cos γ, −sin γ], [0, sin γ, cos γ]]
Ry(β) = [[cos β, 0, sin β], [0, 1, 0], [−sin β, 0, cos β]]
Rz(α) = [[cos α, −sin α, 0], [sin α, cos α, 0], [0, 0, 1]]
Let Vector(xi, yi, zi) be the vector pointing from the origin O of the three-dimensional coordinate system to a point Vi(xi, yi, zi) of the viewing cone region on the sphere;
then Vi(xi, yi, zi) = Vector(xi, yi, zi) · Mxyz;
As shown in FIG. 3, it is known that in the reference coordinate system the four intersection points of the viewing cone region and the spherical surface are V1, V2, V3 and V4, whose coordinates are fixed by d, h and R0; for example, with the camera looking along the y axis,
V1 = (−d/2, y0, h/2), V2 = (−d/2, y0, −h/2), V3 = (d/2, y0, −h/2), V4 = (d/2, y0, h/2),
where y0 = √(R0² − (d/2)² − (h/2)²);
The points V1', V2', V3', V4' in the first coordinate system corresponding to V1, V2, V3, V4 in the reference coordinate system are then:
V1' = V1 · Mxyz
V2' = V2 · Mxyz
V3' = V3 · Mxyz
V4' = V4 · Mxyz
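For illustration, the computation so far can be collected into a short numpy sketch. This is a minimal sketch, not the patented implementation: the corner layout assumes the camera looks along the y axis (as in the example coordinates above), and Mxyz follows the Rz·Ry·Rx reconstruction given above.

```python
import numpy as np

def m_xyz(alpha, beta, gamma):
    """Rotation matrix Mxyz = Rz(alpha) @ Ry(beta) @ Rx(gamma), angles in radians."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    rx = np.array([[1, 0, 0], [0, cg, -sg], [0, sg, cg]])
    ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    rz = np.array([[ca, -sa, 0], [sa, ca, 0], [0, 0, 1]])
    return rz @ ry @ rx

def cone_corners(fov, r0, f):
    """V1..V4 on the sphere: width d = 2*r0*sin(fov/2), height h = d*f.
    Camera looking along +y is an assumption for illustration."""
    d = 2 * r0 * np.sin(fov / 2)
    h = d * f
    y0 = np.sqrt(r0**2 - (d / 2)**2 - (h / 2)**2)
    return np.array([[-d/2, y0,  h/2],
                     [-d/2, y0, -h/2],
                     [ d/2, y0, -h/2],
                     [ d/2, y0,  h/2]])

# Rotate the reference-frame corners into the first coordinate system {B}:
v = cone_corners(fov=np.deg2rad(90), r0=1.0, f=9/16)
v_rot = v @ m_xyz(np.deg2rad(30), np.deg2rad(10), 0.0).T   # rows are V1'..V4'
```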
According to the above calculation, the viewing cone area on the sphere in the virtual space is obtained. Since the image of the movie in the VR computer is stored in the form of a plane image, the corresponding position of the viewing cone area on the plane image also needs to be obtained;
A point Vi(xi, yi, zi) on the viewing cone region of the first coordinate system is converted into a (U, V) value on the plane. With Ri the radius of the circle traced on the sphere at the height of the point, there are:
Ri = √(xi² + yi²) = √(R0² − zi²)
R1° = arctan(yi / xi)
R2° = arccos(zi / R0)
U=R1°/2π
V=R2°/π。
According to the above formulas, a coordinate system obtained by changing the Euler angles relative to the reference coordinate system can be quickly converted into (U, V) values, for example:
when R1° = 0, x = 1, y = 0 (taking R0 = 1), corresponding to U = 0;
when R2° = 0, z = 1, corresponding to V = 0;
and when R2° = 180°, z = −1, corresponding to V = 1;
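Under the reconstruction of the three formulas above (R1° and R2° recovered by arctan and arccos), the sphere-to-UV conversion is only a few lines; the boundary cases just listed serve as a check. A sketch, with the formula choices being an assumption consistent with those cases:

```python
import numpy as np

def sphere_point_to_uv(x, y, z, r0=1.0):
    """Map a point on the sphere of radius r0 to texture coordinates (U, V)."""
    r1 = np.arctan2(y, x) % (2 * np.pi)   # horizontal angle R1 in [0, 2*pi)
    r2 = np.arccos(z / r0)                # vertical angle R2 in [0, pi]
    return r1 / (2 * np.pi), r2 / np.pi   # U = R1/2pi, V = R2/pi

print(sphere_point_to_uv(1, 0, 0))    # R1 = 0  -> U = 0 (V = 0.5 on the equator)
print(sphere_point_to_uv(0, 0, 1))    # z = 1   -> V = 0
print(sphere_point_to_uv(0, 0, -1))   # z = -1  -> V = 1
```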
Through the above calculation, the positions on the first plane image of the four intersection points V1', V2', V3', V4' of the viewing cone region with the sphere surface are obtained as the points UV1, UV2, UV3, UV4. These four points determine the range of the viewing cone region on the first plane image; that is, the viewing cone region corresponds to the quadrilateral UV1 UV2 UV3 UV4. The viewing cone area is then enlarged to a rectangular area whose four sides are parallel to the sides of the film's plane image, and this rectangular area is set as the first visual area; the four sides of the first visual area pass through UV1, UV2, UV3 and UV4 respectively.
Recording a first position of the first visual area on the first plane image;
compressing the first plane image under the condition that the image size of the first plane image is not changed to obtain a first compressed image;
overlaying the image in the first visible area on the first compressed image at the first location, combining to form a first composite image, outputting the first composite image to the VR headset.
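The first embodiment's record-compress-overlay flow can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: get_first_visual_area is a hypothetical helper standing in for the viewing cone computation above, and "compression without changing the image size" is modeled as downscale-then-upscale so the effective pixel count drops to roughly M/Q.

```python
from PIL import Image

def compose_first_image(frame: Image.Image, visual_box, Q: float = 4.0) -> Image.Image:
    """visual_box = (left, top, right, bottom): first visual area at its first position."""
    # Compress the whole plane image while keeping its size: M pixels -> ~M/Q
    s = Q ** 0.5
    small = frame.resize((max(1, int(frame.width / s)), max(1, int(frame.height / s))))
    compressed = small.resize(frame.size)
    # Overlay the uncompressed first visual area at the first position
    region = frame.crop(visual_box)
    compressed.paste(region, visual_box[:2])
    return compressed   # first composite image, sent to the VR helmet

# box = get_first_visual_area(frame, helmet_pose)   # hypothetical helper
# composite = compose_first_image(frame, box)
```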
Example two
Since the image formed in the first embodiment may produce an abrupt break between the sharp image in the first visual area and the blurred image generated by compression, the picture can be optimized more reasonably as follows:
as shown in fig. 5, a second visual area 3 including a first visual area 2 is set, a second position of the second visual area 3 on the first plane image is recorded, and an area other than the second visual area 3 in the first plane image 1 is set as a background area 6;
compressing the first plane image 1 without changing its image size to obtain a first compressed image; the first plane image has M pixels, and the first compressed image obtained after compression has M/Q pixels, where Q is a real number greater than 1.
A high-definition area 4 is set inside the first visual area 2 and located at the center of the second visual area 3; the first visual area 2 contains the high-definition area 4, and the position of the high-definition area on the first plane image is set as a third position. In this embodiment, the first visual area 2 is rectangular while the high-definition area 4 and the second visual area 3 are elliptical. The edge of the second visual area 3 is set as the first edge and the edge of the high-definition area 4 as the second edge; the four sides of the first visual area 2 are tangent to the second edge, and the four vertices of the first visual area 2 lie on the first edge.
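The two ellipses of this embodiment follow directly from the rectangle: the second edge is the ellipse inscribed in (tangent to the sides of) the first visual area, and the first edge is the concentric ellipse through its vertices. A sketch of the geometry, assuming both ellipses share the rectangle's center and aspect ratio:

```python
import math

def edge_ellipses(w: float, h: float):
    """Semi-axes of the second edge (inscribed ellipse) and the first edge
    (ellipse through the rectangle's vertices) for a w x h first visual area."""
    second_edge = (w / 2, h / 2)                       # tangent to the four sides
    first_edge = (w / math.sqrt(2), h / math.sqrt(2))  # passes through the vertices
    return second_edge, first_edge

inner, outer = edge_ellipses(1600, 900)
# inner = (800.0, 450.0); outer ≈ (1131.37, 636.40)
# Check a vertex (800, 450): (800/outer[0])**2 + (450/outer[1])**2 == 1
```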
Setting an area between the first edge and the second edge as an edge area, setting the position of the edge area on the first plane image as a fourth position, and equally dividing the edge area into P edge transition areas from the first edge to the second edge; then the pixels of the image inside the nth edge transition region are:
pixel(n) = M/Q + ((2n − 1)/(2P)) · (M − M/Q),  n = 1, 2, …, P (counted from the first edge toward the second edge)
In this embodiment, as can be seen from fig. 5, the edge area from the first edge to the second edge is divided into 3 equal parts. Assuming the original first plane image is 4K and the first compressed image is 1K, the above formula gives the pixels of the image inside the 2nd edge transition area (the shaded portion) as 2.5K;
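A sketch of the pixel gradient, using the reconstructed formula above (midpoint linear interpolation between the compressed level M/Q and the original level M, regions counted from the first edge); the 4K/1K worked example reproduces the 2.5K value:

```python
def transition_pixels(M: float, Q: float, P: int, n: int) -> float:
    """Pixel level inside the nth edge transition region (n = 1 at the first edge)."""
    low = M / Q                                    # compressed background level
    return low + (2 * n - 1) / (2 * P) * (M - low)

print(transition_pixels(M=4.0, Q=4.0, P=3, n=2))   # 2.5 (i.e. 2.5K, as in the text)
```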
The uncompressed image of the high-definition area is overlaid on the first compressed image at the third position, and the proportionally compressed images of the edge area are overlaid at the fourth position; the combination forms a second composite image, which is output to the VR helmet and displayed before the wearer's eyes.
Example three
Building on the second embodiment, a module with a virtual shooting function can be added to the VR computer so that the VR computer can shoot the images in the film; with this setting, another way of rendering the background is obtained:
Following the second embodiment, in order to make the picture finally seen by the wearer more reasonable, the image inside the first visual area of the final composite image should be sharp while the image outside it blurs gradually. This effect can be achieved by shooting the first plane image with a depth-of-field setting. When an image with depth of field is shot, an allowable circle of confusion is set in front of and behind the focal point of the lens; the distance between the two circles of confusion is the depth of field, set to m. The distance from the focal point of the lens to the near allowable circle of confusion is set as the front depth of field ΔL1, and the distance from the focal point to the far allowable circle of confusion as the rear depth of field ΔL2; the focal length of the lens is f, the shooting aperture value of the lens is F, the focusing distance is L, and the diameter of the allowable circle of confusion is δ;
then there is
ΔL1 = F·δ·L² / (f² + F·δ·L)
ΔL2 = F·δ·L² / (f² − F·δ·L)
ΔL = ΔL1 + ΔL2 = 2·f²·F·δ·L² / (f⁴ − F²·δ²·L²)
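A small calculator for the three depth-of-field formulas; a sketch only, and the sample values (50 mm lens at F/4, 2 m focusing distance, δ = 0.03 mm) are illustrative assumptions, not values from the patent:

```python
def depth_of_field(f: float, F: float, L: float, delta: float):
    """Return (front depth dL1, rear depth dL2, total depth of field dL).
    f: focal length, F: aperture value, L: focusing distance,
    delta: allowable circle-of-confusion diameter -- all lengths in mm."""
    dl1 = (F * delta * L**2) / (f**2 + F * delta * L)
    dl2 = (F * delta * L**2) / (f**2 - F * delta * L)
    return dl1, dl2, dl1 + dl2

dl1, dl2, m = depth_of_field(f=50.0, F=4.0, L=2000.0, delta=0.03)
# dl1 ≈ 175.2 mm, dl2 ≈ 212.4 mm, m ≈ 387.6 mm
```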
As shown in fig. 6, in this embodiment the elliptical region set inside the first visual area 2 and tangent to its four sides is used as the depth-of-field region selected during shooting, and its edge is set as the depth-of-field edge 12; the depth-of-field value at this time is set to m. The elliptical region passing through the four vertices of the first visual area 2 is set as the third visual area, whose position on the first plane image is the fifth position and whose edge is set as the transition edge 11. When the image with depth of field is shot, the depth of field is set to m, and from the depth-of-field edge 12 to the transition edge 11 the blur is set to increase gradually. In this way an image is obtained that is sharp inside the depth-of-field region and whose blur increases gradually from the depth-of-field edge 12 to the transition edge 11.
In this embodiment, when the pixel of the transition edge 11 is J, the pixel of the second compressed image is set to be J;
The image in the third visual area is overlaid on the first compressed image at the fifth position to form a third composite image, which is then output.
When J is larger than the set threshold pixel value, parameters such as the focal length and aperture can be adjusted to reduce the pixel value while keeping the depth of field unchanged.
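One way to realize this adjustment is a search over shooting parameters that hold the depth of field at m while lowering the pixel value at the transition edge. A sketch only: render_edge_pixels is a hypothetical callback standing in for re-shooting and measuring J, and depth_of_field is the helper from the previous sketch.

```python
def adjust_below_threshold(m_target, J_threshold, candidates, render_edge_pixels,
                           rel_tol=1e-3):
    """Return the first (f, F, L, delta) whose depth of field stays at m_target
    and whose transition-edge pixel value J drops to or below the threshold."""
    for f, F, L, delta in candidates:
        _, _, dof = depth_of_field(f, F, L, delta)
        if abs(dof - m_target) > rel_tol * m_target:
            continue                     # depth of field would change: skip
        if render_edge_pixels(f, F, L, delta) <= J_threshold:
            return f, F, L, delta
    return None                          # no admissible parameter set found
```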
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application. It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (6)

1. A VR image transmission method comprises the following steps:
S1, after the VR computer sends a first plane image of the movie to the VR helmet, the VR computer obtains, according to the change of the VR helmet's position, a first visual area that the wearer of the VR helmet can see on the first plane image;
S2, recording a first position of the first visual area on the first plane image; setting a second visual area containing the first visual area, recording a second position of the second visual area on the first plane image, and setting the area of the first plane image outside the second visual area as a background area; setting the edge of the second visual area as a first edge and the edge of the high-definition area as a second edge, the image blurring gradually from the second edge to the first edge;
S3, compressing the first plane image without changing its image size to obtain a first compressed image; the first plane image has M pixels, and the first compressed image obtained after compression has M/Q pixels (Q > 1);
S4, overlaying the image in the second visual area on the first compressed image at the second position, combining to form a second composite image, and outputting the second composite image to the VR helmet.
2. The VR image transmission method as claimed in claim 1, wherein, in step S1, when the VR computer obtains that an area that can be seen by the wearer of the VR headset on the first planar image is an irregular quadrilateral, the first visual area is set as a rectangular area having four sides parallel to four sides of the first planar image, respectively, wherein the first visual area includes the irregular quadrilateral area.
3. The VR image transmission method of claim 1, comprising:
step S2 further includes:
S21, setting a high-definition area inside the first visual area, the first visual area containing the high-definition area;
S22, setting the area between the first edge and the second edge as an edge area, and dividing the edge area from the first edge to the second edge into P edge transition areas; the pixels of the image inside the nth edge transition region are then:
pixel(n) = M/Q + ((2n − 1)/(2P)) · (M − M/Q),  n = 1, 2, …, P (counted from the first edge toward the second edge)
4. The VR image transmission method of claim 3, wherein the first visual area is rectangular, the high-definition area is elliptical, and the second visual area is elliptical; the four sides of the first visual area are tangent to the second edge, and the first edge passes through the four vertices of the first visual area.
5. A VR image transmission method, comprising:
S1, after the VR computer sends a first plane image of the movie to the VR helmet, the VR computer obtains, according to the change of the VR helmet's position, the position of a first visual area that the wearer of the VR helmet can see on the first plane image;
S2, recording a first position of the first visual area on the first plane image; shooting the first plane image with a depth-of-field setting to obtain a first depth-of-field image:
when the first depth-of-field image is shot, the specific settings are as follows: the first visual area is set at the first position of the first depth-of-field image, a depth-of-field area is set inside the first visual area, and a depth-of-field value m is obtained from the depth-of-field area;
a third visual area containing the first visual area is set, the position of the third visual area on the first depth-of-field image is set as a fifth position, the edge of the third visual area is set as a transition edge, and the edge of the depth-of-field area as a depth-of-field edge; when shooting is set, the blur of the image increases gradually from the depth-of-field edge to the transition edge, and the parameters and formulas to be used in shooting are set as follows:
the distance from the focal point of the lens to the near allowable circle of confusion is set as the front depth of field ΔL1, and the distance from the focal point to the far allowable circle of confusion as the rear depth of field ΔL2; the focal length of the lens is f, the shooting aperture value of the lens is F, the focusing distance is L, and the diameter of the allowable circle of confusion is δ;
then there is
ΔL1 = F·δ·L² / (f² + F·δ·L)
ΔL2 = F·δ·L² / (f² − F·δ·L)
ΔL = ΔL1 + ΔL2 = 2·f²·F·δ·L² / (f⁴ − F²·δ²·L²)
Shooting with this setting obtains the image in the third visual area on the first depth-of-field image;
S3, compressing the first plane image without changing its image size to obtain a first compressed image;
S4, overlaying the image in the third visual area on the first compressed image at the fifth position, combining to form a third composite image, and outputting the third composite image to the VR helmet.
6. The VR image transmission method as claimed in claim 5, wherein when the pixel value at the transition edge is J, the pixel value of the second compressed image is set to J, and when J is larger than the set threshold pixel value, parameters such as the focal length and aperture are adjusted to reduce the pixel value J to or below the threshold without changing the depth of field.
CN201811076200.XA 2018-09-14 2018-09-14 VR image transmission method Active CN110913198B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811076200.XA CN110913198B (en) 2018-09-14 2018-09-14 VR image transmission method


Publications (2)

Publication Number Publication Date
CN110913198A true CN110913198A (en) 2020-03-24
CN110913198B CN110913198B (en) 2021-04-27

Family

ID=69812227

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811076200.XA Active CN110913198B (en) 2018-09-14 2018-09-14 VR image transmission method

Country Status (1)

Country Link
CN (1) CN110913198B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150234189A1 (en) * 2014-02-18 2015-08-20 Merge Labs, Inc. Soft head mounted display goggles for use with mobile computing devices
CN108063946A (en) * 2017-11-16 2018-05-22 腾讯科技(成都)有限公司 Method for encoding images and device, storage medium and electronic device
CN108322727A (en) * 2018-02-28 2018-07-24 北京搜狐新媒体信息技术有限公司 A kind of panoramic video transmission method and device
CN108391133A (en) * 2018-03-01 2018-08-10 京东方科技集团股份有限公司 Processing method, processing equipment and the display equipment of display data

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112015273A (en) * 2020-08-26 2020-12-01 京东方科技集团股份有限公司 Data transmission method of virtual reality system and related device
WO2022042039A1 (en) * 2020-08-26 2022-03-03 京东方科技集团股份有限公司 Data transmission method for virtual reality system and related apparatus
CN112015273B (en) * 2020-08-26 2024-05-24 京东方科技集团股份有限公司 Data transmission method and related device of virtual reality system

Also Published As

Publication number Publication date
CN110913198B (en) 2021-04-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: 100007 204, 2nd floor, building 3, No.2, zanjingguan Hutong, Dongcheng District, Beijing
Applicant after: Oriental Dream Virtual Reality Technology Co., Ltd
Address before: 100097 Beijing city Haidian District landianchang Road No. 25 11-20
Applicant before: BEIJING HENGXIN RAINBOW INFORMATION TECHNOLOGY Co.,Ltd.
GR01 Patent grant