WO2018126922A1 - Panoramic video rendering method, apparatus and electronic device - Google Patents

Panoramic video rendering method, apparatus and electronic device

Info

Publication number
WO2018126922A1
WO2018126922A1 · PCT/CN2017/118256 · CN2017118256W
Authority
WO
WIPO (PCT)
Prior art keywords
angle
vertex
sphere model
sphere
display
Prior art date
Application number
PCT/CN2017/118256
Other languages
English (en)
French (fr)
Inventor
杨金锋
郭万永
Original Assignee
Alibaba Group Holding Limited (阿里巴巴集团控股有限公司)
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Limited (阿里巴巴集团控股有限公司)
Publication of WO2018126922A1 publication Critical patent/WO2018126922A1/zh


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44012 Processing of video elementary streams involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/08 Projecting images onto non-planar surfaces, e.g. geodetic screens

Definitions

  • the present application relates to panoramic video technology, and in particular to a panoramic video rendering method.
  • the application also relates to a panoramic video rendering device, and an electronic device.
  • with panoramic video technology, users can watch video in any direction within a full 360 degrees: up, down, left, and right. For example, a user wearing a display helmet can, by rotating the head, see the panoramic video image in different directions, giving the user a true sense of immersion.
  • the overall solution of panoramic video technology usually includes two stages of production and playback of panoramic video.
  • the panoramic video can be produced by the panoramic shooting device and the image synthesis software.
  • a special panoramic video player is usually required for playback. The panoramic video player reads a panoramic video image from the panoramic video and then renders it for display by attaching the read image to the surface of a sphere model.
  • the viewing position can usually be set at the center of the sphere model.
  • the line-of-sight direction can be controlled by rotating a display helmet used for viewing the panoramic video, or by moving a terminal device used for viewing it; the panoramic video player controls which content of the panoramic video image is displayed to the user according to the line-of-sight direction, thereby providing the user with an immersive viewing experience.
  • the present application provides a panoramic video rendering method to solve the problem that existing panoramic video rendering technology deforms the panoramic video image due to inconsistent display ratios.
  • the embodiment of the present application further provides a panoramic video rendering device, and an electronic device.
  • the application provides a panoramic video rendering method, including:
  • determining, according to the sphere model data and in a manner uniformly distributed at least in a first direction, the position in a preset display plane corresponding to each sphere model vertex within a display range, wherein the display range is determined according to the current viewpoint data and the sphere model data;
  • for each sphere model vertex within the display range, rendering the position in the display plane corresponding to the vertex with the corresponding pixel value in the panoramic video image, wherein the corresponding pixel value is obtained according to the texture coordinates corresponding to the vertex.
  • the determining, according to the sphere model data, of the position in the preset display plane corresponding to each sphere model vertex in the display range includes:
  • determining the position in the preset display plane corresponding to each sphere model vertex in the display range in a manner uniformly distributed in the first direction and the second direction, respectively.
  • the determining, according to the sphere model data, of the position in the preset display plane corresponding to each sphere model vertex in the display range includes:
  • the manner of uniformly distributing at least in the first direction includes: a manner of uniformly distributing in the first direction and the second direction, respectively;
  • the method includes:
  • the adjusting, in a manner uniformly distributed in the first direction according to the second angle and the angle of view corresponding to the first direction, of the position in the display plane corresponding to the sphere model vertex from the first position to the second position includes:
  • the position of the spherical model vertex corresponding to the display plane is horizontally adjusted from the first position to the second position in a direction away from the center of the display plane.
  • the adjusting, in a manner uniformly distributed in the first direction according to the second angle and the angle of view corresponding to the first direction, of the position in the display plane corresponding to the sphere model vertex from the first position to the second position includes:
  • the position of the spherical model vertex corresponding to the display plane is vertically adjusted from the first position to the second position in a direction away from the center of the display plane.
  • the adjusting, in a manner uniformly distributed in the second direction according to the third angle and the angle of view corresponding to the second direction, of the position in the display plane corresponding to the sphere model vertex from the second position to the third position includes:
  • the position of the spherical model vertex corresponding to the display plane is vertically adjusted from the second position to the third position in a direction away from the center of the display plane.
  • the adjusting, in a manner uniformly distributed in the second direction according to the third angle and the angle of view corresponding to the second direction, of the position in the display plane corresponding to the sphere model vertex from the second position to the third position includes:
  • the position of the spherical model vertex corresponding to the display plane is horizontally adjusted from the second position to the third position in a direction away from the center of the display plane.
  • the method further includes determining the first direction as follows: if the horizontal field of view angle is not less than the vertical field of view angle, the first direction is the horizontal direction; otherwise, the first direction is the vertical direction.
  • the sphere model data includes: coordinates of vertices of each sphere model.
  • the step of determining, according to the sphere model data, the position in the preset display plane corresponding to each sphere model vertex in the display range, and the step of rendering, for each sphere model vertex in the display range, the position in the display plane corresponding to the vertex with the corresponding pixel value in the panoramic video image, are executed by a GPU.
  • the method is implemented in a panoramic video playing device, including: a personal computer, a tablet computer, a smart phone, or a display helmet.
  • the present application further provides a panoramic video rendering device, including:
  • a model data and texture coordinate acquiring unit configured to acquire sphere model data for displaying the panoramic video and texture coordinates corresponding to the sphere model vertices;
  • an image and viewpoint data acquiring unit configured to acquire a panoramic video image to be displayed and current viewpoint data;
  • a position determining unit configured to determine, according to the sphere model data and in a manner uniformly distributed at least in the first direction, the position in a preset display plane corresponding to each sphere model vertex in the display range, wherein the display range is determined according to the current viewpoint data and the sphere model data;
  • a rendering unit configured to render, for each sphere model vertex in the display range, the position in the display plane corresponding to the vertex with the corresponding pixel value in the panoramic video image, wherein the corresponding pixel value is obtained according to the texture coordinates corresponding to the vertex.
  • the position determining unit is configured to determine, according to the sphere model data, the position in the preset display plane corresponding to each sphere model vertex in the display range in a manner uniformly distributed in the first direction and the second direction, respectively.
  • the location determining unit includes:
  • a loop control subunit that triggers the following subunit work for each sphere model vertex that is within the display range:
  • a first angle determining subunit configured to determine a first angle between a line connecting the vertices of the sphere model to the center of the sphere and the current line of sight;
  • a first position determining subunit configured to determine, according to the first angle and based on the central projection method, the first position of the sphere model vertex at its central projection point on the display plane;
  • a second angle determining subunit configured to determine a second angle between the line connecting the vertices of the sphere model and the center of the sphere and the current line of sight in the first direction;
  • a first position adjusting subunit configured to adjust, according to the second angle and the angle of view corresponding to the first direction and in a manner uniformly distributed in the first direction, the position in the display plane corresponding to the sphere model vertex from the first position to the second position.
  • the manner of uniformly distributing at least in the first direction includes: a manner of uniformly distributing in the first direction and the second direction, respectively;
  • the location determining unit further includes:
  • a third angle determining subunit configured to determine, after the first position adjusting subunit works, a third angle between the line connecting the vertices of the sphere model to the center of the sphere and the current line of sight in the second direction;
  • a second position adjusting subunit configured to adjust, according to the third angle and the angle of view corresponding to the second direction and in a manner uniformly distributed in the second direction, the position in the display plane corresponding to the sphere model vertex from the second position to the third position.
  • the device further includes:
  • a first direction determining unit configured to determine the first direction before the position determining unit works: if the horizontal field of view angle is not less than the vertical field of view angle, the first direction is the horizontal direction; otherwise, the first direction is the vertical direction.
  • the application further provides an electronic device, including:
  • a memory for storing instructions
  • the processor is coupled to the memory and reads the instructions stored in the memory to perform the following operations: acquiring sphere model data for displaying a panoramic video and texture coordinates corresponding to each sphere model vertex; acquiring a panoramic video image to be displayed and current viewpoint data; determining, according to the sphere model data and in a manner uniformly distributed at least in a first direction, the position in a preset display plane corresponding to each sphere model vertex within a display range, wherein the display range is determined according to the current viewpoint data and the sphere model data; and, for each sphere model vertex within the display range, rendering the position in the display plane corresponding to the vertex with the corresponding pixel value in the panoramic video image, wherein the corresponding pixel value is obtained according to the texture coordinates corresponding to the vertex.
  • the determining, according to the sphere model data, of the position in the preset display plane corresponding to each sphere model vertex in the display range includes:
  • determining the position in the preset display plane corresponding to each sphere model vertex in the display range in a manner uniformly distributed in the first direction and the second direction, respectively.
  • the determining, according to the sphere model data, of the position in the preset display plane corresponding to each sphere model vertex in the display range includes:
  • the manner of uniformly distributing at least in the first direction includes: a manner of uniformly distributing in the first direction and the second direction, respectively;
  • the method includes:
  • the method further includes determining the first direction as follows: if the horizontal field of view angle is not less than the vertical field of view angle, the first direction is the horizontal direction; otherwise, the first direction is the vertical direction.
  • the panoramic video rendering method provided by the present application first acquires the sphere model data for displaying the panoramic video and the texture coordinates corresponding to the vertices, as well as the panoramic video image to be displayed and the current viewpoint data; then, according to the acquired information, it determines, in a manner uniformly distributed at least in the first direction, the position in the preset display plane corresponding to each sphere model vertex within the display range, and renders that position.
  • in this way, the positions in the preset display plane corresponding to the sphere model vertices in the display range are uniformly distributed at least in the first direction, that is, the display ratios of the scenes are consistent at least in the first direction.
  • the problem of scene distortion caused by inconsistent display ratios of the panoramic video image can thus be effectively mitigated or solved, so that the display effect of the panoramic video better matches the viewing habits of the human eye and gives the user a more realistic viewing experience.
  • FIG. 1 is a schematic diagram showing an inconsistent display ratio caused by the existing panoramic video rendering technology
  • Figure 2 is a schematic diagram of a horizontal field of view and a vertical field of view
  • FIG. 3 is a flow chart of an embodiment of a panoramic video rendering method of the present application.
  • FIG. 4 is a flowchart of a process for determining the position in the display plane corresponding to a sphere model vertex according to an embodiment of the present application;
  • FIG. 5 is a schematic diagram of a sphere model and a display plane provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of uniformly distributing points on the equatorial circumference of a sphere model to a display plane according to an embodiment of the present application;
  • FIG. 7 is a schematic structural diagram of a panoramic video display using a GPU according to an embodiment of the present application.
  • FIG. 8 is a schematic comparison of vertex distributions observed through a Wulff net diagram provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of comparison of display effects of panoramic video images provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of an embodiment of a panoramic video rendering device of the present application.
  • FIG. 11 is a schematic diagram of an embodiment of an electronic device of the present application.
  • the existing panoramic video rendering technology often causes the display ratios of the different display area scenes to be inconsistent in both the horizontal and vertical directions.
  • the present application provides a panoramic video rendering method different from the prior art.
  • the method can be applied to a panoramic video playing device, including a personal computer, a tablet computer, a smart phone, or a display helmet.
  • the technical solution provided by the present application determines, when rendering the panoramic video image, the position in the preset display plane corresponding to each sphere model vertex in the display range in a manner uniformly distributed at least in the first direction.
  • this ensures that the positions in the preset display plane corresponding to the sphere model vertices in the display range are uniformly distributed at least in the first direction, that is, the consistency of the display ratio is guaranteed at least in the first direction, which effectively mitigates the problem of inconsistent display ratios of the panoramic video image and gives users a more realistic viewing experience.
  • the first direction includes a horizontal direction or a vertical direction.
  • the preset angle of view includes an angle of view corresponding to the horizontal direction (referred to as the horizontal field of view angle) and an angle of view corresponding to the vertical direction (referred to as the vertical field of view angle); see FIG. 2, where ∠AOB is the horizontal field of view angle and ∠BOC is the vertical field of view angle.
  • the first direction can be determined as follows: the direction corresponding to the larger of the horizontal field of view angle and the vertical field of view angle is taken as the first direction; that is, if the horizontal field of view angle is not less than the vertical field of view angle, the first direction is the horizontal direction, otherwise the first direction is the vertical direction.
  • the horizontal direction can be selected as the first direction.
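The selection rule above can be sketched in a few lines; this is a minimal illustration (the function name is mine, not from the patent):

```python
def choose_first_direction(fov_h, fov_v):
    """Pick the direction with the larger field of view as the first
    direction; ties go to horizontal, matching the rule that the first
    direction is horizontal when fov_h is not less than fov_v."""
    return "horizontal" if fov_h >= fov_v else "vertical"
```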
  • a preferred implementation in which the positions are uniformly distributed in the first direction and the second direction, respectively, may be employed: the position in the preset display plane corresponding to each sphere model vertex in the display range is determined in this manner before rendering.
  • the first direction and the second direction may be a horizontal direction and a vertical direction, respectively.
  • the technical solution can correct the display scale in only one direction, or in both directions.
  • the position in the preset display plane of each sphere model vertex in the display range may be adjusted first in the horizontal direction and then in the vertical direction, or in the reverse order, that is, first in the vertical direction and then in the horizontal direction.
  • FIG. 3 is a flowchart of an embodiment of a panoramic video rendering method of the present application. The method includes the following steps:
  • Step 301 Acquire spherical model data for displaying a panoramic video and texture coordinates corresponding to a vertex of the sphere model.
  • the rendering process of the panoramic video usually needs to render the panoramic video image in the display range by attaching the panoramic video image to the surface of the sphere model. Therefore, for the panoramic video to be displayed, the corresponding sphere model can be preset, and the sphere model data and the texture coordinates corresponding to the vertices of the sphere model are determined. This step acquires the above information about the sphere model.
  • the sphere model data includes the coordinates of the points on the surface of the sphere model; each such point is commonly referred to as a sphere model vertex, or simply a vertex in the following description.
  • the coordinates of the vertices of each sphere model can be expressed in the form of (x, y, z).
  • the texture coordinates corresponding to a sphere model vertex refer to the coordinates in the panoramic video image that correspond to that vertex, which can usually be expressed in the form (x, y).
  • a panoramic video image is typically a two-dimensional image in which positional information for each pixel in the image can be represented by its coordinates. Since the panoramic video image is rendered and displayed in a manner attached to the surface of the sphere model, each sphere model vertex can correspond to a pixel point in the panoramic video image, and the coordinates of the pixel point are generally referred to as corresponding sphere model vertex correspondence. Texture coordinates.
  • the sphere model data and the texture coordinates corresponding to the vertex of each sphere model can be obtained.
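The pairing of sphere model vertices with texture coordinates can be sketched as follows, assuming an equirectangular panoramic image and a latitude/longitude tessellation of the sphere; the tessellation density and the mapping convention are illustrative assumptions, not details fixed by the patent:

```python
import math

def build_sphere(stacks=8, slices=16, radius=1.0):
    """Generate sphere model vertices and, for each vertex, the texture
    coordinate (u, v) it maps to in an equirectangular panoramic image."""
    vertices, tex_coords = [], []
    for i in range(stacks + 1):
        theta = math.pi * i / stacks          # polar angle, 0..pi
        for j in range(slices + 1):
            phi = 2.0 * math.pi * j / slices  # azimuth, 0..2*pi
            x = radius * math.sin(theta) * math.cos(phi)
            y = radius * math.cos(theta)
            z = radius * math.sin(theta) * math.sin(phi)
            vertices.append((x, y, z))
            # equirectangular mapping: azimuth -> u, polar angle -> v
            tex_coords.append((j / slices, i / stacks))
    return vertices, tex_coords

verts, uvs = build_sphere()
```

Each vertex thus carries both a position (x, y, z) on the sphere surface and a texture coordinate (u, v) into the panoramic video image, which is the information acquired in this step.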
  • Step 302 Acquire a panoramic video image to be displayed and current view data.
  • the panoramic video is usually composed of a series of panoramic video images.
  • the process of displaying the panoramic video is a process of sequentially rendering the panoramic video image in the display range to a preset display plane and displaying it on the display device. Therefore, this step acquires a frame of panoramic video image to be displayed.
  • this step also acquires the current viewpoint data, that is, acquires data indicating the current line of sight direction of the user.
  • the sphere model uses a spatial rectangular coordinate system, and the viewing position is usually set at the center of the sphere; therefore, the line-of-sight direction can be represented by its angles with any two axes of the spatial rectangular coordinate system.
  • Step 303 Determine, according to the spherical model data, a position of each sphere model vertex in the display range corresponding to a position in the preset display plane according to the manner that the first direction and the second direction are respectively uniformly distributed.
  • This step adopts a manner of uniformly distributing in the first direction and the second direction respectively, and determines, according to the sphere model data, that each of the sphere model vertices in the display range corresponds to a position in the preset display plane.
  • the first direction and the second direction are a horizontal direction and a vertical direction, respectively.
  • this step may determine the display range on the surface of the sphere model, that is, the range that the user's line of sight can cover, according to the current viewpoint data, the radius of the sphere model (which may be determined from the sphere model data), and the preset horizontal and vertical field of view angles; then, in a manner uniformly distributed in the horizontal direction and the vertical direction, the position in the preset display plane corresponding to each sphere model vertex in the display range is determined, thereby preparing for proportionally consistent rendering of the panoramic video image.
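One plausible reading of the display-range test, stated here as an assumption since the patent leaves the exact test unspecified, is that a vertex lies within the display range when its angle from the current line of sight, measured separately in the horizontal and vertical directions, does not exceed half the corresponding field of view:

```python
def in_display_range(angle_h, angle_v, fov_h, fov_v):
    """Hypothetical visibility test: the vertex's angle from the current
    line of sight in each direction must not exceed half the
    corresponding field of view angle (all angles in radians)."""
    return angle_h <= fov_h / 2.0 and angle_v <= fov_v / 2.0
```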
  • the process of determining the position in the display plane corresponding to a certain sphere model vertex in the display range may include the following steps 303-1 to 303-6, which are described below in conjunction with FIG. 4.
  • referring to FIG. 5, O is the center of the sphere model, point A is a sphere model vertex, point V is the intersection of the current line-of-sight direction corresponding to the current viewpoint data with the surface of the sphere model, the display plane is tangent to the sphere model, and OV is perpendicular to the display plane.
  • Step 303-1 Determine a first angle between a line connecting the vertices of the sphere model to the center of the sphere and the current line of sight.
  • this step determines the size of the first angle between the line from sphere model vertex A (hereinafter referred to as vertex A or point A) to the sphere center O and the line OV, that is, the size of ∠AOV.
  • the first angle is denoted as angle.
  • based on the theorems that an inscribed angle is equal to half of the central angle subtending the same arc and that an inscribed angle subtending a diameter is a right angle, the first angle can be determined by the following derivation (formula 1): angle = 2 × arcsin(distance(A, V) / (2R))
  • distance(A, V) represents the distance between two points calculated from the coordinates of point A and point V
  • R is the radius of the sphere model
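A chord of length d on a sphere of radius R subtends a central angle of 2·arcsin(d / (2R)), which follows from the inscribed-angle theorems cited above. A minimal sketch of this step (function names are illustrative, not from the patent):

```python
import math

def chord_distance(p, q):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def first_angle(a, v, radius):
    """Central angle ∠AOV from the chord length |AV|: a chord of
    length d on a sphere of radius R subtends 2 * asin(d / (2R))."""
    d = chord_distance(a, v)
    return 2.0 * math.asin(d / (2.0 * radius))
```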
  • Step 303-2 Determine, according to the first angle and based on the central projection method, the first position of the sphere model vertex at its central projection point on the display plane.
  • this step uses the central projection method to determine the position of the central projection point of vertex A on the display plane as the first position; referring to FIG. 5, this is the position of the intersection A' of the extension line of OA with the display plane. Specifically, the distance OA' between the sphere center O and point A' can be calculated by the following derivation (formula 2): OA' = R / cos(angle)
  • the size of the angle has already been calculated in step 303-1, so the length of OA' can be calculated with the above formula 2 in this step; on this basis, combined with the coordinates of vertex A and the radius of the sphere model, the position of A' in the display plane is calculated.
  • a plane rectangular coordinate system can be set in the display plane with V as the origin, the vertical direction as the y-axis, and the horizontal direction perpendicular to the y-axis as the x-axis. Then, from the coordinates of vertex A, the radius of the sphere model, and the calculated OA', the vertical distance between A' and the plane of the sphere model equator, i.e. the ordinate of A' in the display plane, can be calculated; A'V can then be calculated from OA' and OV, from which the abscissa of A' in the display plane follows. This determines the position of the central projection point A' of vertex A on the display plane, that is, the first position described in this embodiment.
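Since OV = R is perpendicular to the display plane and ∠A'OV equals the first angle, OA' = R / cos(angle) and the in-plane distance A'V = R · tan(angle). A short sketch of this projection (names are illustrative):

```python
import math

def central_projection(radius, angle):
    """Central projection of a sphere vertex onto the display plane
    tangent at V: with OV = R perpendicular to the plane and
    ∠A'OV = angle, OA' = R / cos(angle) and A'V = R * tan(angle)."""
    oa_prime = radius / math.cos(angle)
    av_in_plane = radius * math.tan(angle)
    return oa_prime, av_in_plane
```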
  • if the first position were directly used as the position in the display plane corresponding to vertex A and rendered, the display ratio would be inconsistent; therefore, the technical solution provided in this embodiment further performs the following steps 303-3 to 303-6 after determining the first position, adjusting the position to solve the problem of inconsistent display ratio.
  • Step 303-3 Determine a second angle between the line connecting the vertices of the sphere model and the center of the sphere and the current line of sight in the horizontal direction.
  • this step may first determine the projection point B of vertex A on the equatorial circumference of the sphere model, and then calculate the second angle, that is, the angle of ∠AOV in the horizontal direction, denoted angle_H, from the coordinates of the projection point B in the sphere model, the coordinates of point V, and the radius of the sphere model.
  • Point B is the projection point of vertex A on the equatorial circumference of the sphere model.
  • specifically, the line connecting the sphere center O with the orthogonal projection of vertex A onto the equatorial plane of the sphere model can be extended from the center of the sphere outward; its intersection with the equatorial circumference is the projection point of vertex A on the equatorial circumference, that is, point B.
  • the coordinates of point B in the sphere model can be calculated.
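Assuming the equator of the sphere model lies in the plane y = 0 (an illustrative coordinate convention; the patent does not fix the axes), point B can be computed by dropping vertex A to the equatorial plane and rescaling the result to the sphere radius:

```python
import math

def equatorial_projection(a, radius):
    """Project vertex A radially onto the equatorial circumference,
    assuming the equator lies in the y = 0 plane: drop A to the
    equatorial plane, then scale the result back out to radius R."""
    x, y, z = a
    norm = math.hypot(x, z)
    return (radius * x / norm, 0.0, radius * z / norm)
```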
  • again using the theorems that an inscribed angle is equal to half of the central angle subtending the same arc and that an inscribed angle subtending a diameter is a right angle, the second angle angle_H can be determined by the following derivation (formula 3): angle_H = 2 × arcsin(distance(B, V) / (2R))
  • distance(B,V) represents the distance between two points calculated from the coordinates of point B and point V
  • R is the radius of the sphere model
  • Step 303-4 Adjust, according to the second angle and the horizontal field of view angle and in a manner uniformly distributed in the horizontal direction, the position in the display plane corresponding to the sphere model vertex from the first position to the second position.
  • in order that the part of the panoramic video image attached to the surface of the sphere model within the display range is evenly distributed on the display plane in the horizontal direction, the position in the display plane corresponding to each vertex within the display range can be adjusted in the horizontal direction according to the rule that the central angle is proportional to the mapped arc length.
  • FIG. 6 is a schematic diagram of uniformly distributing the vertices on the equatorial circumference of the sphere model onto the display plane.
  • the intersection of the equatorial circumference with the display plane is point V;
  • the central projection point on the display plane of a vertex C on the equatorial circumference is point C';
  • the horizontal field of view angle is denoted fov.
  • the position in the display plane corresponding to point C after adjustment is denoted C1.
  • the central angle subtended by points C and V is denoted w.
  • the distance between point C1 and point V is denoted y.
  • the distance between point C' and point V is denoted x.
  • the total display length corresponding to the horizontal field-of-view angle fov is denoted width.
  • the rule that the central angle is proportional to the arc's mapped length (i.e., the display length onto which the arc subtended by the central angle is mapped) is followed.
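Formula 4 itself is not reproduced in this excerpt; under the definitions above it can be reconstructed (as an assumption consistent with, but not verbatim from, the patent). Central projection gives x = d·tan(w), where d is the distance from the sphere center to the display plane, and the half-width satisfies width/2 = d·tan(fov/2); the uniform-distribution rule requires y/width = w/fov. Eliminating d yields the sketch below:

```python
import math

def uniform_display_distance(x, w, fov):
    """Assumed reconstruction of 'formula 4'.

    x   : distance from the central projection point C' to point V
    w   : central angle between OC and OV (radians)
    fov : horizontal field-of-view angle (radians)

    Central projection gives x = d*tan(w) and width = 2*d*tan(fov/2), where d
    is the distance from the sphere center to the display plane.  Uniform
    distribution ("central angle proportional to mapped arc length") requires
    y = (w / fov) * width; eliminating d gives the expression below.
    """
    return x * 2.0 * w * math.tan(fov / 2.0) / (fov * math.tan(w))
```

A sanity check on the reconstruction: for w = fov/2 the function returns x unchanged, i.e., a vertex at the edge of the field of view stays at the edge of the display range, as the derivation requires.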
  • this step may be performed on the basis of the above formula 4, adjusting the position in the display plane corresponding to vertex A from the first position to the second position.
  • specifically, the following processing is included, described below in conjunction with FIG. 5:
  • A'A0 can be taken as x, fov_H as fov, and the second angle angle_H between the line connecting vertex A to the center and the current line of sight in the horizontal direction as w; these are substituted into the above formula 4.
  • the resulting y value is the horizontal distance between the to-be-determined position in the display plane corresponding to vertex A and the first orthogonal projection point A0.
  • the position in the display plane corresponding to vertex A is adjusted horizontally from the first position to the second position in the direction away from the center of the display plane.
  • that is, the position corresponding to vertex A may be moved horizontally from point A' in the direction away from the center of the display plane to point A1, where the distance between point A1 and point A0 is the horizontal distance calculated in 3) above; in other words, the distance of point A1 from the y-axis of the display plane determines the abscissa of point A1 in the display plane, while the ordinate of point A1 is the same as that of the central projection point A'.
  • Step 303-5: Determine a third angle between the line connecting the sphere model vertex to the sphere center and the current line of sight in the vertical direction.
  • this step may determine the third angle angle_V between the line connecting vertex A to the sphere center O and the current line of sight in the vertical direction through the following derivation.
  • step 303-3 has already determined the first angle between the line connecting vertex A to the sphere center O and the current line of sight, as well as the second angle angle_H in the horizontal direction, so this step can calculate the third angle angle_V according to the above formula 5.
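Formula 5 is likewise not reproduced in this excerpt. One decomposition consistent with the text (an assumption, not the patent's verbatim formula) writes the unit vector along OA with azimuth angle_H and elevation angle_V relative to the current line of sight, giving the direction cosines (cos angle_V·cos angle_H, cos angle_V·sin angle_H, sin angle_V) and hence cos(angle) = cos(angle_H)·cos(angle_V). Solving for angle_V:

```python
import math

def third_angle_v(angle, angle_h):
    """Assumed reconstruction of 'formula 5'.

    angle   : first angle between line OA and the current line of sight
    angle_h : second angle, the horizontal component of that separation

    With OA at azimuth angle_H and elevation angle_V relative to the line of
    sight, the direction cosines give cos(angle) = cos(angle_H)*cos(angle_V),
    so angle_V = acos(cos(angle) / cos(angle_H)).
    """
    return math.acos(math.cos(angle) / math.cos(angle_h))
```

When angle_H is zero the whole separation is vertical and the function reduces to angle_V = angle, as expected.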
  • Step 303-6: according to the third angle and the vertical field-of-view angle, adjust the position in the display plane corresponding to the sphere model vertex from the second position to a third position, in a manner uniformly distributed in the vertical direction.
  • following the principle, described in step 303-4, of uniformly distributing the vertices on the equatorial circumference of the sphere model onto the display plane, this step can adjust the position in the display plane corresponding to vertex A from the second position determined in step 303-4 to the third position, so as to achieve uniform distribution in the vertical direction. Specifically, the following processing may be included, described below in conjunction with FIG. 5:
  • the position of vertex A is determined according to the rule that the central angle is proportional to the mapped length of the arc.
  • A'V' may be taken as x.
  • fov_V is taken as fov.
  • the third angle angle_V between the line connecting vertex A to the center O and the current line of sight in the vertical direction is taken as w, and these are substituted into the above formula 4.
  • the resulting y value is the vertical distance between the to-be-determined position in the display plane corresponding to vertex A and the second orthogonal projection point V'.
  • the position in the display plane corresponding to vertex A is adjusted vertically from the second position to the third position in the direction away from the center of the display plane.
  • that is, the position corresponding to vertex A may be moved from point A1 in the direction away from the center of the display plane to point A2, where the vertical distance between point A2 and point V' is the vertical distance calculated in 2) above; in other words, the distance of point A2 from the x-axis of the display plane determines the ordinate of point A2 in the display plane, while the abscissa of point A2 is the same as that of point A1.
  • each of the sphere model vertices in the display range may be determined to correspond to the position in the display plane in the manner described in steps 303-1 to 303-6.
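Steps 303-1 through 303-6 can be summarized as a single per-vertex routine. The sketch below is an illustration only, built on several stated assumptions (the line of sight is the +x axis, the display plane is x = d, the reconstructed proportionality mapping is used in both directions, and all helper names are hypothetical); it is not the patent's verbatim implementation.

```python
import math

def adjust_vertex(vertex_dir, fov_h=math.pi / 2, fov_v=math.pi / 2, d=1.0):
    """Sketch of steps 303-1..303-6 for one sphere model vertex.

    vertex_dir : unit direction from the sphere center O to the vertex A,
                 with the current line of sight along +x and the display
                 plane at x = d.
    Returns (x2, y2): the adjusted position A2 in the display plane, with
    the plane's origin at the central projection of the line of sight.
    """
    vx, vy, vz = vertex_dir
    # Steps 303-1/303-2: first angle and central projection point A'.
    angle = math.acos(vx)                 # angle between OA and line of sight
    px, py = d * vy / vx, d * vz / vx     # central projection onto x = d
    # Step 303-3: second angle (horizontal component of the separation).
    angle_h = math.atan2(vy, vx)
    # Step 303-5: third angle, via cos(angle) = cos(angle_H) * cos(angle_V).
    angle_v = math.acos(min(1.0, math.cos(angle) / math.cos(angle_h)))

    def remap(x, w, fov):
        # Uniform redistribution: central angle proportional to mapped length.
        return x if w == 0 else x * 2 * w * math.tan(fov / 2) / (fov * math.tan(w))

    # Step 303-4: horizontal adjustment A' -> A1 (ordinate unchanged).
    x1 = math.copysign(remap(abs(px), abs(angle_h), fov_h), px)
    # Step 303-6: vertical adjustment A1 -> A2 (abscissa unchanged).
    y2 = math.copysign(remap(abs(py), abs(angle_v), fov_v), py)
    return (x1, y2)
```

A vertex on the line of sight stays at the center of the display plane, and a vertex at the horizontal edge of the field of view stays at the edge, which matches the uniform-distribution property the steps are meant to guarantee.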
  • Step 304: For each sphere model vertex within the display range, render the position in the display plane corresponding to the vertex according to the corresponding pixel value in the panoramic video image.
  • for each such vertex, the following operations are performed: the pixel value of the corresponding pixel in the panoramic video image to be displayed (for example, its RGB value) is acquired according to the texture coordinates corresponding to the vertex, and the position in the display plane corresponding to the vertex is then rendered according to that pixel value.
  • for example, the pixel at position A2 in the preset display plane may be rendered according to the corresponding pixel value obtained from the panoramic video image based on the texture coordinates.
  • the pixels in the display plane that have not been rendered may then be rendered according to the surrounding pixels.
  • for example, such a pixel may be rendered by interpolation, or its pixel value may be obtained from the panoramic video image to be displayed according to its positional relationship with the surrounding rendered pixels. Since the positions in the display plane corresponding to the sphere model vertices within the display range are uniformly distributed in both the horizontal and vertical directions, the image displayed through the above rendering process is likewise uniformly distributed in the horizontal and vertical directions, and no scene deformation caused by inconsistent proportions will occur.
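The gap-filling just described (rendering pixels that no vertex covered from their already-rendered neighbors) can be sketched as a simple neighbor-average interpolation. This is one possible realization for illustration, not the patent's prescribed method:

```python
def fill_gaps(plane):
    """Fill unrendered pixels (None) from the average of rendered 4-neighbors.

    `plane` is a 2-D list of grayscale values or None; a single-pass sketch
    of one possible interpolation scheme.
    """
    h, w = len(plane), len(plane[0])
    out = [row[:] for row in plane]
    for i in range(h):
        for j in range(w):
            if plane[i][j] is None:
                neighbors = [plane[x][y]
                             for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                             if 0 <= x < h and 0 <= y < w and plane[x][y] is not None]
                if neighbors:
                    out[i][j] = sum(neighbors) / len(neighbors)
    return out
```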
  • the display plane in this embodiment may be the display buffer of the display device, in which case rendering to the display plane directly realizes the display of the panoramic video image; alternatively, the display plane may be a memory area corresponding to the display buffer, in which case, after the display plane is rendered, the rendered content in the display plane can be written into the display buffer through proportional scaling, thereby realizing the display of the panoramic video image.
  • steps 303 and 304 may be performed by a GPU; for example, the sphere model data and texture coordinates acquired in steps 301 and 302, together with the panoramic video image to be displayed and the current viewpoint data, may be provided as input to the GPU.
  • the GPU then performs the following operations for each sphere model vertex: if the vertex processing unit determines that the vertex currently being processed is within the display range, it determines the position in the preset display plane corresponding to the vertex in a manner uniformly distributed in the horizontal direction and the vertical direction respectively, and provides the vertex information and the corresponding position information to the pixel processing unit.
  • the pixel processing unit performs rendering according to each received piece of vertex information and corresponding position information, and the rendered content is output to the display device for display.
  • FIG. 7 is a schematic structural diagram of panoramic video display using a GPU according to an embodiment of the present application.
  • using the GPU for rendering not only achieves uniform distribution in the first direction and the second direction, but also, owing to the GPU's high-speed image processing capability, yields smooth display of the panoramic video.
  • in this embodiment, the position of the central projection point in the display plane is determined first, and the position is then adjusted in the horizontal direction and the vertical direction in turn.
  • in other embodiments, the vertical direction may be adjusted first and the horizontal direction afterwards, that is, the first direction is the vertical direction and the second direction is the horizontal direction.
  • the specific adjustment process then comprises the two parts of operations 1) and 2) below (it should be noted that, in an embodiment in which only the vertical direction is adjusted, the operations in 2) need not be performed).
  • 1) includes: determining a first angle between the line connecting the currently processed sphere model vertex to the sphere center and the current line of sight; determining, according to the first angle and based on central projection, a first position of the central projection point of the sphere model vertex in the display plane; determining a second angle between the line connecting the sphere model vertex to the sphere center and the current line of sight in the vertical direction; determining a second orthogonal projection point of the central projection point on the plane of the sphere model's equator; determining, according to the distance between the central projection point and the second orthogonal projection point as well as the second angle and the vertical field-of-view angle, and following the rule that the central angle is proportional to the mapped length of the arc, the vertical distance between the to-be-determined position in the display plane corresponding to the sphere model vertex and the second orthogonal projection point; and adjusting, according to the vertical distance, the position in the display plane corresponding to the sphere model vertex vertically from the first position to the second position in the direction away from the center of the display plane.
  • 2) includes: determining a third angle between the line connecting the sphere model vertex to the sphere center and the current line of sight in the horizontal direction; determining the intersection of the horizontal plane in which the central projection point lies with the vertical axis of the sphere model; determining a first orthogonal projection point of the intersection on the display plane; determining, according to the distance between the central projection point and the first orthogonal projection point as well as the third angle and the horizontal field-of-view angle, and following the rule that the central angle is proportional to the mapped length of the arc, the horizontal distance between the to-be-determined position in the display plane corresponding to the sphere model vertex and the first orthogonal projection point; and adjusting, according to the horizontal distance, the position in the display plane corresponding to the sphere model vertex horizontally from the second position to the third position in the direction away from the center of the display plane.
  • with the panoramic video rendering method provided above, the position in the preset display plane corresponding to each sphere model vertex within the display range is determined in a manner uniformly distributed in the first direction and the second direction respectively, and rendering is performed according to the determined positions, so the distribution of the vertices on the display plane is uniform in both the first direction and the second direction.
  • FIG. 8 is a schematic comparison of vertex distributions observed through a Wulff net according to the embodiment.
  • the intersection point in the figure represents the position distribution of the vertices of the sphere model in the display plane
  • Figure (a) is a schematic diagram of a vertex distribution obtained by using the prior art
  • Figure (b) is a schematic diagram of the vertex distribution obtained with the rendering method provided by the embodiment. It is easy to see that the vertices in figure (b) are uniformly distributed in the horizontal direction and in the vertical direction respectively, while the prior-art distribution is clearly uneven.
  • when the panoramic video is rendered in the manner provided by this embodiment, the vertices are uniformly distributed in the first direction and the second direction, that is, the display ratios of the scenes in different display areas are consistent in both the first direction and the second direction. The scene deformation caused by the inconsistent display ratios of the prior art can therefore be effectively mitigated, so that the display effect of the panoramic video better matches the habits of the human eye and gives the user a more realistic viewing experience.
  • FIG. 9 is a comparison of panoramic video image display effects provided by the embodiment.
  • figures (a) and (b) show the same frame of a panoramic video image.
  • figure (a) is the display effect obtained with the prior art.
  • figure (b) is the display effect obtained with the rendering method provided in this embodiment. It is easy to see that the scenery at the edges of the image in figure (a) suffers severe stretching deformation, resulting in obvious distortion, while figure (b) shows a consistent display effect, which can give the user a more realistic viewing experience.
  • in the embodiment described above, the positions in the display plane corresponding to the sphere model vertices are adjusted in both the horizontal direction and the vertical direction.
  • in other implementations, the adjustment may be performed only in the horizontal direction, or only in the vertical direction.
  • for example, steps 303-5 and 303-6 provided in this embodiment may be omitted, and A1 determined in step 303-4 may be used as the position in the display plane corresponding to vertex A. Since the display ratio in one direction is corrected so as to be consistent, there is still a significant beneficial effect compared with the prior art, which helps to improve the user's viewing experience.
  • in summary, with the panoramic video rendering method provided in this embodiment, the positions in the display plane corresponding to the sphere model vertices within the display range are uniformly distributed at least in the first direction, that is, at least in the first direction the display ratios of the scenes in different display areas are consistent. The scene deformation caused by inconsistent display ratios of the panoramic video image can therefore be effectively mitigated or solved, so that the display effect of the panoramic video better matches the habits of the human eye and gives the user a more realistic viewing experience.
  • FIG. 10 is a schematic diagram of an embodiment of a panoramic video rendering apparatus of the present application. Since the device embodiment is basically similar to the method embodiment, the description is relatively simple, and the relevant parts can be referred to the description of the method embodiment.
  • the device embodiments described below are merely illustrative.
  • the panoramic video rendering device of the embodiment includes: a model data and texture coordinate acquiring unit 1001, configured to acquire sphere model data for displaying a panoramic video and texture coordinates corresponding to the sphere model vertices; an image and viewpoint data acquiring unit 1002, configured to acquire the panoramic video image to be displayed and the current viewpoint data; a position determining unit 1003, configured to determine, in a manner uniformly distributed at least in a first direction and according to the sphere model data, the position in a preset display plane corresponding to each sphere model vertex within the display range, wherein the display range is determined according to the current viewpoint data and the sphere model data; and a rendering unit 1004, configured to render, for each sphere model vertex within the display range, the position in the display plane corresponding to that vertex according to the corresponding pixel value of that vertex in the panoramic video image, wherein the corresponding pixel value is acquired according to the texture coordinates corresponding to that vertex.
  • the position determining unit is specifically configured to determine, in a manner uniformly distributed in the first direction and the second direction respectively and according to the sphere model data, the position in the preset display plane corresponding to each sphere model vertex within the display range.
  • the location determining unit includes:
  • a loop control subunit, configured to trigger, for each sphere model vertex within the display range, the following subunits to work:
  • a first angle determining subunit, configured to determine a first angle between the line connecting the sphere model vertex to the sphere center and the current line of sight;
  • a first position determining subunit, configured to determine, according to the first angle and based on central projection, a first position of the central projection point of the sphere model vertex in the display plane;
  • a second angle determining subunit, configured to determine a second angle between the line connecting the sphere model vertex to the sphere center and the current line of sight in the first direction;
  • a first position adjusting subunit, configured to adjust, according to the second angle and the field-of-view angle corresponding to the first direction and in a manner uniformly distributed in the first direction, the position in the display plane corresponding to the sphere model vertex from the first position to a second position.
  • the manner of uniformly distributing at least in the first direction includes: a manner of uniformly distributing in the first direction and the second direction, respectively;
  • the location determining unit further includes:
  • a third angle determining subunit, configured to determine, after the first position adjusting subunit has worked, a third angle between the line connecting the sphere model vertex to the sphere center and the current line of sight in the second direction;
  • a second position adjusting subunit, configured to adjust, according to the third angle and the field-of-view angle corresponding to the second direction and in a manner uniformly distributed in the second direction, the position in the display plane corresponding to the sphere model vertex from the second position to a third position.
  • when the first direction is the horizontal direction, the first position adjusting subunit includes:
  • an intersection determining subunit, configured to determine the intersection of the horizontal plane in which the central projection point lies with the vertical axis of the sphere model;
  • a first orthogonal projection point determining subunit, configured to determine a first orthogonal projection point of the intersection on the display plane;
  • a horizontal distance calculating subunit, configured to determine, according to the distance between the central projection point and the first orthogonal projection point as well as the second angle and the horizontal field-of-view angle, and following the rule that the central angle is proportional to the mapped length of the arc, the horizontal distance between the to-be-determined position in the display plane corresponding to the sphere model vertex and the first orthogonal projection point;
  • a first position horizontal adjustment executing subunit, configured to adjust horizontally, according to the horizontal distance, the position in the display plane corresponding to the sphere model vertex from the first position to the second position in the direction away from the center of the display plane.
  • when the first direction is the vertical direction, the first position adjusting subunit includes:
  • a second orthogonal projection point determining subunit, configured to determine a second orthogonal projection point of the central projection point on the plane of the sphere model's equator;
  • a vertical distance determining subunit, configured to determine, according to the distance between the central projection point and the second orthogonal projection point as well as the second angle and the vertical field-of-view angle, and following the rule that the central angle is proportional to the mapped length of the arc, the vertical distance between the to-be-determined position in the display plane corresponding to the sphere model vertex and the second orthogonal projection point;
  • a first position vertical adjustment executing subunit, configured to adjust vertically, according to the vertical distance, the position in the display plane corresponding to the sphere model vertex from the first position to the second position in the direction away from the center of the display plane.
  • when the first direction is the horizontal direction and the second direction is the vertical direction, the second position adjusting subunit includes:
  • a second orthogonal projection point determining subunit, configured to determine a second orthogonal projection point of the central projection point on the plane of the sphere model's equator;
  • a vertical distance determining subunit, configured to determine, according to the distance between the central projection point and the second orthogonal projection point as well as the third angle and the vertical field-of-view angle, and following the rule that the central angle is proportional to the mapped length of the arc, the vertical distance between the to-be-determined position in the display plane corresponding to the sphere model vertex and the second orthogonal projection point;
  • a second position vertical adjustment executing subunit, configured to adjust vertically, according to the vertical distance, the position in the display plane corresponding to the sphere model vertex from the second position to the third position in the direction away from the center of the display plane.
  • when the first direction is the vertical direction and the second direction is the horizontal direction, the second position adjusting subunit includes:
  • an intersection determining subunit, configured to determine the intersection of the horizontal plane in which the central projection point lies with the vertical axis of the sphere model;
  • a first orthogonal projection point determining subunit, configured to determine a first orthogonal projection point of the intersection on the display plane;
  • a horizontal distance calculating subunit, configured to determine, according to the distance between the central projection point and the first orthogonal projection point as well as the third angle and the horizontal field-of-view angle, and following the rule that the central angle is proportional to the mapped length of the arc, the horizontal distance between the to-be-determined position in the display plane corresponding to the sphere model vertex and the first orthogonal projection point;
  • a second position horizontal adjustment executing subunit, configured to adjust horizontally, according to the horizontal distance, the position in the display plane corresponding to the sphere model vertex from the second position to the third position in the direction away from the center of the display plane.
  • the device further includes:
  • a first direction determining unit, configured to determine, before the position determining unit works, the first direction as follows: if the horizontal field-of-view angle is not smaller than the vertical field-of-view angle, the first direction is the horizontal direction; otherwise, the first direction is the vertical direction.
  • the sphere model data includes: the coordinates of each sphere model vertex.
  • the functions of the position determining unit and the rendering unit are performed by a GPU.
  • the device is deployed in a panoramic video playing device, and the panoramic video playing device comprises: a personal computer, a tablet computer, a smart phone, or a display helmet.
  • the present application also provides an electronic device; an embodiment of the electronic device is described as follows:
  • FIG. 11 shows a schematic diagram of an embodiment of an electronic device of the present application.
  • the electronic device includes: a display 1101; a processor 1102; and a memory 1103 configured to store instructions; wherein the processor is coupled to the memory and is configured to read the instructions stored in the memory and perform the following operations:
  • acquiring sphere model data for displaying a panoramic video and texture coordinates corresponding to the sphere model vertices; acquiring the panoramic video image to be displayed and the current viewpoint data; determining, in a manner uniformly distributed at least in a first direction and according to the sphere model data, the position in a preset display plane corresponding to each sphere model vertex within the display range, wherein the display range is determined according to the current viewpoint data and the sphere model data; and, for each sphere model vertex within the display range, rendering the position in the display plane corresponding to that vertex according to the corresponding pixel value of that vertex in the panoramic video image, wherein the corresponding pixel value is acquired according to the texture coordinates corresponding to that vertex.
  • optionally, the determining, in a manner uniformly distributed at least in the first direction and according to the sphere model data, the position in the preset display plane corresponding to each sphere model vertex within the display range includes:
  • determining, in a manner uniformly distributed in the first direction and a second direction respectively and according to the sphere model data, the position in the preset display plane corresponding to each sphere model vertex within the display range.
  • optionally, the manner of uniform distribution at least in the first direction includes: a manner of uniform distribution in the first direction and the second direction respectively.
  • optionally, before the position in the preset display plane corresponding to each sphere model vertex within the display range is determined, the operations further include determining the first direction as follows: if the horizontal field-of-view angle is not smaller than the vertical field-of-view angle, the first direction is the horizontal direction; otherwise, the first direction is the vertical direction.
  • a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • the memory may include non-persistent memory, random access memory (RAM), and/or non-volatile memory in a computer readable medium, such as read only memory (ROM) or flash memory.
  • Memory is an example of a computer readable medium.
  • Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage can be implemented by any method or technology.
  • the information can be computer-readable instructions, data structures, program modules, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic tape cartridges, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
  • as defined herein, computer-readable media does not include transitory computer-readable media, such as modulated data signals and carrier waves.
  • embodiments of the present application can be provided as a method, system, or computer program product.
  • the present application can take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment in combination of software and hardware.
  • the application can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.


Abstract

The present application discloses a panoramic video rendering method, apparatus, and electronic device. The panoramic video rendering method includes: acquiring sphere model data for displaying a panoramic video and texture coordinates corresponding to the sphere model vertices; acquiring the panoramic video image to be displayed and the current viewpoint data; determining, in a manner uniformly distributed at least in a first direction and according to the sphere model data, the position in a preset display plane corresponding to each sphere model vertex within the display range; and, for each sphere model vertex within the display range, rendering the position in the display plane corresponding to that vertex according to the corresponding pixel value of that vertex in the panoramic video image. With the above method, the problem of scene deformation caused by inconsistent display ratios of the panoramic video image can be effectively mitigated or solved, so that the display effect better matches the habits of the human eye and gives the user a better viewing experience.

Description

Panoramic Video Rendering Method, Apparatus, and Electronic Device
This application claims priority to Chinese Patent Application No. 201710007618.4, filed on January 5, 2017 and entitled "Panoramic Video Rendering Method, Apparatus, and Electronic Device", the entire content of which is incorporated herein by reference.
Technical Field
The present application relates to panoramic video technology, and in particular to a panoramic video rendering method. The present application also relates to a panoramic video rendering apparatus and an electronic device.
Background
In recent years, with the development of personal computing devices and mobile smart terminals, and in particular of cloud computing technology, the application of panoramic video technology in fields such as entertainment, gaming, and interaction has advanced rapidly. With panoramic video technology, a user can watch video in any direction within a full 360-degree range. For example, a user wearing a display helmet can see the panoramic video images in different directions simply by turning the head, which gives the user a truly immersive feeling.
An overall panoramic video solution usually comprises two major stages: production and playback. In the production stage, a panoramic video can be produced with panoramic shooting equipment together with image stitching software; in the playback stage, a dedicated panoramic video player is usually required. The panoramic video player reads panoramic video images from the panoramic video and renders them for display by attaching the read images to the surface of a sphere model. For example, the viewing position can usually be set at the center of the sphere model; the user controls the line-of-sight direction by turning a display helmet or moving a terminal device used for watching the panoramic video, and the player controls, according to the line-of-sight direction, the content of the panoramic video image shown to the user, thereby providing an immersive viewing experience.
When a panoramic video is played, because the sphere model itself is curved, attaching a two-dimensional image to the surface of the sphere model and rendering the part within the display range onto the display plane with a projection technique such as central projection causes the display ratios of the scenes in different display areas to be inconsistent: the proportions are distorted in both the horizontal and vertical directions, and scenes near the border areas are often stretched and deformed. Referring to FIG. 1, after spherical regions of the same length are mapped onto the display plane, the displayed length grows the closer the region is to the border. This phenomenon becomes more severe as the field of view (fov) increases, degrading the user's viewing experience.
Summary
The present application provides a panoramic video rendering method to solve the problem of existing panoramic video rendering technology that the panoramic video image is deformed because of inconsistent display ratios. Embodiments of the present application further provide a panoramic video rendering apparatus and an electronic device.
The present application provides a panoramic video rendering method, including:
acquiring sphere model data for displaying a panoramic video and texture coordinates corresponding to the sphere model vertices;
acquiring the panoramic video image to be displayed and the current viewpoint data;
determining, in a manner uniformly distributed at least in a first direction and according to the sphere model data, the position in a preset display plane corresponding to each sphere model vertex within the display range, wherein the display range is determined according to the current viewpoint data and the sphere model data; and
for each sphere model vertex within the display range, rendering the position in the display plane corresponding to that vertex according to the corresponding pixel value of that vertex in the panoramic video image, wherein the corresponding pixel value is acquired according to the texture coordinates corresponding to that vertex.
Optionally, the determining, in a manner uniformly distributed at least in the first direction and according to the sphere model data, the position in the preset display plane corresponding to each sphere model vertex within the display range includes:
determining, in a manner uniformly distributed in the first direction and a second direction respectively and according to the sphere model data, the position in the preset display plane corresponding to each sphere model vertex within the display range.
Optionally, the determining, in a manner uniformly distributed at least in the first direction and according to the sphere model data, the position in the preset display plane corresponding to each sphere model vertex within the display range includes:
performing, for each sphere model vertex within the display range, the following position adjustment operations using the sphere model data:
determining a first angle between the line connecting the sphere model vertex to the sphere center and the current line of sight;
determining, according to the first angle and based on central projection, a first position of the central projection point of the sphere model vertex in the display plane;
determining a second angle between the line connecting the sphere model vertex to the sphere center and the current line of sight in the first direction; and
adjusting, according to the second angle and the field-of-view angle corresponding to the first direction and in a manner uniformly distributed in the first direction, the position in the display plane corresponding to the sphere model vertex from the first position to a second position.
Optionally, the manner of uniform distribution at least in the first direction includes: a manner of uniform distribution in the first direction and the second direction respectively;
in the position adjustment operations performed for each sphere model vertex within the display range, after the position in the display plane corresponding to the sphere model vertex is adjusted from the first position to the second position, the operations further include:
determining a third angle between the line connecting the sphere model vertex to the sphere center and the current line of sight in the second direction; and
adjusting, according to the third angle and the field-of-view angle corresponding to the second direction and in a manner uniformly distributed in the second direction, the position in the display plane corresponding to the sphere model vertex from the second position to a third position.
Optionally, when the first direction is the horizontal direction, the adjusting, according to the second angle and the field-of-view angle corresponding to the first direction and in a manner uniformly distributed in the first direction, the position in the display plane corresponding to the sphere model vertex from the first position to the second position includes:
determining the intersection of the horizontal plane in which the central projection point lies with the vertical axis of the sphere model;
determining a first orthogonal projection point of the intersection on the display plane;
determining, according to the distance between the central projection point and the first orthogonal projection point as well as the second angle and the horizontal field-of-view angle, and following the rule that the central angle is proportional to the mapped length of the arc, the horizontal distance between the to-be-determined position in the display plane corresponding to the sphere model vertex and the first orthogonal projection point; and
adjusting horizontally, according to the horizontal distance, the position in the display plane corresponding to the sphere model vertex from the first position to the second position in the direction away from the center of the display plane.
Optionally, when the first direction is the vertical direction, the adjusting, according to the second angle and the field-of-view angle corresponding to the first direction and in a manner uniformly distributed in the first direction, the position in the display plane corresponding to the sphere model vertex from the first position to the second position includes:
determining a second orthogonal projection point of the central projection point on the plane in which the equator of the sphere model lies;
determining, according to the distance between the central projection point and the second orthogonal projection point as well as the second angle and the vertical field-of-view angle, and following the rule that the central angle is proportional to the mapped length of the arc, the vertical distance between the to-be-determined position in the display plane corresponding to the sphere model vertex and the second orthogonal projection point; and
adjusting vertically, according to the vertical distance, the position in the display plane corresponding to the sphere model vertex from the first position to the second position in the direction away from the center of the display plane.
可选的,当所述第一方向为水平方向、第二方向为垂直方向时,所述根据所述第三夹角、以及对应于第二方向的视场角,采用在第二方向均匀分布的方式,将该球体模型顶点对应于所述显示平面中的位置从所述第二位置调整至第三位置,包括:
确定所述中心投影点在球体模型赤道所在平面的第二正交投影点;
根据所述中心投影点与所述第二正交投影点之间的距离,以及所述第三夹角和垂直视场角,遵循圆心角与圆弧映射长度成正比的规则,确定该球体模型顶点对应于所述显示平面中的待确定位置与所述第二正交投影点之间的垂直距离;
根据所述垂直距离,将该球体模型顶点对应于所述显示平面中的位置从第二位置沿远离显示平面中心的方向垂直调整至第三位置。
可选的,当所述第一方向为垂直方向、第二方向为水平方向时,所述根据所述第三夹角、以及对应于第二方向的视场角,采用在第二方向均匀分布的方式,将该球体模型顶点对应于所述显示平面中的位置从所述第二位置调整至第三位置,包括:
确定所述中心投影点所在水平面与球体模型垂直轴线的交点;
确定所述交点在所述显示平面上的第一正交投影点;
根据所述中心投影点与所述第一正交投影点之间的距离、以及所述第三夹角和水平视场角,遵循圆心角与圆弧映射长度成正比的规则,确定该球体模型顶点对应于所述显示平面中的待确定位置与所述第一正交投影点之间的水平距离;
根据所述水平距离,将该球体模型顶点对应于所述显示平面中的位置从第二位置沿远离显示平面中心的方向水平调整至第三位置。
可选的,在所述采用至少在第一方向均匀分布的方式,根据所述球体模型数据确定处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置之前,还包括:采用如下方式确定第一方向:若水平视场角不小于垂直视场角,则所述第一方向为水平方向,否则所述第一方向为垂直方向。
可选的,所述球体模型数据包括:各球体模型顶点的坐标。
可选的,所述采用至少在第一方向均匀分布的方式,根据所述球体模型数据确定处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置,以及针对处于显示范围内的每个球体模型顶点,根据该顶点对应于所述全景视频图像中的相应像素值对该顶点对应于显示平面中的位置进行渲染的步骤,由GPU执行。
可选的,所述方法在全景视频播放设备中实施,所述全景视频播放设备包括:个人电脑、平板电脑、智能手机、或显示头盔。
相应的,本申请还提供一种全景视频渲染装置,包括:
模型数据及纹理坐标获取单元,用于获取用于展示全景视频的球体模型数据以及球体模型顶点对应的纹理坐标;
图像及视点数据获取单元,用于获取待展示的全景视频图像和当前视点数据;
位置确定单元,用于采用至少在第一方向均匀分布的方式,根据所述球体模型数据确定处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置,其中,所述显示范围是根据所述当前视点数据以及球体模型数据确定的;
渲染单元,用于针对处于显示范围内的每个球体模型顶点,根据该顶点对应于所述全景视频图像中的相应像素值对该顶点对应于显示平面中的位置进行渲染,其中,所述相应像素值是根据该顶点对应的纹理坐标获取的。
可选的,所述位置确定单元,具体用于采用在第一方向和第二方向分别均匀分布的方式,根据所述球体模型数据确定处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置。
可选的,所述位置确定单元包括:
循环控制子单元,用于针对处于显示范围内的每个球体模型顶点,触发以下子单元工作:
第一夹角确定子单元,用于确定该球体模型顶点到球心的连线与当前视线之间的第一夹角;
第一位置确定子单元,用于根据所述第一夹角,基于中心投影方式确定该球体模型顶点在所述显示平面的中心投影点的第一位置;
第二夹角确定子单元,用于确定该球体模型顶点到球心的连线与当前视线在所述第一方向的第二夹角;
第一位置调整子单元,用于根据所述第二夹角、以及对应于第一方向的视场角,采用在第一方向均匀分布的方式,将该球体模型顶点对应于所述显示平面中的位置从第一位置调整至第二位置。
可选的,所述至少在第一方向均匀分布的方式包括:在第一方向和第二方向分别均匀分布的方式;
所述位置确定单元还包括:
第三夹角确定子单元,用于在所述第一位置调整子单元工作后,确定该球体模型顶点到球心的连线与当前视线在所述第二方向的第三夹角;
第二位置调整子单元,用于根据所述第三夹角、以及对应于第二方向的视场角,采用在第二方向均匀分布的方式,将该球体模型顶点对应于所述显示平面中的位置从所述第二位置调整至第三位置。
可选的,所述装置还包括:
第一方向确定单元,用于在所述位置确定单元工作前,采用如下方式确定第一方向:若水平视场角不小于垂直视场角,则所述第一方向为水平方向,否则所述第一方向为垂直方向。
此外,本申请还提供一种电子设备,包括:
显示器;
处理器;
存储器,用于存储指令;
其中,所述处理器耦合于所述存储器,用于读取所述存储器存储的指令,并执行如下操作:获取用于展示全景视频的球体模型数据以及球体模型顶点对应的纹理坐标;获取待展示的全景视频图像和当前视点数据;采用至少在第一方向均匀分布的方式,根据所述球体模型数据确定处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置,其中,所述显示范围是根据所述当前视点数据以及球体模型数据确定的;针对处于显示范围内的每个球体模型顶点,根据该顶点对应于所述全景视频图像中的相应像素值对该顶点对应于显示平面中的位置进行渲染,其中,所述相应像素值是根据该顶点对应的纹理坐标获取的。
可选的,所述采用至少在第一方向均匀分布的方式,根据所述球体模型数据确定处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置,包括:
采用在第一方向和第二方向分别均匀分布的方式,根据所述球体模型数据确定处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置。
可选的,所述采用至少在第一方向均匀分布的方式,根据所述球体模型数据确定处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置,包括:
针对处于显示范围内的每个球体模型顶点,利用所述球体模型数据执行如下位置调整操作:
确定该球体模型顶点到球心的连线与当前视线之间的第一夹角;
根据所述第一夹角,基于中心投影方式确定该球体模型顶点在所述显示平面的中心投影点的第一位置;
确定该球体模型顶点到球心的连线与当前视线在所述第一方向的第二夹角;
根据所述第二夹角、以及对应于第一方向的视场角,采用在第一方向均匀分布的方式,将该球体模型顶点对应于所述显示平面中的位置从第一位置调整至第二位置。
可选的,所述至少在第一方向均匀分布的方式包括:在第一方向和第二方向分别均匀分布的方式;
在针对处于显示范围内的每个球体模型顶点执行的位置调整操作中,在将该球体模型顶点对应于所述显示平面中的位置从第一位置调整至第二位置之后,包括:
确定该球体模型顶点到球心的连线与当前视线在所述第二方向的第三夹角;
根据所述第三夹角、以及对应于第二方向的视场角,采用在第二方向均匀分布的方式,将该球体模型顶点对应于所述显示平面中的位置从所述第二位置调整至第三位置。
可选的,在所述采用至少在第一方向均匀分布的方式,根据所述球体模型数据确定处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置之前,还包括:采用如下方式确定第一方向:若水平视场角不小于垂直视场角,则所述第一方向为水平方向,否则所述第一方向为垂直方向。
与现有技术相比,本申请具有以下优点:
本申请提供的全景视频渲染方法,首先获取用于展示全景视频的球体模型数据和各顶点对应的纹理坐标、以及待展示的全景视频图像和当前视点数据;然后根据上述获取的信息,采用至少在第一方向均匀分布的方式,确定处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置、并对所述位置进行渲染。
采用上述方法渲染全景视频图像,由于处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置至少在第一方向是均匀分布的,即:至少在第一方向上不同显示区域的景物的显示比例都是一致的,从而可以有效地改善或者解决全景视频图像由于显示比例不一致导致的景物变形的问题,使得全景视频的显示效果更符合人眼的习惯,给用户带来更为真实的观看体验。
附图说明
图1是现有全景视频渲染技术导致显示比例不一致的示意图;
图2是水平视场角和垂直视场角的示意图;
图3是本申请的一种全景视频渲染方法的实施例的流程图;
图4是本申请实施例提供的确定球体模型顶点对应于显示平面中的位置的处理流程图;
图5是本申请实施例提供的球体模型及显示平面的示意图;
图6是本申请实施例提供的将球体模型赤道圆周上的点均匀分布到显示平面上的示意图;
图7是本申请实施例提供的利用GPU进行全景视频显示的架构示意图;
图8是本申请实施例提供的用Wulff net图观察到的顶点分布的对比示意图;
图9是本申请实施例提供的全景视频图像显示效果的对比示意图;
图10是本申请的一种全景视频渲染装置的实施例的示意图;
图11是本申请的一种电子设备的实施例的示意图。
具体实施方式
在下面的描述中阐述了很多具体细节以便于充分理解本申请。但是,本申请能够以很多不同于在此描述的其它方式来实施,本领域技术人员可以在不违背本申请内涵的情况下做类似推广,因此,本申请不受下面公开的具体实施的限制。
在本申请中,分别提供了一种全景视频渲染方法、一种全景视频渲染装置、以及一种电子设备。在下面的实施例中逐一进行详细说明。为了便于理解,先对本申请提供的技术方案作简要说明。
现有的全景视频渲染技术,往往会造成不同显示区域景物在水平和垂直两个方向的显示比例都不一致,为了改善这一问题,本申请提供了不同于现有技术的全景视频渲染方法,所述方法可以应用于全景视频播放设备中,所述全景视频播放设备包括:个人电脑、平板电脑、智能手机、或者显示头盔等。
本申请提供的技术方案在渲染全景视频图像时,采用至少在第一方向均匀分布的方式,确定处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置,从而保证了处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置至少在第一方向是均匀分布的,即:至少在第一方向上保证了显示比例的一致性,可以有效地改善全景视频图像显示比例不一致的问题,给用户带来更为真实的观看体验。其中,所述第一方向包括:水平方向,或者垂直方向。
在进行全景视频展示时,通常显示给用户的并不是整个全景视频图像,而是处于球体模型显示范围内的全景视频图像,所述显示范围通常可以由观看者的视线方向、预设的视场角fov、以及球体模型半径(可以根据球体模型数据获知,关于球体模型数据请参见后面步骤301中的说明)确定。所述预设的视场角,包括对应于水平方向的视场角(简称水平视场角)和对应于垂直方向的视场角(简称垂直视场角),请参见图2,其中的∠AOB是水平视场角,∠BOC是垂直视场角。
优选地,考虑到随着fov角度的增大,靠近相应边界处的显示比例的差异会更为明显、导致图像变形更为严重,因此所述第一方向可以采用如下方式确定:将水平视场角和垂直视场角中较大者所对应的方向,作为第一方向,即:若水平视场角不小于垂直视场角,则所述第一方向为水平方向,否则所述第一方向为垂直方向。采用这种方式,通过对变形严重的方向进行比例修复,往往会对显示效果产生比较显著的改善。以图2为例,可以选择水平方向作为第一方向。
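该优选的第一方向判定逻辑十分简单,可以示意如下(Python 示意代码,并非本申请实施例的组成部分,函数名为假设):

```python
def choose_first_direction(fov_h, fov_v):
    """若水平视场角不小于垂直视场角,第一方向取水平方向,否则取垂直方向。"""
    return "horizontal" if fov_h >= fov_v else "vertical"

# 以图2的情形为例,水平视场角较大,因此选择水平方向作为第一方向
assert choose_first_direction(100, 60) == "horizontal"
assert choose_first_direction(60, 100) == "vertical"
```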
优选地,为了提供更好的显示效果,在渲染全景视频图像时,可以采用在第一方向和第二方向分别均匀分布的优选实施方式,确定处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置,然后再进行渲染。其中,所述第一方向和第二方向可以分别为水平方向和垂直方向。采用这种方式,可以同时修复两个方向上的显示比例不一致的问题,使不同区域的景物在两个方向的显示比例都一致,从而解决因为显示比例的差异导致图像变形的问题。
由此可见,本技术方案可以仅在一个方向上对显示比例进行修复,也可以在两个方向都进行修复。对于后者,在具体实施时,可以针对处于显示范围内的每个球体模型顶点在预设显示平面中的位置,先进行水平方向的调整,然后进行垂直方向的调整,也可以采用相反的顺序,即:先进行垂直方向的调整、然后进行水平方向的调整。
在下面的实施例中,重点以依次对水平方向和垂直方向进行调整为例(即:第一方向为水平方向,第二方向为垂直方向),对本申请提供的技术方案的实施方式进行说明。请参考图3,其为本申请的一种全景视频渲染方法的实施例的流程图。所述方法包括如下步骤:
步骤301、获取用于展示全景视频的球体模型数据以及球体模型顶点对应的纹理坐标。
为了提供可以从上下左右任意角度全方位观看的观景效果,全景视频的渲染过程,通常需要以将全景视频图像贴附在球体模型表面的方式、对处于显示范围的全景视频图像进行渲染。因此针对待展示的全景视频,通常可以预先设定相应的球体模型,确定球体模型数据及球体模型顶点对应的纹理坐标,本步骤则获取关于球体模型的上述信息。
所述球体模型数据包括:处于球体模型表面上的各个点的坐标,其中,球体模型表面上的每个点,通常称为球体模型顶点,在下文的描述中可以简称顶点。例如:在以球心为原点的空间直角坐标系中,每个球体模型顶点的坐标可以用(x,y,z)的形式表示。
所述球体模型顶点对应的纹理坐标,则是指球体模型顶点对应于全景视频图像中的坐标,通常可以用(x,y)的形式表示。全景视频图像通常是二维图像,其中每个像素点在图像中的位置信息可以用其坐标表示。由于全景视频图像以贴附于球体模型表面的方式进行渲染显示,因此每个球体模型顶点都可以与全景视频图像中的像素点相对应,而像素点的坐标通常就称为相应球体模型顶点对应的纹理坐标。
本步骤可以获取上述球体模型数据以及每个球体模型顶点对应的纹理坐标。
步骤302、获取待展示的全景视频图像和当前视点数据。
全景视频通常是由一系列的全景视频图像组成,全景视频的展示过程,就是依次将处于显示范围内的全景视频图像渲染到预设的显示平面、并在显示设备上显示的过程。因此,本步骤获取当前待展示的一帧全景视频图像。
此外,本步骤还要获取当前视点数据,即:获取表明用户当前视线方向的数据。例如:球体模型采用空间直角坐标系,而观看位置通常设置在球心处,因此,视线方向可以用与空间直角坐标系中的任意两个轴的夹角来表示。
步骤303、采用在第一方向和第二方向分别均匀分布的方式,根据所述球体模型数据确定处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置。
本步骤采用在第一方向和第二方向分别均匀分布的方式,根据所述球体模型数据确定处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置。其中,第一方向和第二方向分别为水平方向和垂直方向。
具体的,本步骤可以根据当前视点数据、球体模型的半径(可以根据球体模型数据确定)、并结合预设的水平视场角和垂直视场角等信息确定球体模型表面的显示范围,即:用户视线能够观看到的范围,然后针对处于显示范围内的每个球体模型顶点,按照在水平方向和垂直方向分别均匀分布的方式,确定其对应于预设显示平面中的位置,从而为按照相同比例渲染显示全景视频图像做好准备。
其中,确定处于显示范围内的某一球体模型顶点对应于显示平面中的位置的处理过程可以包括以下步骤303-1至303-6,下面结合图4进行描述。在描述的过程中,为了便于理解,请参见图5给出的球体模型及显示平面的示意图,其中,O为球体模型的球心,处于球体模型表面的A点为:当前待确定位置的球体模型顶点(以下简称该球体模型顶点),V点是对应于当前视点数据的当前视线方向与球体模型表面的交点,显示平面与球体模型相切,并且OV垂直于显示平面。
步骤303-1、确定该球体模型顶点到球心的连线与当前视线之间的第一夹角。
请参见图5,本步骤确定球体模型顶点A(以下简称顶点A或A点)到球心O的连线与OV之间第一夹角的大小,即∠AOV的大小。为了便于描述,将所述第一夹角记为angle。根据以下数学定律:一条弧所对圆周角等于它所对圆心角的一半、以及直径所对的圆周角是直角,可以通过以下推导过程确定所述第一夹角angle。
sin(angle / 2) = distance(A, V) / (2R)
angle = 2 × arcsin(distance(A, V) / (2R))  (公式1)
其中,distance(A,V)表示根据A点和V点的坐标计算得到的两点之间的距离,R为球体模型的半径,该信息可以根据球体模型数据获取。
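上述推导由弦长 distance(A,V) 与半径 R 求得 angle = 2×arcsin(distance(A,V)/(2R)),可以用如下 Python 代码示意(示意性质,并非本申请实施例的组成部分,函数名为假设):

```python
import math

def first_angle(A, V, R):
    """第一夹角 angle = 2 * arcsin(distance(A, V) / (2R))。
    A、V 为球面上两点的 (x, y, z) 坐标,R 为球体模型半径。"""
    d = math.dist(A, V)  # 弦长 distance(A, V)
    return 2.0 * math.asin(d / (2.0 * R))

# 示例:V 在 z 轴正方向,A 在 y 轴正方向,两条半径的夹角应为 90°
angle = first_angle((0.0, 1.0, 0.0), (0.0, 0.0, 1.0), 1.0)
```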
步骤303-2、根据所述第一夹角,基于中心投影方式确定该球体模型顶点在所述显示平面的中心投影点的第一位置。
本步骤采用中心投影方式,确定顶点A在显示平面的中心投影点的位置,作为第一位置。请参见图5,即:确定OA的延长线与显示平面的交点A'的位置。具体的,可以通过以下推导过程计算出球心O与A'点之间的距离OA'。
OA' = OV / cos(angle) = R / cos(angle)  (公式2)
在步骤303-1中已经计算得到angle的大小,因此本步骤可以先利用上述公式2计算得到OA'的长度,在此基础上,结合顶点A的坐标和球体模型的半径,计算得到A'在显示平面中的位置。
通常在具体实施时,可以在显示平面中设置以V为坐标原点、以垂直方向为y轴、以垂直于y轴的水平方向为x轴的平面直角坐标系,那么可以根据顶点A的坐标与球体模型半径、以及已计算得到的OA',计算出A'与球体模型赤道所在平面的垂直距离,即:A'在显示平面中的纵坐标,然后根据OA'以及OV可以计算得到A'V,根据所述垂直距离和A'V可以计算得到A'在显示平面中的横坐标,从而确定了顶点A在显示平面的中心投影点A'的位置,即本实施例所述的第一位置。
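在具体实施时,若取 V = (0, 0, R)、显示平面与球体在 V 点相切(该坐标系约定为示意所作的假设),由 OA' = R/cos(angle) 可知投影点 A' = A / cos(angle),计算可示意如下(Python 示意代码,并非本申请实施例的组成部分):

```python
import math

def central_projection_point(A, R):
    """沿 OA 延长线将球面顶点 A 投影到过 V=(0,0,R)、与 OV 垂直的显示平面上。
    cos(angle) = Az / R,投影点 A' = A / cos(angle),满足 |OA'| = R / cos(angle)。"""
    cos_angle = A[2] / R
    return tuple(a / cos_angle for a in A)

# 示例:赤道上与视线成 45° 的顶点,其投影点到球心的距离应为 R / cos(45°) = sqrt(2)
A = (math.sin(math.pi / 4), 0.0, math.cos(math.pi / 4))
A_prime = central_projection_point(A, 1.0)
```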
现有技术在中心投影模式下,将第一位置作为顶点A对应于显示平面中的位置,并进行渲染;本实施例提供的技术方案,在确定第一位置后,还要进一步执行下述步骤303-3至303-6对该位置进行调整,从而解决显示比例不一致的问题。
步骤303-3、确定该球体模型顶点到球心的连线与当前视线在水平方向的第二夹角。
请参见图5,本步骤可以先确定顶点A在球体模型赤道圆周上的投影点B,然后根据所述投影点B在球体模型中的坐标、V点的坐标、以及球体模型的半径,计算出∠AOV在水平方向的第二夹角,记作angle_H。
B点是顶点A在球体模型赤道圆周上的投影点,具体的,可以将球心O点与顶点A在球体模型赤道平面的正交投影点的连线、从球心向球体模型外部延伸,与赤道圆周的交点即为顶点A在球体模型赤道圆周上的投影点,即B点。根据顶点A在球体模型中的坐标、球体模型的半径,可以计算得到B点在球体模型中的坐标。
在得到投影点B在球体模型中的坐标后,根据以下数学定律:一条弧所对圆周角等于它所对圆心角的一半、以及直径所对的圆周角是直角,可以通过以下推导过程确定所述第二夹角angle_H。
sin(angle_H / 2) = distance(B, V) / (2R)
angle_H = 2 × arcsin(distance(B, V) / (2R))  (公式3)
其中,distance(B,V)表示根据B点和V点的坐标计算得到的两点之间的距离,R为球体模型的半径,该信息可以根据球体模型数据获取。
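在 y 轴为球体模型垂直轴线、赤道位于 y = 0 平面且 V = (0, 0, R) 的坐标约定下(该约定为示意所作的假设),投影点 B 与第二夹角 angle_H 的计算可示意如下(Python 示意代码,并非本申请实施例的组成部分):

```python
import math

def horizontal_angle(A, V, R):
    """先求顶点 A 在赤道圆周上的投影点 B,再由弦长 distance(B, V) 求
    水平方向的第二夹角:angle_H = 2 * arcsin(distance(B, V) / (2R))。"""
    ax, _, az = A
    norm = math.hypot(ax, az)                  # A 的水平正交投影点到球心的距离
    B = (R * ax / norm, 0.0, R * az / norm)    # 沿水平投影方向延伸到赤道圆周
    return 2.0 * math.asin(math.dist(B, V) / (2.0 * R))

# 示例:按水平夹角 0.5 rad、垂直夹角 0.3 rad 构造的顶点,应求得 angle_H = 0.5
theta_h, theta_v, R = 0.5, 0.3, 2.0
A = (R * math.cos(theta_v) * math.sin(theta_h),
     R * math.sin(theta_v),
     R * math.cos(theta_v) * math.cos(theta_h))
angle_h = horizontal_angle(A, (0.0, 0.0, R), R)
```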
步骤303-4、根据所述第二夹角、以及水平视场角,采用在水平方向均匀分布的方式,将该球体模型顶点对应于所述显示平面中的位置从第一位置调整至第二位置。
为了让贴附在球体模型表面的、处于显示范围内的全景视频图像,能够在水平方向均匀分布在显示平面上,可以针对处于显示范围内的每个顶点,遵循圆心角与圆弧映射长度成正比的规则,对顶点对应于显示平面中的位置在水平方向上进行调整。
为了便于理解,下面以球体模型赤道圆周上的顶点为例,对位置调整过程进行说明。请参见图6,其为将球体模型赤道圆周上的顶点均匀分布到显示平面上的示意图。赤道圆周与显示平面的交点为V点,赤道圆周上的顶点C,采用中心投影方式对应于显示平面上的投影点为C'点,水平视场角为fov。
在满足均匀分布的条件下,C点对应于显示平面中的位置为C1,C点与V点对应的圆心角记为w,将C1点与V点之间的距离记为y,将C'点与V点之间的距离记为x,将水平视场角fov对应的总显示长度记为width,那么根据圆心角与圆弧映射长度(即:圆心角对应的圆弧对应于显示平面上的长度)成正比的规则,可以得到以下推导过程:
x = R × tan(w),width = 2R × tan(fov / 2)
y / width = w / fov
y = (w / fov) × width = (2 × w × tan(fov / 2)) / (fov × tan(w)) × x  (公式4)
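公式4的计算可以用如下 Python 函数示意(示意性质,并非本申请实施例的组成部分,函数名为假设),其中 x 为中心投影距离、w 为圆心角、fov 为该方向的视场角,均以弧度表示:

```python
import math

def uniform_offset(x, w, fov):
    """公式4:y = (w / fov) * width,其中 width = 2R * tan(fov / 2)、R = x / tan(w)。
    返回均匀分布条件下顶点在该方向上的显示距离 y。"""
    width = 2.0 * (x / math.tan(w)) * math.tan(fov / 2.0)
    return (w / fov) * width

# 示例:处于视场边界(w = fov/2)的顶点,调整前后的位置重合(y = x)
y_edge = uniform_offset(1.0, math.pi / 4, math.pi / 2)
# 处于视场内部的顶点,调整后沿远离中心的方向外移(y > x)
y_inner = uniform_offset(math.tan(0.2), 0.2, math.pi / 2)
```

由 y/x = (w/fov) × 2tan(fov/2)/tan(w) 可知,对 0 < w < fov/2 恒有 y > x,与"沿远离显示平面中心的方向调整"一致;而在视场边界处 y = x,即整个视场的总显示宽度保持不变。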
具体到本实施例,本步骤可以在上述公式4的基础上,以在水平方向均匀分布为目标,将顶点A对应于所述显示平面中的位置从第一位置调整至第二位置,具体可以包括以下处理过程,下面结合图5进行说明:
1)确定中心投影点A'所在水平面与球体模型垂直轴线的交点O2。
2)确定交点O2在显示平面上的第一正交投影点A0。
3)根据中心投影点A'与第一正交投影点A0之间的距离、以及第二夹角和水平视场角fov_H,遵循圆心角与圆弧映射长度成正比的规则,确定顶点A对应于显示平面中的待确定位置与第一正交投影点A0之间的水平距离。
具体的,可以将A'A0作为x,将fov_H作为fov,用顶点A到球心O的连线与当前视线在水平方向的第二夹角angle_H作为w,代入到上述公式4中计算,得到的y值即为顶点A对应于显示平面中的待确定位置与第一正交投影点A0之间的水平距离。
4)根据所述水平距离,将顶点A对应于显示平面中的位置从第一位置沿远离显示平面中心的方向水平调整至第二位置。
具体的,可以将顶点A对应于显示平面中的位置从A'点沿远离显示平面中心的方向水平移动到A1点,其中A1点与A0点之间的距离为在上述3)中计算得到的水平距离,即:A1点与显示平面y轴的距离,根据该信息可以确定A1点在显示平面的横坐标,A1点的纵坐标与中心投影点A'相同。
通过上述处理过程,顶点A对应于显示平面中的位置从A'点调整到了A1点,从而满足了水平方向均匀分布的要求。
步骤303-5、确定该球体模型顶点到球心的连线与当前视线在垂直方向的第三夹角。
请参见图5,本步骤可以通过以下推导过程确定顶点A到球心O的连线与当前视线在垂直方向的所述第三夹角angle_V。
m = R × cos(angle)
m = l × cos(angle_H),即 l = R × cos(angle) / cos(angle_H)
cos(angle_V) = l / R = cos(angle) / cos(angle_H)
angle_V = arccos(cos(angle) / cos(angle_H))  (公式5)
其中,l为顶点A在球体模型赤道所在平面的正交投影点与球心O之间的距离,m为从顶点A向OV引垂线的垂足与球心O之间的距离。由于在步骤303-1中已经确定了顶点A到球心O的连线与当前视线之间的第一夹角angle,步骤303-3已经确定了顶点A到球心O的连线与当前视线在水平方向的第二夹角angle_H,因此本步骤可以根据上述公式5计算出所述第三夹角angle_V。
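公式5表明第三夹角可以直接由已求得的angle与angle_H计算得到,用 Python 示意如下(示意性质,并非本申请实施例的组成部分,函数名为假设):

```python
import math

def vertical_angle(angle, angle_h):
    """公式5:angle_V = arccos(cos(angle) / cos(angle_H))。
    min(1.0, ...) 用于避免浮点误差使 arccos 的自变量略大于 1。"""
    return math.acos(min(1.0, math.cos(angle) / math.cos(angle_h)))

# 示例:按 cos(angle) = cos(angle_H) * cos(angle_V) 构造第一夹角,应能还原 angle_V
angle_h, angle_v_true = 0.5, 0.3
angle = math.acos(math.cos(angle_h) * math.cos(angle_v_true))
angle_v = vertical_angle(angle, angle_h)
```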
步骤303-6、根据所述第三夹角、以及垂直视场角,采用在垂直方向均匀分布的方式,将该球体模型顶点对应于所述显示平面中的位置从第二位置调整至第三位置。
本步骤可以根据步骤303-4中描述的将球体模型赤道圆周上的顶点均匀分布到显示平面上的原理,将顶点A对应于显示平面中的位置从步骤303-4确定的第二位置调整至第三位置,从而达到在垂直方向均匀分布的目的。具体可以包括以下处理过程,下面结合图5进行说明:
1)确定中心投影点A'在球体模型赤道所在平面的第二正交投影点V'。
2)根据中心投影点A'与第二正交投影点V'之间的距离,以及第三夹角和垂直视场角fov_V,遵循圆心角与圆弧映射长度成正比的规则,确定顶点A对应于显示平面中的待确定位置与所述第二正交投影点V'之间的垂直距离。
具体的,可以将A'V'作为x,将fov_V作为fov,用顶点A到球心O的连线与当前视线在垂直方向的第三夹角angle_V作为w,代入到上述公式4中计算,得到的y值即为顶点A对应于显示平面中的待确定位置与第二正交投影点V'之间的垂直距离。
3)根据所述垂直距离,将顶点A对应于显示平面中的位置从第二位置沿远离显示平面中心的方向垂直调整至第三位置。
具体的,可以将顶点A对应于显示平面中的位置从A1点,沿远离显示平面中心的方向垂直移动到A2点,其中,A2点与V'点的垂直距离为在上述2)中计算得到的垂直距离,即:A2点与显示平面x轴的距离,根据该信息可以确定A2点在显示平面的纵坐标,而A2点的横坐标与A1点相同。
至此,通过步骤303-1至303-6确定了处于显示范围内的球体模型顶点A对应于显示平面中的位置A2。在具体实施时,可以针对处于显示范围内的每个球体模型顶点,都按照步骤303-1至步骤303-6描述的方式确定其对应于显示平面中的位置。
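步骤303-1至303-6的整体流程可以串成如下 Python 示意函数(示意性质,并非本申请实施例的代码;假设y轴为球体模型垂直轴线、V = (0, 0, R)位于赤道上,显示平面以V为原点、x轴水平、y轴垂直):

```python
import math

def vertex_display_position(A, R, fov_h, fov_v):
    """按步骤303-1至303-6,求处于显示范围内的顶点 A 对应于显示平面中的位置 A2。"""
    V = (0.0, 0.0, R)
    angle = 2.0 * math.asin(math.dist(A, V) / (2.0 * R))    # 303-1:第一夹角
    cos_angle = math.cos(angle)
    A_prime = tuple(a / cos_angle for a in A)               # 303-2:中心投影点 A'
    x_c, y_c = A_prime[0], A_prime[1]                       # A' 在显示平面中的横、纵坐标
    ax, _, az = A
    norm = math.hypot(ax, az)
    B = (R * ax / norm, 0.0, R * az / norm)                 # A 在赤道圆周上的投影点
    angle_h = 2.0 * math.asin(math.dist(B, V) / (2.0 * R))  # 303-3:第二夹角
    angle_v = math.acos(min(1.0, cos_angle / math.cos(angle_h)))  # 303-5:第三夹角

    def remap(x, w, fov):                                   # 公式4:均匀分布调整
        if w < 1e-12:
            return x
        return (w / fov) * 2.0 * (x / math.tan(w)) * math.tan(fov / 2.0)

    # 303-4 / 303-6:先水平、后垂直地把 A' 调整为均匀分布的位置 A2
    return (math.copysign(remap(abs(x_c), angle_h, fov_h), x_c),
            math.copysign(remap(abs(y_c), angle_v, fov_v), y_c))

# 示例:赤道上处于水平视场边界(45°,fov_H = 90°)的顶点,应落在显示平面 x 轴边界处
pos = vertex_display_position((math.sin(math.pi / 4), 0.0, math.cos(math.pi / 4)),
                              1.0, math.pi / 2, math.pi / 2)
```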
步骤304、针对处于显示范围内的每个球体模型顶点,根据该顶点对应于所述全景视频图像中的相应像素值对该顶点对应于显示平面中的位置进行渲染。
具体的,本步骤可以针对每个球体模型顶点,执行以下操作:根据该顶点对应的纹理坐标获取待展示的全景视频图像中的相应像素点的像素值,例如RGB像素值,然后根据该像素值对该顶点对应于显示平面中的位置进行渲染,以步骤303中的顶点A为例,可以根据基于纹理坐标从全景视频图像中获取的相应像素值对预设显示平面中处于A2位置的像素点进行渲染。
在具体实施时,对于处于显示范围内的每个球体模型顶点都采用上述方式对显示平面中相应位置的像素点进行渲染后,对于显示平面中尚未渲染的像素点,可以根据其周围已渲染的像素点通过插值计算的方式进行渲染,或者根据与周围已渲染像素点的相对位置关系从待展示的全景视频图像中获取相应像素点的像素值进行渲染。由于处于显示范围内的每个球体模型顶点对应于显示平面中的位置,在水平方向和垂直方向都是均匀分布的,因此基于上述渲染过程显示出来的图像在水平方向和垂直方向也都是均匀分布的,不会出现因为比例不一致导致的景物变形。
在具体实施时,本实施例所述的显示平面可以是显示设备的显示缓存,通过对显示平面的渲染,可以实现全景视频图像的显示;本实施例所述的显示平面也可以是与显示缓存对应的内存区域,对显示平面渲染之后,可以通过等比例缩放过程将显示平面中渲染好的内容写入显示缓存中,从而实现全景视频图像的显示。
至此,通过上述步骤301-304对本实施例提供的全景视频渲染方法的实施方式进行了详细的说明。在具体实施时,步骤303和步骤304可以由GPU执行,例如步骤301和步骤302获取的球体模型数据和纹理坐标、以及待展示的全景视频图像和当前视点数据,可以作为输入提供给GPU,GPU根据输入的信息,针对球体模型的每个顶点执行下述操作:顶点处理单元若判断出当前处理顶点处于显示范围内,则采用在水平方向和垂直方向分别均匀分布的方式、确定所述顶点对应于预设显示平面中的位置,并将该顶点信息和相应的位置信息提供给像素处理单元,像素处理单元根据每次接收到的顶点信息和相应位置信息进行渲染。GPU完成渲染操作后,渲染好的内容被输出到显示设备显示。
请参见图7,其为本实施例提供的利用GPU进行全景视频显示的架构示意图。如图所示,针对每帧待展示的全景视频图像,都利用GPU进行渲染,不仅可以实现第一方向和第二方向的均匀分布,而且因为GPU的高速图像处理能力,可以获得流畅的全景视频展示效果。
需要说明的是,本实施例给出的上述实施方式中,对于处于显示范围内的每个顶点,先确定其对应于显示平面中的中心投影点的位置,然后依次对水平方向和垂直方向进行位置调整。在其他实施方式中,也可以先对垂直方向进行调整,然后对水平方向进行调整,即:第一方向为垂直方向,第二方向为水平方向,具体的调整过程包括如下所述的1)和2)两部分操作(需要说明的是,对于仅对垂直方向进行调整的实施方式,可以不执行2)中的操作)。
1)确定中心投影点的位置,并对垂直方向进行调整。具体包括以下操作:确定当前处理的球体模型顶点到球心的连线与当前视线之间的第一夹角;根据所述第一夹角,基于中心投影方式确定该球体模型顶点在所述显示平面的中心投影点的第一位置;确定该球体模型顶点到球心的连线与当前视线在垂直方向的第二夹角;确定所述中心投影点在所述球体模型赤道所在平面的第二正交投影点;根据所述中心投影点与所述第二正交投影点之间的距离,以及所述第二夹角和垂直视场角,遵循圆心角与圆弧映射长度成正比的规则,确定该球体模型顶点对应于所述显示平面中的待确定位置与所述第二正交投影点之间的垂直距离;根据所述垂直距离,将该球体模型顶点对应于所述显示平面中的位置从第一位置沿远离显示平面中心的方向垂直调整至第二位置。
2)对水平方向进行调整。具体包括以下操作:确定该球体模型顶点到球心的连线与当前视线在水平方向的第三夹角;确定所述中心投影点所在水平面与球体模型垂直轴线的交点;确定所述交点在所述显示平面上的第一正交投影点;根据所述中心投影点与所述第一正交投影点之间的距离、以及所述第三夹角和水平视场角,遵循圆心角与圆弧映射长度成正比的规则,确定该球体模型顶点对应于所述显示平面中的待确定位置与所述第一正交投影点之间的水平距离;根据所述水平距离,将该球体模型顶点对应于所述显示平面中的位置从第二位置沿远离显示平面中心的方向水平调整至第三位置。
由此可见,本实施例通过上述文字提供的全景视频渲染方法的实施方式,由于采用了在第一方向和第二方向分别均匀分布的方式、确定处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置,并根据确定好的位置进行渲染,因此,每个顶点在显示平面上的分布,在第一方向和第二方向上都是均匀的。
请参见图8,其为本实施例提供的用Wulff net(乌尔夫网)图观察到的顶点分布的对比示意图,图中的交点代表球体模型顶点在显示平面中的位置分布状况,其中,图(a)为采用现有技术得到的顶点分布示意图,图(b)为采用本实施例提供的渲染方法得到的顶点分布示意图。不难看出,图(b)中的顶点分布在水平方向和垂直方向分别都是均匀分布的,而现有技术则明显是不均匀的。
采用本实施例提供的上述实施方式进行全景视频的渲染,由于顶点分布在第一方向和第二方向都是均匀的,即:在第一方向和第二方向上不同显示区域的景物的显示比例都是一致的,从而可以有效地改善现有技术由于显示比例不一致导致的景物变形的问题,使得全景视频的显示效果更符合人眼的习惯,给用户带来更为真实的观看体验。
请参见图9,其为本实施例提供的全景视频图像显示效果的对比示意图,图(a)和图(b)显示的是同一帧全景视频图像,其中,图(a)为现有技术的显示效果图,图(b)为采用本实施例提供的渲染方法得到的显示效果图。不难看出,图(a)中处于图像边缘处的景物发生了比较严重的拉伸变形,产生了比较明显的失真,而图(b)则展示出了比例一致的显示效果,可以给用户带来更为真实的观看体验。
本实施例提供的上述实施方式,对球体模型顶点对应于显示平面中的位置,从水平方向和垂直方向都进行了调整,在其他实施方式中,也可以仅针对水平方向进行调整,或者仅对垂直方向进行调整。以仅对水平方向进行调整为例:在确定顶点A对应于显示平面中的位置时,可以不执行本实施例提供的步骤303-5和步骤303-6,将步骤303-4确定的A1作为顶点A对应于显示平面中的位置。由于对一个方向上的显示比例进行了修复,实现了显示比例的一致性,与现有技术相比,依然具有明显的有益效果,有助于改善用户的观看体验。
综上所述,本实施例提供的全景视频渲染方法,由于处于显示范围内的每个球体模型顶点对应于显示平面中的位置至少在第一方向是均匀分布的,即:至少在第一方向上不同显示区域的景物的显示比例都是一致的,从而可以有效地改善或者解决全景视频图像由于显示比例不一致导致的景物变形的问题,使得全景视频的显示效果更符合人眼的习惯,给用户带来更为真实的观看体验。
在上述的实施例中,提供了一种全景视频渲染方法,与之相对应的,本申请还提供一种全景视频渲染装置。请参看图10,其为本申请的一种全景视频渲染装置的实施例示意图。由于装置实施例基本相似于方法实施例,所以描述得比较简单,相关之处参见方法实施例的部分说明即可。下述描述的装置实施例仅仅是示意性的。
本实施例的一种全景视频渲染装置,包括:模型数据及纹理坐标获取单元1001,用于获取用于展示全景视频的球体模型数据以及球体模型顶点对应的纹理坐标;图像及视点数据获取单元1002,用于获取待展示的全景视频图像和当前视点数据;位置确定单元1003,用于采用至少在第一方向均匀分布的方式,根据所述球体模型数据确定处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置,其中,所述显示范围是根据所述当前视点数据以及球体模型数据确定的;渲染单元1004,用于针对处于显示范围内的每个球体模型顶点,根据该顶点对应于所述全景视频图像中的相应像素值对该顶点对应于显示平面中的位置进行渲染,其中,所述相应像素值是根据该顶点对应的纹理坐标获取的。
可选的,所述位置确定单元,具体用于采用在第一方向和第二方向分别均匀分布的方式,根据所述球体模型数据确定处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置。
可选的,所述位置确定单元包括:
循环控制子单元,用于针对处于显示范围内的每个球体模型顶点,触发以下子单元工作:
第一夹角确定子单元,用于确定该球体模型顶点到球心的连线与当前视线之间的第一夹角;
第一位置确定子单元,用于根据所述第一夹角,基于中心投影方式确定该球体模型顶点在所述显示平面的中心投影点的第一位置;
第二夹角确定子单元,用于确定该球体模型顶点到球心的连线与当前视线在所述第一方向的第二夹角;
第一位置调整子单元,用于根据所述第二夹角、以及对应于第一方向的视场角,采用在第一方向均匀分布的方式,将该球体模型顶点对应于所述显示平面中的位置从第一位置调整至第二位置。
可选的,所述至少在第一方向均匀分布的方式包括:在第一方向和第二方向分别均匀分布的方式;
所述位置确定单元还包括:
第三夹角确定子单元,用于在所述第一位置调整子单元工作后,确定该球体模型顶点到球心的连线与当前视线在所述第二方向的第三夹角;
第二位置调整子单元,用于根据所述第三夹角、以及对应于第二方向的视场角,采用在第二方向均匀分布的方式,将该球体模型顶点对应于所述显示平面中的位置从所述第二位置调整至第三位置。
可选的,当所述第一方向为水平方向时,所述第一位置调整子单元包括:
交点确定子单元,用于确定所述中心投影点所在水平面与球体模型垂直轴线的交点;
第一正交投影点确定子单元,用于确定所述交点在所述显示平面上的第一正交投影点;
水平距离计算子单元,用于根据所述中心投影点与所述第一正交投影点之间的距离、以及所述第二夹角和水平视场角,遵循圆心角与圆弧映射长度成正比的规则,确定该球体模型顶点对应于所述显示平面中的待确定位置与所述第一正交投影点之间的水平距离;
第一位置水平调整执行子单元,用于根据所述水平距离,将该球体模型顶点对应于所述显示平面中的位置从第一位置沿远离显示平面中心的方向水平调整至第二位置。
可选的,当所述第一方向为垂直方向时,所述第一位置调整子单元包括:
第二正交投影点确定子单元,用于确定所述中心投影点在球体模型赤道所在平面的第二正交投影点;
垂直距离确定子单元,用于根据所述中心投影点与所述第二正交投影点之间的距离,以及所述第二夹角和垂直视场角,遵循圆心角与圆弧映射长度成正比的规则,确定该球体模型顶点对应于所述显示平面中的待确定位置与所述第二正交投影点之间的垂直距离;
第一位置垂直调整执行子单元,用于根据所述垂直距离,将该球体模型顶点对应于所述显示平面中的位置从第一位置沿远离显示平面中心的方向垂直调整至第二位置。
可选的,当所述第一方向为水平方向、第二方向为垂直方向时,所述第二位置调整子单元,包括:
第二正交投影点确定子单元,用于确定所述中心投影点在球体模型赤道所在平面的第二正交投影点;
垂直距离确定子单元,用于根据所述中心投影点与所述第二正交投影点之间的距离,以及所述第三夹角和垂直视场角,遵循圆心角与圆弧映射长度成正比的规则,确定该球体模型顶点对应于所述显示平面中的待确定位置与所述第二正交投影点之间的垂直距离;
第二位置垂直调整执行子单元,用于根据所述垂直距离,将该球体模型顶点对应于所述显示平面中的位置从第二位置沿远离显示平面中心的方向垂直调整至第三位置。
可选的,当所述第一方向为垂直方向、第二方向为水平方向时,所述第二位置调整子单元,包括:
交点确定子单元,用于确定所述中心投影点所在水平面与球体模型垂直轴线的交点;
第一正交投影点确定子单元,用于确定所述交点在所述显示平面上的第一正交投影点;
水平距离计算子单元,用于根据所述中心投影点与所述第一正交投影点之间的距离、以及所述第三夹角和水平视场角,遵循圆心角与圆弧映射长度成正比的规则,确定该球体模型顶点对应于所述显示平面中的待确定位置与所述第一正交投影点之间的水平距离;
第二位置水平调整执行子单元,用于根据所述水平距离,将该球体模型顶点对应于所述显示平面中的位置从第二位置沿远离显示平面中心的方向水平调整至第三位置。
可选的,所述装置还包括:
第一方向确定单元,用于在所述位置确定单元工作前,采用如下方式确定第一方向:若水平视场角不小于垂直视场角,则所述第一方向为水平方向,否则所述第一方向为垂直方向。
可选的,所述球体模型数据包括:各球体模型顶点的坐标。
可选的,所述位置确定单元以及所述渲染单元的功能由GPU完成。
可选的,所述装置部署于全景视频播放设备,所述全景视频播放设备包括:个人电脑、平板电脑、智能手机、或显示头盔。
此外,本申请还提供了一种电子设备;所述电子设备实施例如下:
请参考图11,其示出了本申请的一种电子设备的实施例的示意图。
所述电子设备,包括:显示器1101;处理器1102;存储器1103,用于存储指令;其中,所述处理器耦合于所述存储器,用于读取所述存储器存储的指令,并执行如下操作:
获取用于展示全景视频的球体模型数据以及球体模型顶点对应的纹理坐标;获取待展示的全景视频图像和当前视点数据;采用至少在第一方向均匀分布的方式,根据所述球体模型数据确定处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置,其中,所述显示范围是根据所述当前视点数据以及球体模型数据确定的;针对处于显示范围内的每个球体模型顶点,根据该顶点对应于所述全景视频图像中的相应像素值对该顶点对应于显示平面中的位置进行渲染,其中,所述相应像素值是根据该顶点对应的纹理坐标获取的。
可选的,所述采用至少在第一方向均匀分布的方式,根据所述球体模型数据确定处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置,包括:
采用在第一方向和第二方向分别均匀分布的方式,根据所述球体模型数据确定处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置。
可选的,所述采用至少在第一方向均匀分布的方式,根据所述球体模型数据确定处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置,包括:
针对处于显示范围内的每个球体模型顶点,利用所述球体模型数据执行如下位置调整操作:
确定该球体模型顶点到球心的连线与当前视线之间的第一夹角;
根据所述第一夹角,基于中心投影方式确定该球体模型顶点在所述显示平面的中心投影点的第一位置;
确定该球体模型顶点到球心的连线与当前视线在所述第一方向的第二夹角;
根据所述第二夹角、以及对应于第一方向的视场角,采用在第一方向均匀分布的方式,将该球体模型顶点对应于所述显示平面中的位置从第一位置调整至第二位置。
可选的,所述至少在第一方向均匀分布的方式包括:在第一方向和第二方向分别均匀分布的方式;
在针对处于显示范围内的每个球体模型顶点执行的位置调整操作中,在将该球体模型顶点对应于所述显示平面中的位置从第一位置调整至第二位置之后,包括:
确定该球体模型顶点到球心的连线与当前视线在所述第二方向的第三夹角;
根据所述第三夹角、以及对应于第二方向的视场角,采用在第二方向均匀分布的方式,将该球体模型顶点对应于所述显示平面中的位置从所述第二位置调整至第三位置。
可选的,在所述采用至少在第一方向均匀分布的方式,根据所述球体模型数据确定处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置之前,还包括:采用如下方式确定第一方向:若水平视场角不小于垂直视场角,则所述第一方向为水平方向,否则所述第一方向为垂直方向。
本申请虽然以较佳实施例公开如上,但其并不是用来限定本申请,任何本领域技术人员在不脱离本申请的精神和范围内,都可以做出可能的变动和修改,因此本申请的保护范围应当以本申请权利要求所界定的范围为准。
在一个典型的配置中,计算设备包括一个或多个处理器(CPU)、输入/输出接口、网络接口和内存。
内存可能包括计算机可读介质中的非永久性存储器,随机存取存储器(RAM)和/或非易失性内存等形式,如只读存储器(ROM)或闪存(flash RAM)。内存是计算机可读介质的示例。
计算机可读介质包括永久性和非永久性、可移动和非可移动媒体,可以由任何方法或技术来实现信息存储。信息可以是计算机可读指令、数据结构、程序的模块或其他数据。计算机的存储介质的例子包括,但不限于相变内存(PRAM)、静态随机存取存储器(SRAM)、动态随机存取存储器(DRAM)、其他类型的随机存取存储器(RAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、快闪记忆体或其他内存技术、只读光盘只读存储器(CD-ROM)、数字多功能光盘(DVD)或其他光学存储、磁盒式磁带,磁带磁盘存储或其他磁性存储设备或任何其他非传输介质,可用于存储可以被计算设备访问的信息。按照本文中的界定,计算机可读介质不包括非暂存电脑可读媒体(transitory media),如调制的数据信号和载波。
本领域技术人员应明白,本申请的实施例可提供为方法、系统或计算机程序产品。因此,本申请可采用完全硬件实施例、完全软件实施例或结合软件和硬件方面的实施例的形式。而且,本申请可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。

Claims (22)

  1. 一种全景视频渲染方法,其特征在于,包括:
    获取用于展示全景视频的球体模型数据以及球体模型顶点对应的纹理坐标;
    获取待展示的全景视频图像和当前视点数据;
    采用至少在第一方向均匀分布的方式,根据所述球体模型数据确定处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置,其中,所述显示范围是根据所述当前视点数据以及球体模型数据确定的;
    针对处于显示范围内的每个球体模型顶点,根据该顶点对应于所述全景视频图像中的相应像素值对该顶点对应于显示平面中的位置进行渲染,其中,所述相应像素值是根据该顶点对应的纹理坐标获取的。
  2. 根据权利要求1所述的方法,其特征在于,所述采用至少在第一方向均匀分布的方式,根据所述球体模型数据确定处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置,包括:
    采用在第一方向和第二方向分别均匀分布的方式,根据所述球体模型数据确定处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置。
  3. 根据权利要求1所述的方法,其特征在于,所述采用至少在第一方向均匀分布的方式,根据所述球体模型数据确定处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置,包括:
    针对处于显示范围内的每个球体模型顶点,利用所述球体模型数据执行如下位置调整操作:
    确定该球体模型顶点到球心的连线与当前视线之间的第一夹角;
    根据所述第一夹角,基于中心投影方式确定该球体模型顶点在所述显示平面的中心投影点的第一位置;
    确定该球体模型顶点到球心的连线与当前视线在所述第一方向的第二夹角;
    根据所述第二夹角、以及对应于第一方向的视场角,采用在第一方向均匀分布的方式,将该球体模型顶点对应于所述显示平面中的位置从第一位置调整至第二位置。
  4. 根据权利要求3所述的方法,其特征在于,所述至少在第一方向均匀分布的方式包括:在第一方向和第二方向分别均匀分布的方式;
    在针对处于显示范围内的每个球体模型顶点执行的位置调整操作中,在将该球体模型顶点对应于所述显示平面中的位置从第一位置调整至第二位置之后,包括:
    确定该球体模型顶点到球心的连线与当前视线在所述第二方向的第三夹角;
    根据所述第三夹角、以及对应于第二方向的视场角,采用在第二方向均匀分布的方式,将该球体模型顶点对应于所述显示平面中的位置从所述第二位置调整至第三位置。
  5. 根据权利要求3所述的方法,其特征在于,当所述第一方向为水平方向时,所述根据所述第二夹角、以及对应于第一方向的视场角,采用在第一方向均匀分布的方式,将该球体模型顶点对应于所述显示平面中的位置从第一位置调整至第二位置,包括:
    确定所述中心投影点所在水平面与球体模型垂直轴线的交点;
    确定所述交点在所述显示平面上的第一正交投影点;
    根据所述中心投影点与所述第一正交投影点之间的距离、以及所述第二夹角和水平视场角,遵循圆心角与圆弧映射长度成正比的规则,确定该球体模型顶点对应于所述显示平面中的待确定位置与所述第一正交投影点之间的水平距离;
    根据所述水平距离,将该球体模型顶点对应于所述显示平面中的位置从第一位置沿远离显示平面中心的方向水平调整至第二位置。
  6. 根据权利要求3所述的方法,其特征在于,当所述第一方向为垂直方向时,所述根据所述第二夹角、以及对应于第一方向的视场角,采用在第一方向均匀分布的方式,将该球体模型顶点对应于所述显示平面中的位置从第一位置调整至第二位置,包括:
    确定所述中心投影点在球体模型赤道所在平面的第二正交投影点;
    根据所述中心投影点与所述第二正交投影点之间的距离,以及所述第二夹角和垂直视场角,遵循圆心角与圆弧映射长度成正比的规则,确定该球体模型顶点对应于所述显示平面中的待确定位置与所述第二正交投影点之间的垂直距离;
    根据所述垂直距离,将该球体模型顶点对应于所述显示平面中的位置从第一位置沿远离显示平面中心的方向垂直调整至第二位置。
  7. 根据权利要求4所述的方法,其特征在于,当所述第一方向为水平方向、第二方向为垂直方向时,所述根据所述第三夹角、以及对应于第二方向的视场角,采用在第二方向均匀分布的方式,将该球体模型顶点对应于所述显示平面中的位置从所述第二位置调整至第三位置,包括:
    确定所述中心投影点在球体模型赤道所在平面的第二正交投影点;
    根据所述中心投影点与所述第二正交投影点之间的距离,以及所述第三夹角和垂直视场角,遵循圆心角与圆弧映射长度成正比的规则,确定该球体模型顶点对应于所述显示平面中的待确定位置与所述第二正交投影点之间的垂直距离;
    根据所述垂直距离,将该球体模型顶点对应于所述显示平面中的位置从第二位置沿远离显示平面中心的方向垂直调整至第三位置。
  8. 根据权利要求4所述的方法,其特征在于,当所述第一方向为垂直方向、第二方向为水平方向时,所述根据所述第三夹角、以及对应于第二方向的视场角,采用在第二方向均匀分布的方式,将该球体模型顶点对应于所述显示平面中的位置从所述第二位置调整至第三位置,包括:
    确定所述中心投影点所在水平面与球体模型垂直轴线的交点;
    确定所述交点在所述显示平面上的第一正交投影点;
    根据所述中心投影点与所述第一正交投影点之间的距离、以及所述第三夹角和水平视场角,遵循圆心角与圆弧映射长度成正比的规则,确定该球体模型顶点对应于所述显示平面中的待确定位置与所述第一正交投影点之间的水平距离;
    根据所述水平距离,将该球体模型顶点对应于所述显示平面中的位置从第二位置沿远离显示平面中心的方向水平调整至第三位置。
  9. 根据权利要求1所述的方法,其特征在于,在所述采用至少在第一方向均匀分布的方式,根据所述球体模型数据确定处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置之前,还包括:采用如下方式确定第一方向:若水平视场角不小于垂直视场角,则所述第一方向为水平方向,否则所述第一方向为垂直方向。
  10. 根据权利要求1所述的方法,其特征在于,所述球体模型数据包括:各球体模型顶点的坐标。
  11. 根据权利要求1所述的方法,其特征在于,所述采用至少在第一方向均匀分布的方式,根据所述球体模型数据确定处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置,以及针对处于显示范围内的每个球体模型顶点,根据该顶点对应于所述全景视频图像中的相应像素值对该顶点对应于显示平面中的位置进行渲染的步骤,由GPU执行。
  12. 根据权利要求1-11任一项所述的方法,其特征在于,所述方法在全景视频播放设备中实施,所述全景视频播放设备包括:个人电脑、平板电脑、智能手机、或显示头盔。
  13. 一种全景视频渲染装置,其特征在于,包括:
    模型数据及纹理坐标获取单元,用于获取用于展示全景视频的球体模型数据以及球体模型顶点对应的纹理坐标;
    图像及视点数据获取单元,用于获取待展示的全景视频图像和当前视点数据;
    位置确定单元,用于采用至少在第一方向均匀分布的方式,根据所述球体模型数据确定处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置,其中,所述显示范围是根据所述当前视点数据以及球体模型数据确定的;
    渲染单元,用于针对处于显示范围内的每个球体模型顶点,根据该顶点对应于所述全景视频图像中的相应像素值对该顶点对应于显示平面中的位置进行渲染,其中,所述相应像素值是根据该顶点对应的纹理坐标获取的。
  14. 根据权利要求13所述的装置,其特征在于,所述位置确定单元,具体用于采用在第一方向和第二方向分别均匀分布的方式,根据所述球体模型数据确定处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置。
  15. 根据权利要求13所述的装置,其特征在于,所述位置确定单元包括:
    循环控制子单元,用于针对处于显示范围内的每个球体模型顶点,触发以下子单元工作:
    第一夹角确定子单元,用于确定该球体模型顶点到球心的连线与当前视线之间的第一夹角;
    第一位置确定子单元,用于根据所述第一夹角,基于中心投影方式确定该球体模型顶点在所述显示平面的中心投影点的第一位置;
    第二夹角确定子单元,用于确定该球体模型顶点到球心的连线与当前视线在所述第一方向的第二夹角;
    第一位置调整子单元,用于根据所述第二夹角、以及对应于第一方向的视场角,采用在第一方向均匀分布的方式,将该球体模型顶点对应于所述显示平面中的位置从第一位置调整至第二位置。
  16. 根据权利要求15所述的装置,其特征在于,所述至少在第一方向均匀分布的方式包括:在第一方向和第二方向分别均匀分布的方式;
    所述位置确定单元还包括:
    第三夹角确定子单元,用于在所述第一位置调整子单元工作后,确定该球体模型顶点到球心的连线与当前视线在所述第二方向的第三夹角;
    第二位置调整子单元,用于根据所述第三夹角、以及对应于第二方向的视场角,采用在第二方向均匀分布的方式,将该球体模型顶点对应于所述显示平面中的位置从所述第二位置调整至第三位置。
  17. 根据权利要求13所述的装置,其特征在于,还包括:
    第一方向确定单元,用于在所述位置确定单元工作前,采用如下方式确定第一方向:若水平视场角不小于垂直视场角,则所述第一方向为水平方向,否则所述第一方向为垂直方向。
  18. 一种电子设备,其特征在于,包括:
    显示器;
    处理器;
    存储器,用于存储指令;
    其中,所述处理器耦合于所述存储器,用于读取所述存储器存储的指令,并执行如下操作:获取用于展示全景视频的球体模型数据以及球体模型顶点对应的纹理坐标;获取待展示的全景视频图像和当前视点数据;采用至少在第一方向均匀分布的方式,根据所述球体模型数据确定处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置,其中,所述显示范围是根据所述当前视点数据以及球体模型数据确定的;针对处于显示范围内的每个球体模型顶点,根据该顶点对应于所述全景视频图像中的相应像素值对该顶点对应于显示平面中的位置进行渲染,其中,所述相应像素值是根据该顶点对应的纹理坐标获取的。
  19. 根据权利要求18所述的电子设备,其特征在于,所述采用至少在第一方向均匀分布的方式,根据所述球体模型数据确定处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置,包括:
    采用在第一方向和第二方向分别均匀分布的方式,根据所述球体模型数据确定处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置。
  20. 根据权利要求18所述的电子设备,其特征在于,所述采用至少在第一方向均匀分布的方式,根据所述球体模型数据确定处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置,包括:
    针对处于显示范围内的每个球体模型顶点,利用所述球体模型数据执行如下位置调整操作:
    确定该球体模型顶点到球心的连线与当前视线之间的第一夹角;
    根据所述第一夹角,基于中心投影方式确定该球体模型顶点在所述显示平面的中心投影点的第一位置;
    确定该球体模型顶点到球心的连线与当前视线在所述第一方向的第二夹角;
    根据所述第二夹角、以及对应于第一方向的视场角,采用在第一方向均匀分布的方式,将该球体模型顶点对应于所述显示平面中的位置从第一位置调整至第二位置。
  21. 根据权利要求20所述的电子设备,其特征在于,所述至少在第一方向均匀分布的方式包括:在第一方向和第二方向分别均匀分布的方式;
    在针对处于显示范围内的每个球体模型顶点执行的位置调整操作中,在将该球体模型顶点对应于所述显示平面中的位置从第一位置调整至第二位置之后,包括:
    确定该球体模型顶点到球心的连线与当前视线在所述第二方向的第三夹角;
    根据所述第三夹角、以及对应于第二方向的视场角,采用在第二方向均匀分布的方式,将该球体模型顶点对应于所述显示平面中的位置从所述第二位置调整至第三位置。
  22. 根据权利要求18所述的电子设备,其特征在于,在所述采用至少在第一方向均匀分布的方式,根据所述球体模型数据确定处于显示范围内的每个球体模型顶点对应于预设显示平面中的位置之前,还包括:采用如下方式确定第一方向:若水平视场角不小于垂直视场角,则所述第一方向为水平方向,否则所述第一方向为垂直方向。
PCT/CN2017/118256 2017-01-05 2017-12-25 全景视频渲染方法、装置及电子设备 WO2018126922A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710007618.4 2017-01-05
CN201710007618.4A CN108282694B (zh) 2017-01-05 2017-01-05 全景视频渲染方法、装置及电子设备

Publications (1)

Publication Number Publication Date
WO2018126922A1 true WO2018126922A1 (zh) 2018-07-12

Family

ID=62789089

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/118256 WO2018126922A1 (zh) 2017-01-05 2017-12-25 全景视频渲染方法、装置及电子设备

Country Status (2)

Country Link
CN (1) CN108282694B (zh)
WO (1) WO2018126922A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112437287A (zh) * 2020-11-23 2021-03-02 成都易瞳科技有限公司 一种全景图像扫描拼接方法
CN113115106A (zh) * 2021-03-31 2021-07-13 影石创新科技股份有限公司 全景视频的自动剪辑方法、装置、终端及存储介质
CN113286138A (zh) * 2021-05-17 2021-08-20 聚好看科技股份有限公司 一种全景视频显示方法及显示设备

Families Citing this family (5)

Publication number Priority date Publication date Assignee Title
CN110354500A (zh) * 2019-07-15 2019-10-22 网易(杭州)网络有限公司 特效处理方法、装置、设备和存储介质
CN111212293A (zh) * 2020-01-13 2020-05-29 聚好看科技股份有限公司 一种图像处理方法及显示设备
CN111540325B (zh) * 2020-05-20 2021-12-03 Tcl华星光电技术有限公司 图像增强方法和图像增强装置
CN112367479B (zh) * 2020-10-14 2022-11-11 聚好看科技股份有限公司 一种全景视频图像显示方法及显示设备
CN112672131B (zh) * 2020-12-07 2024-02-06 聚好看科技股份有限公司 一种全景视频图像显示方法及显示设备

Citations (5)

Publication number Priority date Publication date Assignee Title
CN101938605A (zh) * 2009-06-30 2011-01-05 爱国者全景(北京)网络科技发展有限公司 生成全景视频的方法
US20130057542A1 (en) * 2011-09-07 2013-03-07 Ricoh Company, Ltd. Image processing apparatus, image processing method, storage medium, and image processing system
CN103905761A (zh) * 2012-12-26 2014-07-02 株式会社理光 图像处理系统和图像处理方法
US20150264259A1 (en) * 2014-03-17 2015-09-17 Sony Computer Entertainment Europe Limited Image processing
CN106131540A (zh) * 2016-07-29 2016-11-16 暴风集团股份有限公司 基于d3d播放全景视频的方法及系统

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US5912670A (en) * 1996-08-05 1999-06-15 International Business Machines Corporation Method and apparatus for overlaying a bit map image on an environment map
CN101938599A (zh) * 2009-06-30 2011-01-05 爱国者全景(北京)网络科技发展有限公司 生成互动的动态全景影像的方法
CN105678693B (zh) * 2016-01-25 2019-05-14 成都易瞳科技有限公司 全景视频浏览播放方法

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
CN101938605A (zh) * 2009-06-30 2011-01-05 爱国者全景(北京)网络科技发展有限公司 生成全景视频的方法
US20130057542A1 (en) * 2011-09-07 2013-03-07 Ricoh Company, Ltd. Image processing apparatus, image processing method, storage medium, and image processing system
CN103905761A (zh) * 2012-12-26 2014-07-02 株式会社理光 图像处理系统和图像处理方法
US20150264259A1 (en) * 2014-03-17 2015-09-17 Sony Computer Entertainment Europe Limited Image processing
CN106131540A (zh) * 2016-07-29 2016-11-16 暴风集团股份有限公司 基于d3d播放全景视频的方法及系统

Cited By (4)

Publication number Priority date Publication date Assignee Title
CN112437287A (zh) * 2020-11-23 2021-03-02 成都易瞳科技有限公司 一种全景图像扫描拼接方法
CN113115106A (zh) * 2021-03-31 2021-07-13 影石创新科技股份有限公司 全景视频的自动剪辑方法、装置、终端及存储介质
CN113115106B (zh) * 2021-03-31 2023-05-05 影石创新科技股份有限公司 全景视频的自动剪辑方法、装置、终端及存储介质
CN113286138A (zh) * 2021-05-17 2021-08-20 聚好看科技股份有限公司 一种全景视频显示方法及显示设备

Also Published As

Publication number Publication date
CN108282694A (zh) 2018-07-13
CN108282694B (zh) 2020-08-18

Similar Documents

Publication Publication Date Title
WO2018126922A1 (zh) 全景视频渲染方法、装置及电子设备
US10692274B2 (en) Image processing apparatus and method
CN113382168B (zh) 用于存储成像数据的重叠区以产生优化拼接图像的设备及方法
US10643300B2 (en) Image display method, custom method of shaped cambered curtain, and head-mounted display device
US11294535B2 (en) Virtual reality VR interface generation method and apparatus
US11282264B2 (en) Virtual reality content display method and apparatus
TWI637355B (zh) 紋理貼圖之壓縮方法及其相關圖像資料處理系統與產生360度全景視頻之方法
US10915993B2 (en) Display apparatus and image processing method thereof
WO2019033903A1 (zh) 虚拟现实的图形渲染方法和装置
CN109997167B (zh) 用于球面图像内容的定向图像拼接
WO2018077071A1 (zh) 一种全景图像的生成方法及装置
WO2020248900A1 (zh) 全景视频的处理方法、装置及存储介质
US10397481B2 (en) Stabilization and rolling shutter correction for omnidirectional image content
US11379952B1 (en) Foveated image capture for power efficient video see-through
WO2020119822A1 (zh) 虚拟现实的显示方法及装置、设备、计算机存储介质
CN108765582B (zh) 一种全景图片显示方法及设备
WO2022141781A1 (zh) 一种播放全景视频的方法、系统、存储介质及播放设备
CN116547718A (zh) 用户界面
WO2019042028A1 (zh) 全视向的球体光场渲染方法
WO2022116194A1 (zh) 一种全景呈现方法及其装置
US20180295293A1 (en) Method and apparatus for generating a panoramic image having one or more spatially altered portions
CN109697747B (zh) 矩形翻转动画生成方法及装置
US11240564B2 (en) Method for playing panoramic picture and apparatus for playing panoramic picture
EP3706410B1 (en) Image processing device, image processing method, program, and projection system
CN114862657A (zh) 一种双显卡渲染方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17889700

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17889700

Country of ref document: EP

Kind code of ref document: A1