WO2017128887A1 - Corrected 3D display method, system and device for panoramic images - Google Patents

Corrected 3D display method, system and device for panoramic images

Info

Publication number
WO2017128887A1
WO2017128887A1 (application PCT/CN2016/110631, CN2016110631W)
Authority
WO
WIPO (PCT)
Prior art keywords
model
image
texture
panoramic
module
Prior art date
Application number
PCT/CN2016/110631
Other languages
English (en)
French (fr)
Inventor
范治江
李伟
Original Assignee
范治江
李伟
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201610049506.0A
Priority claimed from CN201610173465.6A
Application filed by 范治江, 李伟
Publication of WO2017128887A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping

Definitions

  • the present invention relates to panoramic display technology, and more particularly to a corrected 3D panoramic display method, system and device for panoramic images taken by a fisheye lens.
  • stitched panorama refers to stitching pictures taken by a plurality of lenses placed at specific angles into one panoramic view;
  • usually, two or more wide-angle lenses of 90° to 150° are used;
  • the fisheye panorama refers to a panorama taken with a single fisheye lens;
  • the horizontal and vertical fields of view of the lens usually equal or exceed 360°×180°.
  • the defects of stitched panoramas include: (1) inconsistent color and brightness across the multiple sensors, resulting in uneven brightness of the panoramic image; (2) lack of time synchronization between the sensors, causing tearing and crossing artifacts in video images; (3) insufficient stitching accuracy, leaving the stitched regions unclear and with ghost images; (4) blind zones in the areas close to the lenses.
  • the fisheye panorama has only one sensor, avoiding all the drawbacks of the stitched panorama described above.
  • in the prior art, the steps of 3D panoramic display include: (1) calibrating the panoramic image; (2) establishing a 3D model; (3) establishing a mapping relationship between the model vertices and the panoramic image coordinates; (4) 3D rendering; (5) viewing the 3D scene from different perspectives through interaction.
  • further, the 3D scene can also be instantly shared on various networks.
  • the existing 3D display models offer only a few common geometries: cube, sphere, hemisphere, cylinder and plane;
  • the purpose is to present the captured or stitched panoramic image through the 3D model according to the visual habits of the human eye, so that the 3D scene appears undistorted and immersive, with a sense of orientation and no blind spots.
  • the user can only choose from existing models without the possibility of changing the model.
  • when establishing the mapping relationship between model vertices and panoramic image coordinates, existing methods depend entirely on the optical imaging model of the lens, such as the existing Equisolid angle model, Equidistant model, etc.;
  • the purpose is to minimize distortion when reproducing the 3D scene. Therefore, the mapping relationship between the model vertices and the panoramic image coordinates is usually an increasing relationship in which the imaging radius grows as the angle between the ray and the optical axis increases; see the relationship between r and θ in FIG. 1.
  • the existing panoramic display technology generally only corrects a severely distorted panoramic image into a two-dimensional image close to the real scene; this method loses part of the panoramic image information, so that the field of view becomes smaller and the result is reduced to the effect of an ordinary wide-angle lens.
  • the problem to be solved by the present invention is to propose a corrected 3D display method, system and device for panoramic images.
  • the calculation is simple, fast and real-time, and the image and/or video is smooth, so that the panoramic display can be more beautiful, cooler, more artistic and more individual.
  • it overcomes the fixed mindset of existing 3D panoramic display models, their small vertical field of view, unsatisfactory correction effect and loss of information, as well as the limitation of only 2D planar display effects.
  • the invention provides a corrected 3D display method for panoramic images, comprising the following steps: computing the original fisheye image to obtain the calibration parameters of the fisheye image; establishing a 3D model; establishing a texture mapping relationship between the vertices of the 3D model and the original fisheye image; and binding the original fisheye image according to the texture mapping relationship to obtain a 3D panoramic display image.
  • the invention also provides a method for displaying corrected 3D special effects of panoramic images, comprising the following step: when creating the 3D model or establishing the mapping relationship between model vertices and fisheye image coordinates, a function system with adjustable parameters is adopted and the parameter values are dynamically adjusted.
  • the present invention still further provides systems and storage media corresponding to each of the two methods described above.
  • the corrected 3D display method, system and device of the present invention restore the panoramic image to a 3D scene using a model; the calculation is simple, fast and real-time, and the image and/or video is smooth; they not only enrich the 3D panoramic display models, expanding the field of view to more than 360°×180°, without distortion and with a sense of immersion, making the panoramic display more beautiful, cooler and more artistic and helping users express their individuality;
  • the process of creating a model is visual, interactive, fun and creative, and enhances user engagement.
  • the GPU can be used directly for 3D rendering operations, so that an undistorted stereoscopic scene is seen as if one were there in person.
  • Figure 1 is a schematic diagram of a prior spherical model.
  • FIG. 2 is a flow chart of a method for correcting 3D display of a panoramic image according to an embodiment of the present invention.
  • Figure 3 is a schematic view of the fisheye lens.
  • FIG. 4 is a schematic diagram of calibration parameters of a panoramic image according to the present invention.
  • 5-1 is a schematic diagram of a hemispherical model (lens horizontally upward) according to an embodiment of the present invention.
  • FIG. 5-2 is a schematic diagram of the mapping between the hemisphere model (lens horizontally upward) and the panoramic image according to an embodiment of the present invention.
  • FIG. 5-3 is a panoramic image (lens horizontally upward) adopted in an embodiment of the present invention.
  • FIG. 5-4 is the display image obtained by rendering the hemisphere model with OpenGL/D3D for FIG. 5-3, at an external viewpoint, according to an embodiment of the present invention.
  • FIG. 5-5 is the display image for FIG. 5-3 rendered with OpenGL/D3D, internal viewpoint, horizontal line-of-sight direction (1).
  • FIG. 5-6 is the display image for FIG. 5-3 rendered with OpenGL/D3D, internal viewpoint, horizontal line-of-sight direction (2).
  • FIG. 5-7 is the display image for FIG. 5-3 rendered with OpenGL/D3D, internal viewpoint, vertical line-of-sight direction.
  • FIG. 6-1 is a schematic diagram of the hemisphere model (lens horizontally downward) according to an embodiment of the present invention.
  • FIG. 6-2 is a schematic diagram of the mapping between the hemisphere model (lens horizontally downward) and the panoramic image.
  • FIG. 6-3 is an original panoramic image (lens horizontally downward) adopted in an embodiment of the present invention.
  • FIG. 6-4 is the display image obtained by rendering the hemisphere model of FIG. 6-1 with OpenGL/D3D, at an external viewpoint.
  • FIG. 6-5 is the display image for FIG. 6-1, internal viewpoint, horizontal line-of-sight direction (1).
  • FIG. 6-6 is the display image for FIG. 6-1, internal viewpoint, horizontal line-of-sight direction (2).
  • FIG. 6-7 is the display image for FIG. 6-1, internal viewpoint, vertical line-of-sight direction.
  • 7-1 is a schematic diagram of a hemispherical model (lens forward) according to an embodiment of the present invention.
  • FIG. 7-2 is a schematic diagram of mapping of a hemispherical model (lens forward) and a panoramic image according to an embodiment of the present invention.
  • 7-3 is an original image (lens forward direction) employed in an embodiment of the present invention.
  • FIG. 7-4 is the display image obtained by rendering the hemisphere model of FIG. 7-1 with OpenGL/D3D, at an external viewpoint, according to an embodiment of the present invention.
  • FIG. 7-5 are diagrams showing the display image of the inner view point and the horizontal line of sight direction (front) using the OpenGL/D3D rendering hemisphere model in FIG. 7-1 according to an embodiment of the present invention.
  • FIG. 7-6 are diagrams showing the display image of the inner view point and the horizontal line of sight direction (left side) using the OpenGL/D3D rendering hemisphere model in FIG. 7-1 according to an embodiment of the present invention.
  • FIG. 7-7 are diagrams showing the display image of the inner view point and the horizontal line of sight direction (right side) using the OpenGL/D3D rendering hemisphere model in FIG. 7-1 according to an embodiment of the present invention.
  • FIG. 7-8 are diagrams showing the display image of the inner view point and the vertical line of sight direction (upward) using the OpenGL/D3D rendering hemisphere model in FIG. 7-1 according to an embodiment of the present invention.
  • FIG. 7-9 are diagrams showing the display image of the inner view point and the vertical line of sight direction (downward) using the OpenGL/D3D rendering hemisphere model of FIG. 7-3 according to an embodiment of the present invention.
  • Figure 8 is a schematic diagram of a texture mapping angle transformation function.
  • FIGS. 9(a), (b) and (c) are examples of changing the model by changing the scale coefficients.
  • FIGS. 9(d), (e), (f) and (g) are various asteroid views.
  • FIGS. 10(a), (b) and (c) are examples of changing the model by changing the modeling coefficients.
  • FIGS. 11(a), (b), (c) and (d) are examples of further models, including asymmetric models.
  • FIGS. 12(a)-(f) are panoramic images taken by a single-lens panoramic camera, used as texture images in FIGS. 9, 10 and 11 respectively.
  • FIG. 13 is a schematic structural diagram of a corrected 3D display system for a panoramic image according to an embodiment of the present invention.
  • FIG. 14 is a schematic structural diagram of a corrected 3D special effect display system for capturing images of a fisheye lens according to an embodiment of the present invention.
  • the corrected 3D display method and system for panoramic images of the embodiment of the present invention are described taking as an example a fisheye lens with horizontal and vertical fields of view of 360°×230°, whose captured panoramic image is restored to a 3D scene with a hemisphere model.
  • the embodiments may also be applied to, but are not limited to, a spherical model, a cylindrical model, an asteroid model, a 360- or 180-degree cylindrical expansion model, a longitude-corrected plane model, a projection-corrected plane model, and the like;
  • the method and system of the embodiments of the present invention are applicable not only to a 360°×230° fisheye lens; images taken by other fisheye lenses, such as those produced by Canon or Kodak, including but not limited to lenses with fields of view of 360°×180° or more, are equally applicable.
  • the embodiment of the present invention is also applicable to the corrected 3D display of panoramic video taken by a fisheye lens.
  • the corrected 3D display method and system further adopt a function system with adjustable parameters, dynamically adjusting the parameter values through interaction to create a continuously variable 3D model; the texture mapping technology of 3D rendering maps the panoramic image onto the built 3D model to generate a 3D panoramic model; through interactive operation, the generated 3D panoramic model is viewed from different perspectives; combined with HTML5, WebGL and other technologies, the 3D panoramic image is instantly shared to the network.
  • Embodiment 1:
  • a specific implementation manner of the corrected 3D display method of the panoramic image according to the embodiment of the present invention is as follows:
  • Step S100: computing the original image to obtain the calibration parameters of the panoramic image.
  • because sensors and image transmission systems differ, the size and aspect ratio of the obtained fisheye image, i.e. the original image, vary, but the effective information lies within a circular area in the middle of the original image, as shown in FIG. 3.
  • the calibration parameters of the original image include, but are not limited to, the center coordinates (x0, y0) and the radius R0, as shown in the calibration parameter diagram of FIG. 4.
  • the computation on the original image may be implemented with a statistical algorithm, but a statistical algorithm produces large deviations when there are many black pixels inside the scene area of the original image.
  • preferably, the original image is computed with a scan-line approximation algorithm.
  • the scan-line approximation algorithm scans the original image line by line to obtain the central circular contour, and computes the center coordinates (x0, y0) and the radius R0.
  • the scan-line approximation algorithm is unaffected by black pixels inside the original image and can overcome the deficiency of the statistical algorithm.
  • Step S200: constructing a hemisphere of radius R for the calibrated panoramic image to establish the 3D hemisphere model;
  • a hemisphere of radius R is constructed in the world coordinate system so that its vertical field of view coincides with the effective field of view of the lens.
  • FIG. 5-1 takes a 360°×230° fisheye lens pointing horizontally upward as an example.
  • the vertical field of view of the hemisphere model corresponding to this lens is also 230°; the angle with the y axis is half the vertical field of view, i.e. 115°, which is 25° more than the vertical angle of a 90° hemisphere model.
  • the coordinate system in FIG. 5-1 is defined consistently with the world coordinate system of the OpenGL system (a Cartesian right-handed coordinate system), which facilitates 3D rendering with OpenGL.
  • if D3D is used for rendering, the world coordinate system of the model is preferably the one adopted by D3D, i.e. a Cartesian left-handed coordinate system.
  • the embodiment of the present invention further provides an interactive 3D model, so that the corrected 3D display can be interacted with by the user.
  • Step S300: establishing the texture mapping relationship between the vertices of the 3D hemisphere model and the panoramic image.
  • the step S300 includes the following steps:
  • Step S310: mapping the panoramic image as a texture image onto the 3D hemisphere model;
  • the panoramic image is mapped as a texture image onto the established 3D hemisphere model; an accurate mapping relationship ensures that the restored 3D scene is undistorted.
  • Figure 5-2 is a schematic diagram of the mapping between the hemisphere model and the panoramic image.
  • θ denotes the angles between the model point coordinates (x, y, z) and the x and y axes, R is the model radius, and r is the distance from the image mapping point to the image center.
  • trigonometric mapping models may be used, e.g. Orthographic, Equisolid angle and Stereographic, with mapping functions r = f·sin(θ), r = 2f·sin(θ/2) and r = 2f·tan(θ/2) respectively, where f is the focal length of the fisheye lens; the disadvantage of these models is that the distortion increases as θ increases, and the effect is unsatisfactory.
  • preferably, the polynomial r = k0 + k1·θ + k2·θ² + … + kn·θⁿ may be used instead, where k0 … kn are constant coefficients and n is a positive integer; it overcomes the growing distortion.
  • Step S320: the texture coordinates (u, v) are calculated by the following formulas (1) and (2), where (x0, y0) are the center coordinates of the panoramic image and W, H are the image width and height: u = (r·cos(θ) + x0)/W (1); v = (r·sin(θ) + y0)/H (2).
  • Step S400: binding the panoramic image according to the texture mapping relationship, performing 3D rendering, and obtaining the rendered 3D panoramic display image.
  • the embodiment of the present invention utilizes a 3D rendering technology, such as OpenGL (Open Graphics Library), D3D (Direct3D) or another 3D rendering technology, to paste the panoramic image as a texture image onto the built 3D hemisphere model according to the set texture mapping relationship; the undistorted stereoscopic scene is finally obtained by rendering the 3D hemisphere model.
  • the step S400 includes the following steps:
  • Step S410: dividing the 3D hemisphere model into a grid by latitude and longitude, taking the grid intersections as OpenGL/D3D vertices (Vertex), and re-describing each vertex as the five-dimensional vector (x, y, z, u, v) formed by the texture coordinates (u, v) of the panoramic image obtained in step S300 together with the three-dimensional world coordinates (x, y, z) of the grid-intersection vertex; see FIG. 5-2. The set of all vertices constitutes the hemisphere geometry renderable by OpenGL/D3D.
  • Step S420: using OpenGL/D3D texture mapping technology, binding the panoramic image as a texture image, setting the matrices of the world, view and projection transformations, and calling the OpenGL/D3D drawing functions to draw the vertex set, obtaining different rendering effects.
  • using OpenGL/D3D texture mapping technology, a fisheye photo of the panoramic image or one frame of a video is bound as the texture image; then the transformation matrix of the world, view or projection transformation is set and the OpenGL/D3D drawing functions are called to draw the vertex set, and different rendering effects can be seen.
  • Figure 5-4 is a rendering effect of Figure 5-3 with the viewpoint being outside the hemisphere model and the line of sight facing the hemisphere;
  • Figure 5-5, Figure 5-6, and Figure 5-7 show the rendering effect of Figure 5-3 in which the viewpoint is in the hemisphere model and the line of sight is oriented in different directions.
  • since the texture coordinates act directly on the panoramic image, with no intermediate conversion, computation is fast; the present invention excels both in real-time panoramic video streaming and in panoramic video file playback.
  • the embodiment of the present invention does not have any problems such as dead zone, uneven brightness, and ghosting, and the calculation is simple and the real-time performance is good.
  • the corrected 3D display method of the panoramic image of the embodiment of the present invention further includes the following steps:
  • Step S500: interacting with the 3D panoramic display image.
  • to complement the 3D display effect, the line-of-sight direction can be changed by moving the mouse (on computers and similar devices), by swiping a finger (on mobile phones, iPads and other mobile devices), or by shaking the device and using its built-in gravity sensing, so as to see the complete 3D scene and obtain an interactive 3D panoramic image.
  • FIGS. 6-1 to 6-7 and FIGS. 7-1 to 7-9 take the 360°×230° fisheye lens with the lens pointing horizontally downward and forward, respectively, as examples.
  • the definitions of the world coordinate system and of the five-dimensional vector (x, y, z, u, v), and the texture mapping model used, are the same as for the horizontally upward lens; the rendering effects for the external viewpoint and for different line-of-sight directions of the internal viewpoint are given.
  • Embodiment 2:
  • when the 3D model is established, the 3D model is interactive: a function system with adjustable parameters is adopted and the parameter values are dynamically adjusted to create a continuously variable 3D model. Importantly, users can instantly see the models they create and choose their favorite among them.
  • when the mapping relationship between model vertices and panoramic image coordinates is established, the mapping relationship is interactive: a function system with adjustable parameters is adopted and the parameter values are dynamically adjusted to create a continuously variable mapping relationship between model vertices and panoramic image coordinates.
  • regarding that mapping relationship, in the embodiment of the present invention, since the captured original image has rich, natural colors, it is used as an aesthetically pleasing texture image and not only for reproducing a real scene; the mapping relationship is therefore also interactively variable, the purpose being to make the texture pasted on the 3D model more beautiful.
  • Embodiment 2 of the present invention describes in detail the method of creating a continuously variable 3D model with a parameter-variable function system when interactively creating the 3D model.
  • the 3D model uses a coordinate system as shown in FIG. 8.
  • this coordinate system is consistent with the definition of the OpenGL world coordinate system (a Cartesian right-handed coordinate system), which is convenient for 3D rendering with OpenGL.
  • the spherical equation (3) is generalized into a parameter-variable function system, as shown in formula (4).
  • hx, hy and hz take positive real values and are used to change the proportions of the model in the x, y and z directions; they are called scale coefficients. tx, ty and tz take real values and are used to change the shape of the model in the x, y and z directions; they are called modeling coefficients. cy takes a real value and is used to set the position of the model on the y axis; it is called the position coefficient.
  • θ and φ are defined as in equation (3).
  • the parameter-variable functions can be chosen from a wide range; for example, real coefficients lx and lz can additionally be introduced in formula (4), transforming it into formula (4a).
  • FIG. 9 shows examples of changing the model by changing the scale coefficients hx, hy, hz; the model parameters are: FIG. 9(a): hx = hz = 0.1, hy = 1, tx = ty = tz = 0.5, cy = -0.5; FIG. 9(b): hx = hz = 0.5, hy = 1, tx = ty = tz = 0.5, cy = -0.5; FIG. 9(c): hx = hz = 0.5, hy = 0, tx = ty = tz = 0.5, cy = -0.5; FIG. 9(d): same as FIG. 9(c).
  • FIGS. 9(a), (b) and (c) are views looking from the viewpoint (0, 0, 4) along the -z axis toward the coordinate origin (0, 0, 0); FIG. 9(c) has degenerated into the disk plane y = -0.5.
  • FIG. 9(d) is the same model as FIG. 9(c) with a different line of sight: it looks from the viewpoint (0, 4, 0) along the -y axis toward the coordinate origin (0, 0, 0).
  • FIG. 9(e) is the partial view seen when the viewpoint of FIG. 9(d) is moved further toward the plane of the disk.
  • this view is similar to the Little Planet image generated by the existing Stereographic projection technique, shown in FIG. 9(f). Since the field of view of the original image is not 360°×360° but 360°×230°, a black hole is generated at the center of the stereographic-projection asteroid image.
  • in the embodiment, the asteroid view is instead a special case of the 3D model under specific parameters and a specific viewing angle (parameters as in FIG. 9(c), viewpoint (0, 4, 0), looking toward (0, 0, 0)); the center can be free of the black hole, producing a "cartoon" effect, see FIG. 9(e); by adjusting the parameters, various different asteroid views can be generated; further, a circular image of arbitrary radius, such as the LOGO in FIG. 9(g), can be placed at the center, making the effect approach the stereographic-projection asteroid image.
  • all of the models in FIG. 9 are texture-mapped with the fisheye image of FIG. 12(a) according to the mapping relationship described in Embodiment 3.
  • FIG. 10 shows examples of changing the model by changing the modeling coefficients tx, ty, tz; the texture image comes from FIG. 12(b), and the model parameters are: FIG. 10(a): hx = hy = hz = 1, tx = tz = 2.5, ty = 1.0, cy = -0.5; FIG. 10(b): hx = hy = hz = 1, tx = tz = 3.0, ty = 1.5, cy = -0.5; FIG. 10(c): hx = hy = hz = 1, tx = tz = 5.0, ty = 1.0, cy = -0.5.
  • FIGS. 9 and 10 all show models symmetric about the y axis. In fact, various asymmetric models can be created by making hx, hz or tx, tz unequal. For more 3D models see FIG. 11; the parameters are not listed here.
  • the texture images of FIGS. 11(a), (b), (c) and (d) come from FIGS. 12(c), (d), (e) and (f) respectively. It can be seen that the embodiment of the present invention takes panoramic display beyond display in the traditional sense and gives it an interactive artistic effect.
  • the 3D model is built from continuous functions and all parameters are real numbers, so it can easily be adjusted through interface interaction.
  • different gestures or keys can be defined to adjust different parameters, and the direction and distance of finger or mouse movement can be defined to represent parameter changes; the operations on the interface are converted into the coefficients of formula (4) and passed to the 3D module, thereby changing the displayed model.
  • Embodiment 3:
  • Embodiment 3 of the present invention describes in detail the process of interactively creating a continuously variable mapping relationship between model vertices and panoramic image coordinates.
  • take FIG. 1 as an example. Let the optical imaging model of the lens be r = f(θ), where r is the imaging radius and θ is the angle between the ray and the optical axis; for any vertex p(x, y, z) on the 3D model, the texture mapping relationship is in general r = k·f(θ), where k is a scale coefficient chosen so that the texture coordinates (u, v) map into the interval [0, 1].
  • this mapping relationship makes the 3D scene fairly realistic with little distortion, the degree of distortion depending on how closely the model function approximates the real optical function.
  • the embodiment of the present invention varies this on the basis of the optical model, as shown in formula (5): r = k·f(g(θ)), θ ∈ [0, θmax].
  • θmax is the maximum angle between the ray and the optical axis; the angle transformation function g(θ) is a continuous function on the interval [0, θmax]. Generally g(θ) = θ (the increasing straight line in FIG. 8); FIGS. 9, 10 and 11 use g(θ) = θmax - θ (the decreasing straight line in FIG. 8); both are simple linear functions.
  • further, a function curve with variable parameters can be selected, such as the curves represented by the two dashed lines in FIG. 8, to generate different texture mapping effects.
  • Embodiment 4:
  • when the 3D panoramic special-effect model is shared, the 3D model data and the panoramic image serving as the texture are uploaded to a server, embedded into HTML5 code by the server, and a network link is generated for access.
  • HTML5 technology is cross-platform, making it easy to browse HTML5-based websites on laptops, desktops and smartphones alike.
  • WebGL is a 3D drawing standard that provides hardware-accelerated 3D rendering for HTML5 through a unified, standard, cross-platform OpenGL interface, so that 3D scenes and models can be displayed more smoothly in the browser.
  • WebGL rendering code is embedded in an HTML5 web page and contains 3D rendering elements such as the 3D model, texture images, lights and materials.
  • when an HTML5 web page containing WebGL code is opened, a WebGL-enabled browser automatically runs the rendering code and displays the rendered 3D scene in the browser window.
  • WebGL supports custom 3D models. Therefore, when sharing the 3D panoramic special-effect model, only the 3D model data, such as the vertex buffer and index buffer, and the panoramic image serving as the texture need to be uploaded to the server, which automatically embeds them into the HTML5 code and generates a network link for client access.
  • 31) the mobile terminal generates the special-effect model of the embodiment of the present invention.
  • 32) after Share is clicked, the generated model data, namely the vertex array and the index array, are sent to the server as files, together with the panoramic image; the model data and the panoramic image are saved only on the server.
  • 33) the server generates an HTML5 link and sends it back to the mobile terminal.
  • the sharing of the 3D panoramic model of the embodiment of the present invention can be implemented by using the method in the embodiment of the present invention or directly using the 3D model included in the WebGL itself.
  • Embodiment 5:
  • the present invention further provides a corrected 3D display system for panoramic images, as shown in FIG. 13, comprising a calibration module 10, a model establishing module 20, a texture mapping relationship establishing module 30 and a binding module 40, wherein:
  • the calibration module 10 is configured to compute the original image to obtain the calibration parameters of the panoramic image;
  • the model establishing module 20 is configured to establish the 3D hemisphere model according to the calibration parameters of the panoramic image;
  • the texture mapping relationship establishing module 30 is configured to establish the texture mapping relationship between the vertices of the 3D hemisphere model and the panoramic image;
  • the binding module 40 is configured to bind the panoramic image according to the texture mapping relationship to obtain the 3D panoramic display image.
  • the texture mapping relationship establishing module 30 comprises a texture sub-module 31 and a calculation sub-module 32, wherein:
  • the texture sub-module 31 is configured to map the panoramic image as a texture image on the established 3D hemisphere model
  • the calculation sub-module 32 is configured to calculate the texture coordinates (u, v) by the formulas (1), (2).
  • as a preferred embodiment, in the corrected 3D display system for panoramic images of the embodiment of the present invention, the binding module 40 comprises a rendering sub-module 41 configured to perform 3D rendering on the bound panoramic image to obtain the rendered 3D panoramic display image.
  • the binding module 40 divides the 3D hemisphere model into a grid by latitude and longitude, takes the grid intersections as OpenGL/D3D vertices (Vertex), and describes each vertex by the obtained five-dimensional vector (x, y, z, u, v).
  • (x, y, z) is the 3D world coordinate of the vertex
  • (u, v) is the texture coordinate of the panoramic image
  • the set of all vertices constitutes the OpenGL/D3D renderable hemisphere geometry.
  • using OpenGL/D3D texture mapping technology, the panoramic image or one frame of a video is bound as the texture image; then the world, view and projection transformation matrices are set and the OpenGL/D3D drawing functions are called to draw the vertex set, and different rendering effects can be seen.
  • the embodiment of the invention further provides software corresponding to the corrected 3D display system for panoramic images taken by a fisheye lens, and a medium storing the software.
  • the working processes of this system, of the corresponding software and of the medium storing the software are basically the same as the corrected 3D display method for panoramic images taken by a fisheye lens of the embodiment, and are therefore not described again one by one.
  • the embodiment of the present invention further provides a corrected 3D special-effect display system for panoramic images, which includes a dynamic adjustment module 100 configured to adopt a function system with adjustable parameters and dynamically adjust the parameter values when the 3D model is created or when the mapping relationship between model vertices and panoramic image coordinates is established.
  • the dynamic adjustment module 100 includes a model creation submodule 110 and a mapping submodule 120.
  • the model creation sub-module 110 is configured to adopt a function system with adjustable parameters when the 3D model is created, and dynamically adjust the parameter values by user operation to create a continuously variable 3D model.
  • the mapping sub-module 120 is configured, when the mapping relationship between model vertices and panoramic image coordinates is established, to make the mapping relationship interactive: a function system with adjustable parameters is adopted and the parameter values are dynamically adjusted to create a continuously variable mapping relationship between model vertices and panoramic image coordinates.
  • the corrected 3D special-effect display system may further include a sharing module 200 configured, when the 3D panoramic model is shared, to upload the 3D model data and the panoramic image serving as the texture to the server, which embeds them into the HTML5 code and generates a network link for access.
  • the embodiment of the invention further provides software corresponding to the corrected 3D special-effect display system for panoramic images taken by a fisheye lens, and a medium storing the software.
  • the working processes of this system, of the corresponding software and of the medium storing the software are basically the same as the corrected 3D special-effect display method for panoramic images taken by a fisheye lens of the embodiment, and are therefore not described again one by one.
  • through simple operations such as finger swipes and mouse movements, the user can automatically create a series of specially shaped geometric bodies; further, the selected panoramic image is automatically mapped onto the geometry; through simple operations the textured geometry can be viewed from any angle, presenting a variety of previously unseen 3D display effects; still further, the generated 3D models can be instantly shared to the network, such as WeChat Moments, Weibo and forums, for more people to view. This breaks the fixed mindset of existing 3D panoramic display models, making the panoramic display more beautiful, cooler and more artistic, and expressing individuality in network sharing.
  • the steps of a method or algorithm described in connection with the embodiments disclosed herein can be implemented in hardware, a software module executed by a processor, or a combination of both.
  • the software module can reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A 3D panoramic display method and system for images taken by a fisheye lens. The method comprises the following steps: computing the original fisheye image to obtain the calibration parameters of the fisheye image (S100); establishing a 3D model (S200); establishing a texture mapping relationship between the vertices of the 3D model and the original fisheye image (S300); binding the original fisheye image according to the texture mapping relationship to obtain a 3D panoramic display image (S400). The method restores the panoramic image taken by the fisheye lens to a 3D scene; the computation is simple and fast, the real-time performance is good, and the image and/or video is smooth.

Description

Corrected 3D display method, system and device for panoramic images

TECHNICAL FIELD

The present invention relates to panoramic display technology, and in particular to a corrected 3D panoramic display method, system and device for panoramic images taken by a fisheye lens.
BACKGROUND

Panoramic technology falls mainly into two categories: stitched panoramas and fisheye panoramas. A stitched panorama is produced by stitching the pictures taken by multiple lenses placed at specific angles into one panoramic image, usually using two or more wide-angle lenses of 90° to 150°; a fisheye panorama is taken with a single fisheye lens whose horizontal and vertical fields of view usually equal or exceed 360°×180°.

The defects of stitched panoramas include: (1) inconsistent color and brightness across the multiple sensors, causing uneven brightness in the panoramic image; (2) lack of time synchronization between the sensors, causing tearing and crossing artifacts in the video; (3) insufficient stitching accuracy, leaving the stitched regions blurred and with ghost images; (4) blind zones in the areas close to the lenses.

A fisheye panorama has only one sensor and avoids all of the above defects of stitched panoramas.
In the prior art, 3D panoramic display comprises the steps of: (1) calibrating the panoramic image; (2) establishing a 3D model; (3) establishing a mapping relationship between model vertices and panoramic image coordinates; (4) 3D rendering; (5) viewing the 3D scene from different perspectives through interaction. Further, the 3D scene can be instantly shared on various networks.

When establishing the 3D model, whether for a single-lens fisheye panorama or a multi-lens stitched panorama, existing methods offer only a few common geometries: cube, sphere, hemisphere, cylinder and plane. The purpose is to present the captured or stitched panoramic image through the 3D model according to the visual habits of the human eye, so that an undistorted, immersive 3D scene with a sense of orientation and no blind spots is seen. Moreover, users can only choose among the existing models, with no possibility of changing them.

Furthermore, when establishing the mapping relationship between model vertices and panoramic image coordinates, existing methods depend entirely on the optical imaging model of the lens, such as the existing Equisolid angle model or Equidistant model, so as to minimize distortion when reproducing the 3D scene. The mapping between model vertices and panoramic image coordinates is therefore usually an increasing relationship in which the imaging radius grows as the angle between the ray and the optical axis increases; see the relationship between r and θ in FIG. 1.

Further, existing panoramic display techniques usually only correct the severely distorted panoramic image into a two-dimensional image close to the real scene. This approach loses part of the panoramic image information, shrinks the field of view, and reduces the result to the effect of an ordinary wide-angle lens.
SUMMARY OF THE INVENTION

In view of the above, the problem to be solved by the present invention is to propose a corrected 3D display method, system and device for panoramic images. By restoring the panoramic image taken by a fisheye lens to a 3D scene, the computation is simple, fast and real-time, and the image and/or video is smooth, so that the panoramic display can be more beautiful, cooler and more artistic, and can express individuality. It overcomes the fixed mindset of existing 3D panoramic display models, their small vertical field of view, unsatisfactory correction effect and loss of information, as well as the limitation of producing only 2D planar display effects.

The technical solutions adopted by the present invention are as follows:

The present invention provides a corrected 3D display method for panoramic images, comprising the following steps:

computing the original fisheye image to obtain the calibration parameters of the fisheye image;

establishing a 3D model;

establishing a texture mapping relationship between the vertices of the 3D model and the original fisheye image;

binding the original fisheye image according to the texture mapping relationship to obtain a 3D panoramic display image.

The present invention also provides a corrected 3D special-effect display method for panoramic images, comprising the following step:

when creating the 3D model or establishing the mapping relationship between model vertices and fisheye image coordinates, adopting a function system with adjustable parameters and dynamically adjusting the parameter values.

The present invention further provides systems and storage media corresponding to each of the two methods described above.

Compared with the prior art, the present invention has the following beneficial effects. The corrected 3D display method, system and device restore the panoramic image to a 3D scene using a model; the computation is simple, fast and real-time, and the image and/or video is smooth. They not only enrich the 3D panoramic display models, expanding the field of view beyond 360°×180°, without distortion and with a sense of immersion, making the panoramic display more beautiful, cooler and more artistic and helping users express their individuality; the process of creating a model is also visual, interactive, fun and creative, enhancing user engagement. Further, the GPU can be used directly for 3D rendering, so that an undistorted stereoscopic scene is seen as if one were there in person.

Of course, any product implementing the present invention need not achieve all of the above advantages at the same time.
BRIEF DESCRIPTION OF THE DRAWINGS

To explain the technical solutions of the embodiments of the present invention more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an existing spherical model.
FIG. 2 is a flowchart of the corrected 3D display method for panoramic images according to an embodiment of the present invention.
FIG. 3 is a schematic diagram of an image taken by a fisheye lens.
FIG. 4 is a schematic diagram of the calibration parameters of a panoramic image according to the present invention.
FIG. 5-1 is a schematic diagram of the hemisphere model (lens horizontally upward) of an embodiment of the present invention.
FIG. 5-2 is a schematic diagram of the mapping between the hemisphere model (lens horizontally upward) and the panoramic image.
FIG. 5-3 is a panoramic image (lens horizontally upward) used in an embodiment of the present invention.
FIG. 5-4 is the display image obtained by rendering the hemisphere model with OpenGL/D3D for FIG. 5-3, at an external viewpoint.
FIG. 5-5 is the display image for FIG. 5-3, internal viewpoint, horizontal line-of-sight direction (1).
FIG. 5-6 is the display image for FIG. 5-3, internal viewpoint, horizontal line-of-sight direction (2).
FIG. 5-7 is the display image for FIG. 5-3, internal viewpoint, vertical line-of-sight direction.
FIG. 6-1 is a schematic diagram of the hemisphere model (lens horizontally downward) of an embodiment of the present invention.
FIG. 6-2 is a schematic diagram of the mapping between the hemisphere model (lens horizontally downward) and the panoramic image.
FIG. 6-3 is an original panoramic image (lens horizontally downward) used in an embodiment of the present invention.
FIG. 6-4 is the display image obtained by rendering the hemisphere model of FIG. 6-1 with OpenGL/D3D, at an external viewpoint.
FIG. 6-5 is the display image for FIG. 6-1, internal viewpoint, horizontal line-of-sight direction (1).
FIG. 6-6 is the display image for FIG. 6-1, internal viewpoint, horizontal line-of-sight direction (2).
FIG. 6-7 is the display image for FIG. 6-1, internal viewpoint, vertical line-of-sight direction.
FIG. 7-1 is a schematic diagram of the hemisphere model (lens forward) of an embodiment of the present invention.
FIG. 7-2 is a schematic diagram of the mapping between the hemisphere model (lens forward) and the panoramic image.
FIG. 7-3 is an original image (lens forward) used in an embodiment of the present invention.
FIG. 7-4 is the display image obtained by rendering the hemisphere model of FIG. 7-1 with OpenGL/D3D, at an external viewpoint.
FIG. 7-5 is the display image for FIG. 7-1, internal viewpoint, horizontal line-of-sight direction (front).
FIG. 7-6 is the display image for FIG. 7-1, internal viewpoint, horizontal line-of-sight direction (left).
FIG. 7-7 is the display image for FIG. 7-1, internal viewpoint, horizontal line-of-sight direction (right).
FIG. 7-8 is the display image for FIG. 7-1, internal viewpoint, vertical line-of-sight direction (upward).
FIG. 7-9 is the display image for FIG. 7-3, internal viewpoint, vertical line-of-sight direction (downward).
FIG. 8 is a schematic diagram of the texture-mapping angle transformation function.
FIGS. 9(a), (b) and (c) are examples of changing the model by changing the scale coefficients; FIGS. 9(d), (e), (f) and (g) are various asteroid views.
FIGS. 10(a), (b) and (c) are examples of changing the model by changing the modeling coefficients.
FIGS. 11(a), (b), (c) and (d) are examples of further models, including asymmetric models.
FIGS. 12(a)-(f) are panoramic images taken by a single-lens panoramic camera, used as texture images in FIGS. 9, 10 and 11 respectively.
FIG. 13 is a schematic structural diagram of the corrected 3D display system for panoramic images according to an embodiment of the present invention.
FIG. 14 is a schematic structural diagram of the corrected 3D special-effect display system for images taken by a fisheye lens according to an embodiment of the present invention.
DETAILED DESCRIPTION

To make the purpose, features and advantages of the present invention clearer and easier to understand, the technical solutions of the corrected 3D display method, system and device for panoramic images are described below with reference to FIGS. 1-14 of the embodiments of the present invention. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them; all other embodiments obtained by those of ordinary skill in the art without creative effort based on the embodiments of the present invention fall within the scope of protection of the present invention.

The corrected 3D display method, system and device of the embodiments of the present invention are described in detail taking as an example a fisheye lens with horizontal and vertical fields of view of 360°×230°, whose captured panoramic image is restored to a 3D scene with a hemisphere model. The embodiments may also be applied to, but are not limited to, spherical models, cylindrical models, asteroid models, 360- or 180-degree cylindrical expansion models, longitude-corrected plane models, projection-corrected plane models, and the like.

As one possible implementation, the method and system of the embodiments are applicable not only to a 360°×230° fisheye lens; images taken by other fisheye lenses, such as fisheye lenses produced by Canon or Kodak, including but not limited to lenses with fields of view of 360°×180° or more, are equally applicable.

Further, since video consists of frames of images, the embodiments also apply to corrected 3D display of panoramic video taken by a fisheye lens.

Further, the corrected 3D display method, system and device of the embodiments adopt a function system with adjustable parameters and dynamically adjust parameter values through interaction to create a continuously variable 3D model; map the panoramic image onto the built 3D model through the texture mapping technology of 3D rendering to generate a 3D panoramic model; view the generated 3D panoramic model from different perspectives through interaction; and, combining HTML5, WebGL and other technologies, instantly share the 3D panoramic image to the network.
Embodiment 1:

As shown in FIG. 2, a specific implementation of the corrected 3D display method for panoramic images of the embodiment of the present invention is as follows:

Step S100: computing the original image to obtain the calibration parameters of the panoramic image.

Because sensors and image transmission systems differ, the size and aspect ratio of the obtained fisheye image, i.e. the original image, vary, but the effective information always lies within a circular area in the middle of the original image, as shown in FIG. 3.

The calibration parameters of the original image include, but are not limited to, the center coordinates (x0, y0) and the radius R0, as shown in the calibration parameter diagram of FIG. 4.

As one possible implementation, the computation on the original image may be implemented with a statistical algorithm; however, a statistical algorithm produces large deviations when there are many black pixels inside the scene area of the original image.

As another, preferred implementation, the computation may use a scan-line approximation algorithm. The scan-line approximation algorithm scans the original image line by line to obtain the central circular contour and computes the center coordinates (x0, y0) and the radius R0. The scan-line approximation algorithm is unaffected by black pixels inside the original image and overcomes the deficiency of the statistical algorithm.
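The patent specifies the scan-line approximation algorithm only at this level of detail. Purely as an illustration, a minimal Python sketch of such a line-by-line circle estimate might look as follows; the brightness threshold and the least-squares circle fit are assumptions of the sketch, not of the patent.

```python
import numpy as np

def calibrate_fisheye(img, threshold=10):
    """Estimate the circular effective area of a fisheye image.

    Scans the image row by row, records the first and last pixel per row
    whose brightness exceeds `threshold` (the circle's edge on that row),
    and fits the circle (x0, y0, R0) to those edge points.
    """
    gray = img.mean(axis=2) if img.ndim == 3 else img
    xs, ys = [], []
    for y, row in enumerate(gray):
        bright = np.flatnonzero(row > threshold)
        if bright.size >= 2:                # this scan line crosses the circle
            xs.extend([bright[0], bright[-1]])
            ys.extend([y, y])
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    # Least-squares circle fit: x^2 + y^2 + a*x + b*y + c = 0
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    rhs = -(xs**2 + ys**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    x0, y0 = -a / 2.0, -b / 2.0
    R0 = np.sqrt(x0**2 + y0**2 - c)
    return x0, y0, R0
```

Because the estimate uses only the outer contour found on each scan line, black pixels inside the circular scene area do not affect it, which matches the advantage the patent claims over the statistical approach.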
Step S200: constructing a hemisphere of radius R for the calibrated panoramic image to establish the 3D hemisphere model.

For the calibrated panoramic image, a hemisphere of radius R is constructed in the world coordinate system so that its vertical field of view coincides with the effective field of view of the lens.

FIG. 5-1 takes a 360°×230° fisheye lens pointing horizontally upward as an example. The vertical field of view of the hemisphere model corresponding to this lens is also 230°; the angle with the y axis is half the vertical field of view, i.e. 115°, which is 25° more than the vertical angle of a 90° hemisphere model. The coordinate system in FIG. 5-1 is defined consistently with the world coordinate system of the OpenGL system (a Cartesian right-handed coordinate system), which facilitates 3D rendering with OpenGL.

If D3D is used for rendering, the world coordinate system of the model is preferably the one adopted by D3D, i.e. a Cartesian left-handed coordinate system.

As another possible implementation, the hemisphere model itself may be independent of the fisheye parameters, e.g. a unit hemisphere with R = 1.

Further, the embodiment of the present invention also provides an interactive 3D model, so that the corrected 3D display can be interacted with by the user; see the detailed description of Embodiment 2.
Step S300: establishing the texture mapping relationship between the vertices of the 3D hemisphere model and the panoramic image.

Preferably, step S300 comprises the following steps:

Step S310: mapping the panoramic image as a texture image onto the 3D hemisphere model.

The panoramic image is mapped as a texture image onto the established 3D hemisphere model; an accurate mapping relationship ensures that the restored 3D scene is undistorted.

FIG. 5-2 is a schematic diagram of the mapping between the hemisphere model and the panoramic image. In FIG. 5-2, θ denotes the angles between the model point coordinates (x, y, z) and the x and y axes, R is the model radius, and r is the distance from the image mapping point to the image center.

The embodiment of the present invention may use trigonometric mapping models, e.g. Orthographic, Equisolid angle and Stereographic, whose mapping functions are r = f·sin(θ), r = 2f·sin(θ/2) and r = 2f·tan(θ/2) respectively, where f is the focal length of the fisheye lens. The disadvantage of these models is that the distortion increases as θ increases, and the effect is unsatisfactory.

Preferably, the embodiment of the present invention may instead use the polynomial r = k0 + k1·θ + k2·θ² + … + kn·θⁿ, where k0 … kn are constant coefficients and n is a positive integer, which overcomes the growing distortion.

Step S320: the texture coordinates (u, v) are calculated by the following formulas (1) and (2), where (x0, y0) are the center coordinates of the panoramic image and W, H are the image width and height:

u = (r·cos(θ) + x0)/W    (1)

v = (r·sin(θ) + y0)/H    (2)
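As an illustration of formulas (1) and (2) combined with the polynomial mapping of step S310, a minimal Python sketch might look as follows. Note that the patent reuses θ both for the angle to the optical axis and for the in-plane angle of the mapping point; the sketch separates them as theta and phi, and the default image size and polynomial coefficients are placeholder assumptions, not calibrated values.

```python
import numpy as np

def texture_coords(x, y, z, x0, y0, W, H,
                   k=(0.0, 512.0 / np.radians(115.0))):
    """Map a model vertex (x, y, z) to texture coordinates (u, v).

    theta : angle between the vertex direction and the y axis (optical axis)
    phi   : azimuth of the vertex projected onto the x-z plane
    r     : imaging radius from the polynomial r = k0 + k1*theta + ...
    The default k gives r = 512 pixels at the 115 degree rim; real
    coefficients would come from calibrating the particular lens.
    """
    rho = np.sqrt(x * x + y * y + z * z)
    theta = np.arccos(y / rho)                  # angle from the optical axis
    phi = np.arctan2(z, x)                      # azimuth in the x-z plane
    r = sum(ki * theta**i for i, ki in enumerate(k))   # polynomial model
    u = (r * np.cos(phi) + x0) / W              # formula (1)
    v = (r * np.sin(phi) + y0) / H              # formula (2)
    return u, v
```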
Step S400: binding the panoramic image according to the texture mapping relationship and performing 3D rendering to obtain the rendered 3D panoramic display image.

The embodiment of the present invention uses a 3D rendering technology, such as OpenGL (Open Graphics Library), D3D (Direct3D) or another 3D rendering technology, to paste the panoramic image, as a texture image, onto the built 3D hemisphere model according to the set texture mapping relationship; the undistorted stereoscopic scene is finally obtained by rendering the 3D hemisphere model.

Step S400 comprises the following steps:

Step S410: dividing the 3D hemisphere model into a grid by latitude and longitude, taking the grid intersections as OpenGL/D3D vertices (Vertex), and re-describing each vertex as the five-dimensional vector (x, y, z, u, v) composed of the texture coordinates (u, v) of the panoramic image obtained in step S300 together with the three-dimensional world coordinates (x, y, z) of the grid-intersection vertex; see FIG. 5-2. The set of all vertices constitutes the hemisphere geometry renderable by OpenGL/D3D.
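A minimal sketch of step S410, building the interleaved (x, y, z, u, v) vertex grid for a hemisphere with a 230° vertical field of view, might look as follows; the grid resolution, default image parameters and the texture_coords helper from the previous sketch are illustrative assumptions.

```python
import numpy as np

def hemisphere_vertices(R=1.0, v_fov_deg=230.0, n_lat=64, n_lon=128,
                        x0=512.0, y0=512.0, W=1024.0, H=1024.0):
    """Build the interleaved (x, y, z, u, v) vertex grid of step S410."""
    theta_max = np.radians(v_fov_deg / 2.0)     # 115 degrees from the y axis
    verts = []
    for i in range(n_lat + 1):                  # latitude rings
        theta = theta_max * i / n_lat
        for j in range(n_lon + 1):              # longitude divisions
            phi = 2.0 * np.pi * j / n_lon
            x = R * np.sin(theta) * np.cos(phi)
            y = R * np.cos(theta)               # optical axis along +y
            z = R * np.sin(theta) * np.sin(phi)
            u, v = texture_coords(x, y, z, x0, y0, W, H)
            verts.append((x, y, z, u, v))
    return np.asarray(verts, dtype=np.float32)  # upload as a vertex buffer
```

Triangle indices connecting neighboring latitude rings would then be generated in the usual grid pattern and drawn with glDrawElements or the corresponding D3D call.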
Step S420: using OpenGL/D3D texture mapping technology, binding the panoramic image as a texture image, setting the matrices of the world, view and projection transformations, and calling the OpenGL/D3D drawing functions to draw the vertex set, obtaining different rendering effects.

Using OpenGL/D3D texture mapping technology, a fisheye photo of the panoramic image or one frame of a video is bound as the texture image; it then suffices to set the transformation matrix of the world, view or projection transformation and to call the OpenGL/D3D drawing functions to draw the vertex set in order to see the different rendering effects.

FIG. 5-4 is the rendering of FIG. 5-3 with the viewpoint outside the hemisphere model and the line of sight facing the hemisphere;

FIGS. 5-5, 5-6 and 5-7 are renderings of FIG. 5-3 with the viewpoint inside the hemisphere model and the line of sight facing outward in different directions.

It can be seen that, in this mode, the severely distorted original image of FIG. 5-3 has been corrected into an undistorted, normal panoramic image, far better than other general methods. Moreover, by changing the line-of-sight direction, a stereoscopic view covering 360° horizontally and reaching up to the zenith can be seen. The information of the original image is fully used without any loss, and the result is indistinguishable from the real environment, with a strong stereoscopic effect.

Since the texture coordinates act directly on the panoramic image, with no intermediate conversion, computation is fast; the present invention performs excellently both for real-time panoramic video streams and for panoramic video file playback.

Compared with stitched panoramas, the embodiment of the present invention has no blind zones, uneven brightness or ghosting, and the computation is simple with good real-time performance.
Further, the corrected 3D display method for panoramic images of the embodiment of the present invention also comprises the following step:

Step S500: interacting with the 3D panoramic display image.

To complement the 3D display effect, the line-of-sight direction can be changed by moving the mouse (on computers and similar devices), by swiping a finger (on mobile phones, iPads and other mobile devices), or by shaking the device and using its built-in gravity sensing, so as to see the complete 3D scene and obtain an interactive 3D panoramic image.

FIGS. 6-1 to 6-7 and FIGS. 7-1 to 7-9 take the 360°×230° fisheye lens with the lens pointing horizontally downward and forward, respectively, as examples. In these figures, the definitions of the world coordinate system and of the five-dimensional vector (x, y, z, u, v), and the texture mapping model used, are the same as for the horizontally upward lens; the rendering effects for the external viewpoint and for different line-of-sight directions of the internal viewpoint are given.
Embodiment 2:

As one possible implementation, further, in the corrected 3D display method for panoramic images, when the 3D model is established, the 3D model is interactive: a function system with adjustable parameters is adopted and the parameter values are dynamically adjusted to create a continuously variable 3D model. Importantly, the user can instantly see the created models and choose a favorite among them.

As one possible implementation, when the mapping relationship between model vertices and panoramic image coordinates is established, the mapping relationship is interactive: a function system with adjustable parameters is adopted and the parameter values are dynamically adjusted to create a continuously variable mapping relationship between model vertices and panoramic image coordinates.

Regarding that mapping relationship, in the embodiment of the present invention, since the captured original image has rich, natural colors, it serves as an aesthetically pleasing texture image and not only for reproducing a real scene; the mapping relationship is therefore also interactively variable in the embodiment, the purpose being to make the texture pasted on the 3D model more beautiful.

Embodiment 2 of the present invention describes in detail the process of creating a continuously variable 3D model with a parameter-variable function system when interactively creating the 3D model.

The 3D model of the embodiment uses the coordinate system shown in FIG. 8, which is consistent with the definition of the OpenGL world coordinate system (a Cartesian right-handed coordinate system) and facilitates 3D rendering with OpenGL.
FIG. 8 contains a spherical model. In general, letting the coordinates of any point p on the sphere be (x, y, z), the spherical surface is given by equation (3), where θ is the angle between the vector op and the y axis, φ is the angle between the projection of op onto the xoz plane and the x axis, and R is the radius:

x = R·sin(θ)·cos(φ), y = R·cos(θ), z = R·sin(θ)·sin(φ)    (3)

In the embodiment of the present invention, to make the shape of the 3D model variable, the spherical equation (3) is generalized into a function system with variable parameters, formula (4) (rendered only as an image in the published text). In formula (4), hx, hy and hz take positive real values and are used to change the proportions of the model in the x, y and z directions; they are called scale coefficients. tx, ty and tz take real values and are used to change the shape of the model in the x, y and z directions; they are called modeling coefficients. cy takes a real value and is used to set the position of the model on the y axis; it is called the position coefficient. θ and φ are defined as in equation (3).
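The published text renders formula (4) only as an image. Purely as an illustration, one possible reconstruction consistent with the stated roles of the coefficients, and with FIG. 9(c), where hy = 0 and cy = -0.5 collapse the model onto the plane y = -0.5, can be sketched in Python as follows; the exact placement of the modeling coefficients inside the trigonometric terms is an assumption of the sketch, not a quotation of the patent.

```python
import numpy as np

def model_point(theta, phi, R=1.0,
                hx=1.0, hy=1.0, hz=1.0,   # scale coefficients
                tx=1.0, ty=1.0, tz=1.0,   # modeling coefficients
                cy=0.0):                  # position coefficient
    """Assumed reconstruction of the parameter-variable function system (4).

    With hx = hy = hz = tx = ty = tz = 1 and cy = 0 it reduces to the
    sphere of equation (3): x = R sin(theta) cos(phi), y = R cos(theta),
    z = R sin(theta) sin(phi).
    """
    x = hx * R * np.sin(tx * theta) * np.cos(phi)
    y = hy * R * np.cos(ty * theta) + cy
    z = hz * R * np.sin(tz * theta) * np.sin(phi)
    return x, y, z

# FIG. 9(c): hy = 0 with cy = -0.5 collapses the model onto the plane y = -0.5.
theta, phi = np.radians(60.0), np.radians(30.0)
print(model_point(theta, phi, hx=0.5, hy=0.0, hz=0.5,
                  tx=0.5, ty=0.5, tz=0.5, cy=-0.5))
```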
As one possible implementation, the parameter-variable functions can be chosen from a wide range. For example, real coefficients lx and lz can additionally be introduced into formula (4), transforming it into formula (4a) (likewise rendered only as an image in the published text); or other trigonometric functions, polynomial functions and so on with variable parameters can be used.

Formula (4) is taken here only as an example for the detailed description. Obviously, when the parameters hx and hz are equal and tx and tz are equal, the model is symmetric about the y axis.
FIG. 9 shows examples of changing the model by changing the scale coefficients hx, hy, hz; the model parameters are:

FIG. 9(a): hx = hz = 0.1; hy = 1; tx = ty = tz = 0.5; cy = -0.5;
FIG. 9(b): hx = hz = 0.5; hy = 1; tx = ty = tz = 0.5; cy = -0.5;
FIG. 9(c): hx = hz = 0.5; hy = 0; tx = ty = tz = 0.5; cy = -0.5;
FIG. 9(d): same as FIG. 9(c).

FIGS. 9(a), (b) and (c) are views looking from the viewpoint (0, 0, 4) along the -z axis toward the coordinate origin (0, 0, 0). FIG. 9(c) has degenerated into the disk plane y = -0.5. FIG. 9(d) is the same model as FIG. 9(c) with a different line of sight: it looks from the viewpoint (0, 4, 0) along the -y axis toward the coordinate origin (0, 0, 0).

FIG. 9(e) is the partial view seen when the viewpoint of FIG. 9(d) is moved further toward the plane of the disk. This view is similar to the Little Planet image generated by the existing Stereographic projection technique, shown in FIG. 9(f). Since the field of view of the original image is not 360°×360° but 360°×230°, a black hole appears at the center of the stereographic-projection asteroid image. Unlike stereographic projection, in the embodiment of the present invention the asteroid view is a special case of the 3D model under specific parameters and a specific viewing angle (parameters as in FIG. 9(c), viewpoint (0, 4, 0), looking toward (0, 0, 0)); not only can the center be free of the black hole, producing a "cartoon" effect, see FIG. 9(e), but by adjusting the parameters various different asteroid views can be generated; further, a circular image of arbitrary radius, such as the LOGO in FIG. 9(g), can be placed at the center, making the effect approach the stereographic-projection asteroid image.

All of the models in FIG. 9 are texture-mapped with the fisheye image of FIG. 12(a) according to the mapping relationship described in Embodiment 3 below.
FIG. 10 shows examples of changing the model by changing the modeling coefficients tx, ty, tz; the texture image comes from FIG. 12(b), and the model parameters are:

FIG. 10(a): hx = hy = hz = 1; tx = tz = 2.5; ty = 1.0; cy = -0.5;
FIG. 10(b): hx = hy = hz = 1; tx = tz = 3.0; ty = 1.5; cy = -0.5;
FIG. 10(c): hx = hy = hz = 1; tx = tz = 5.0; ty = 1.0; cy = -0.5.

The examples of FIGS. 9 and 10 are all models symmetric about the y axis. In fact, various asymmetric models can be created by making hx, hz or tx, tz unequal. For more 3D models see FIG. 11; the parameters are not listed one by one here. The texture images of FIGS. 11(a), (b), (c) and (d) come from FIGS. 12(c), (d), (e) and (f) respectively. It can be seen that the embodiment of the present invention takes panoramic display beyond display in the traditional sense, with an interactive artistic effect.

In the embodiment of the present invention, the 3D model is built from continuous functions and all parameters are real numbers, so it can easily be adjusted through interface interaction. For example, different gestures or keys can be defined to adjust different parameters, and the direction and distance of finger or mouse movement can be defined to represent parameter changes; the operations on the interface are converted into the coefficients of formula (4) and passed to the 3D module, thereby changing the displayed 3D model. A small illustrative sketch of such a conversion follows.
Embodiment 3:

Embodiment 3 of the present invention describes in detail the process of interactively creating a continuously variable mapping relationship between model vertices and panoramic image coordinates.

Take FIG. 1 as an example. Let the optical imaging model of the lens be r = f(θ), where r is the imaging radius and θ is the angle between the ray and the optical axis. Then for any vertex p(x, y, z) on the 3D model, the texture mapping relationship is in general r = k·f(θ), where k is a scale coefficient chosen so that the texture coordinates (u, v) map into the interval [0, 1]. This mapping relationship makes the 3D scene fairly realistic with little distortion, the degree of distortion depending on how closely the model function approximates the real optical function.

The embodiment of the present invention varies this on the basis of the optical model, as shown in formula (5):

r = k·f(g(θ)), θ ∈ [0, θmax]    (5)

In formula (5), θmax is the maximum angle between the ray and the optical axis; the angle transformation function g(θ) is a continuous function on the interval [0, θmax].

Generally, the texture mapping relationship uses g(θ) = θ, the increasing straight line in FIG. 8; in the embodiment of the present invention, FIGS. 9, 10 and 11 all use g(θ) = θmax - θ, the decreasing straight line in FIG. 8. Both mappings are simple linear functions. Further, a function curve with variable parameters can be selected, such as the curves represented by the two dashed lines in FIG. 8, to produce different texture mapping effects.
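As an illustration of formula (5) with the decreasing linear transformation used for FIGS. 9-11, a small sketch follows; the equidistant optical model f(θ) = f·θ is an assumed stand-in for the real lens function.

```python
import numpy as np

THETA_MAX = np.radians(115.0)        # half of the 230 degree field of view

def g(theta, theta_max=THETA_MAX):
    """Angle transformation used for FIGS. 9-11: g(theta) = theta_max - theta."""
    return theta_max - theta

def imaging_radius(theta, f=1.0, k=1.0):
    """Formula (5): r = k * f(g(theta)), with an assumed equidistant f(x) = f*x."""
    return k * f * g(theta)

# Rays along the optical axis now map to the image rim and vice versa,
# which is what reverses the texture and produces the artistic effects.
print(imaging_radius(np.array([0.0, THETA_MAX / 2.0, THETA_MAX])))
```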
Embodiment 4:

When the 3D panoramic special-effect model is shared, the 3D model data and the panoramic image serving as the texture are uploaded to a server, embedded by the server into HTML5 code, and a network link is generated for access.

Once the 3D special-effect panoramic model has been created, instant network sharing and dissemination are accomplished with technologies such as HTML5 and WebGL. HTML5 is cross-platform: HTML5-based websites can be browsed conveniently on laptops, desktops and smartphones alike. WebGL is a 3D drawing standard that provides hardware-accelerated 3D rendering for HTML5 through a unified, standard, cross-platform OpenGL interface, so that 3D scenes and models can be displayed more smoothly in the browser.

Generally, most browsers already support HTML5 and WebGL; with both supported, instant network sharing of 3D panoramas becomes possible. Besides WebGL there are other web 3D rendering technologies; WebGL is used here only as an example.

WebGL rendering code is embedded in an HTML5 web page and contains 3D rendering elements such as the 3D model, texture images, lights and materials.

When an HTML5 web page containing WebGL code is opened, a WebGL-enabled browser automatically runs the rendering code and displays the rendered 3D scene in the browser window.

WebGL supports custom 3D models. Therefore, when sharing the 3D panoramic special-effect model, only the 3D model data, such as the vertex buffer and index buffer, and the panoramic image serving as the texture need to be uploaded to the server; the server automatically embeds them into the HTML5 code and generates a network link for client access.

As one possible implementation, for example:

31) The mobile terminal generates the special-effect model of the embodiment of the present invention.

32) After Share is clicked, the generated model data, namely the vertex array and the index array, are sent to the server as files, together with the panoramic image. The model data and the panoramic image are saved only on the server.

33) The server generates an HTML5 link and sends it back to the mobile terminal.

The sharing of the 3D panoramic model of the embodiment of the present invention can be implemented with the method of the embodiment or directly with the 3D models included in WebGL itself.
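Purely as an illustration of steps 31) to 33), a minimal server-side sketch using Flask might look as follows; the route names, field names and page markup are assumptions of the sketch, not part of the patent.

```python
# pip install flask  (a hypothetical minimal sharing endpoint)
import uuid
from flask import Flask, request, url_for

app = Flask(__name__)
STORE = {}   # in-memory stand-in for server-side storage

@app.route("/share", methods=["POST"])
def share():
    """Receive vertex array, index array and panoramic texture (step 32)."""
    model_id = uuid.uuid4().hex
    STORE[model_id] = {
        "vertices": request.files["vertices"].read(),
        "indices": request.files["indices"].read(),
        "texture": request.files["texture"].read(),
    }
    # Step 33: return an HTML5 link for the mobile terminal to share.
    return {"url": url_for("view", model_id=model_id, _external=True)}

@app.route("/view/<model_id>")
def view(model_id):
    """Serve an HTML5 page whose embedded WebGL code loads the model."""
    return (f"<html><body><canvas id='gl' data-model='{model_id}'></canvas>"
            f"<script src='/static/render.js'></script></body></html>")

if __name__ == "__main__":
    app.run()
```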
Embodiment 5:

To solve the problems of the prior art, the present invention also provides a corrected 3D display system for panoramic images, as shown in FIG. 13, comprising a calibration module 10, a model establishing module 20, a texture mapping relationship establishing module 30 and a binding module 40, wherein:

the calibration module 10 is configured to compute the original image to obtain the calibration parameters of the panoramic image;

the model establishing module 20 is configured to establish the 3D hemisphere model according to the calibration parameters of the panoramic image;

the texture mapping relationship establishing module 30 is configured to establish the texture mapping relationship between the vertices of the 3D hemisphere model and the panoramic image;

the binding module 40 is configured to bind the panoramic image according to the texture mapping relationship to obtain the 3D panoramic display image.

As a preferred embodiment, the texture mapping relationship establishing module 30 comprises a texture sub-module 31 and a calculation sub-module 32, wherein:

the texture sub-module 31 is configured to map the panoramic image as a texture image onto the established 3D hemisphere model;

the calculation sub-module 32 is configured to calculate the texture coordinates (u, v) by formulas (1) and (2).

As a preferred embodiment, in the corrected 3D display system for panoramic images of the embodiment of the present invention, the binding module 40 comprises a rendering sub-module 41 configured to perform 3D rendering on the bound panoramic image to obtain the rendered 3D panoramic display image.

The binding module 40 divides the 3D hemisphere model into a grid by latitude and longitude, takes the grid intersections as OpenGL/D3D vertices (Vertex), and describes each vertex by the obtained five-dimensional vector (x, y, z, u, v), where (x, y, z) are the three-dimensional world coordinates of the vertex and (u, v) are the texture coordinates of the panoramic image; the set of all vertices constitutes the hemisphere geometry renderable by OpenGL/D3D.

Using OpenGL/D3D texture mapping technology, the panoramic image or one frame of a video is bound as the texture image; then the world, view and projection transformation matrices are set and the OpenGL/D3D drawing functions are called to draw the vertex set, and different rendering effects can be seen.
The embodiment of the present invention also provides software corresponding to the corrected 3D display system for panoramic images taken by a fisheye lens, and a medium storing the software.

The working processes of the corrected 3D display system for panoramic images taken by a fisheye lens of the embodiment, of the software corresponding to the system, and of the medium storing the software are basically the same as the 3D display method for panoramic images taken by a fisheye lens of the embodiment; they are therefore not described again one by one.

Further, to achieve the purpose of the embodiments of the present invention, as shown in FIG. 14, the embodiment also provides a corrected 3D special-effect display system for panoramic images, comprising a dynamic adjustment module 100 configured to adopt a function system with adjustable parameters and dynamically adjust the parameter values when the 3D model is created or when the mapping relationship between model vertices and panoramic image coordinates is established.

As one possible implementation, the dynamic adjustment module 100 comprises a model creation sub-module 110 and a mapping sub-module 120.

The model creation sub-module 110 is configured, when the 3D model is created, to adopt a function system with adjustable parameters and dynamically adjust the parameter values through user operations, creating a continuously variable 3D model.

The mapping sub-module 120 is configured, when the mapping relationship between model vertices and panoramic image coordinates is established, to make the mapping relationship interactive: a function system with adjustable parameters is adopted and the parameter values are dynamically adjusted, creating a continuously variable mapping relationship between model vertices and panoramic image coordinates.

The corrected 3D special-effect display system may further comprise a sharing module 200 configured, when the 3D panoramic model is shared, to upload the 3D model data and the panoramic image serving as the texture to the server, which embeds them into HTML5 code and generates a network link for access.

The embodiment of the present invention also provides software corresponding to the corrected 3D special-effect display system for panoramic images taken by a fisheye lens, and a medium storing the software. Their working processes are basically the same as the corrected 3D special-effect display method for panoramic images taken by a fisheye lens of the embodiment, and are therefore not described again one by one.

In the embodiment of the present invention, through simple operations such as finger swipes and mouse movements, the user can automatically create a series of specially shaped geometric bodies; further, the selected panoramic image is automatically mapped onto the geometry; through simple operations the textured geometry can be viewed from any angle, presenting a variety of previously unseen 3D display effects; still further, the generated 3D models can be instantly shared to the network, such as WeChat Moments, Weibo and forums, for more people to view. This breaks the fixed mindset of existing 3D panoramic display models, making the panoramic display more beautiful, cooler and more artistic, and expressing individuality in network sharing.
The embodiments in this specification are described in a progressive manner; for identical or similar parts the embodiments may refer to one another, and each embodiment focuses on its differences from the others. In particular, the system embodiments, being similar to the method embodiments, are described relatively simply; for the relevant parts, refer to the description of the method embodiments. The system, software and medium embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, i.e. they may be located in one place or distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.

Those of ordinary skill in the art will further appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are performed in hardware or in software depends on the specific application and the design constraints of the technical solution. Those of ordinary skill may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present invention.

The steps of a method or algorithm described in connection with the embodiments disclosed herein can be implemented in hardware, in a software module executed by a processor, or in a combination of the two. The software module can reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.

The specific embodiments described above further explain the purpose, technical solutions and beneficial effects of the present invention in detail. It should be understood that the above are only specific embodiments of the present invention and are not intended to limit the scope of protection of the present invention; any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (26)

  1. A corrected 3D display method for panoramic images, characterized by comprising the following steps:
    computing the original fisheye image to obtain the calibration parameters of the fisheye image;
    establishing a 3D model;
    establishing a texture mapping relationship between the vertices of the 3D model and the original fisheye image;
    binding the original fisheye image according to the texture mapping relationship to obtain a 3D panoramic display image.
  2. The corrected 3D display method according to claim 1, characterized in that, after the original fisheye image is bound, the method further comprises the following step:
    performing 3D rendering after binding the original fisheye image, to obtain the rendered 3D panoramic display image.
  3. The corrected 3D display method according to claim 1 or 2, characterized by further comprising the following step:
    interacting with the 3D panoramic display image.
  4. The corrected 3D display method according to claim 2, characterized in that the computation on the fisheye image is implemented with a statistical algorithm or a scan-line approximation algorithm.
  5. The corrected 3D display method according to claim 4, characterized in that establishing the texture mapping relationship comprises the following steps:
    mapping the original fisheye image as a texture image onto the established 3D model;
    calculating the texture coordinates (u, v).
  6. The corrected 3D display method according to claim 5, characterized in that the mapping uses a trigonometric function model or a polynomial model.
  7. The corrected 3D display method according to claim 6, characterized in that the polynomial is r = k0 + k1·θ + k2·θ² + … + kn·θⁿ, where k0 … kn are constant coefficients and n is a positive integer;
    the formulas for calculating the texture coordinates (u, v) are:
    u = (r·cos(θ) + x0)/W
    v = (r·sin(θ) + y0)/H
    where (x0, y0) are the center coordinates of the fisheye image, and W and H are the image width and height.
  8. The corrected 3D display method according to claim 7, characterized in that binding the original fisheye image comprises the following steps:
    dividing the 3D model into a grid by latitude and longitude, taking the grid intersections as vertices and describing the vertices by the obtained five-dimensional vector (x, y, z, u, v), where (x, y, z) are the three-dimensional world coordinates of a vertex and (u, v) are the texture coordinates of the fisheye image, the set of all vertices constituting a renderable geometry;
    using texture mapping technology to bind the original fisheye image as a texture image, setting the world, view and projection transformation matrices, and calling drawing functions to draw the vertex set to obtain different effects.
  9. A corrected 3D special-effect display method for panoramic images, characterized by comprising the following step:
    when creating the 3D model or establishing the mapping relationship between model vertices and fisheye image coordinates, adopting a function system with adjustable parameters and dynamically adjusting the parameter values.
  10. The corrected 3D special-effect display method according to claim 9, characterized in that adopting a function system with adjustable parameters when creating the 3D model comprises the following steps:
    when creating the 3D model, adopting a function system with adjustable parameters and dynamically adjusting the parameter values through user operations to create a continuously variable 3D model;
    the 3D model being built from continuous functions, with all parameters being real numbers.
  11. The corrected 3D special-effect display method according to claim 10, characterized in that the parameter-variable function is a trigonometric function or a polynomial function with variable parameters.
  12. The corrected 3D special-effect display method according to claim 10, characterized in that the parameter-variable function is the function system of formula (4), reproduced as an image in the original publication, in which:
    the coordinates of any point p on the model are (x, y, z); θ is the angle between op and the y axis, and φ is the angle between the projection of op onto the xoz plane and the x axis; hx, hy and hz take positive real values and are used to change the proportions of the model in the x, y and z directions, and are called scale coefficients; tx, ty and tz take real values and are used to change the shape of the model in the x, y and z directions, and are called modeling coefficients; cy takes a real value and is used to set the position of the model on the y axis, and is called the position coefficient.
  13. The corrected 3D special-effect display method according to claim 12, characterized in that the parameter-variable function is the function system of formula (4a), reproduced as an image in the original publication, in which the coefficients lx and lz are real numbers.
  14. The corrected 3D special-effect display method according to any one of claims 9 to 13, characterized in that adopting a function system with adjustable parameters when establishing the mapping relationship between model vertices and fisheye image coordinates comprises the following step:
    when establishing the mapping relationship between model vertices and panoramic image coordinates, the mapping relationship is interactive; a function system with adjustable parameters is adopted and the parameter values are dynamically adjusted to create a continuously variable mapping relationship between model vertices and fisheye image coordinates.
  15. The corrected 3D special-effect display method according to claim 14, characterized in that, letting the optical imaging model of the lens be r = f(θ), where r is the imaging radius and θ is the angle between the ray and the optical axis,
    then for any vertex P(x, y, z) on the 3D model, the texture mapping relationship is:
    r = k·f(g(θ)), θ ∈ [0, θmax]
    where k is a scale coefficient, θmax is the maximum angle between the ray and the optical axis, and the angle transformation function g(θ) is a continuous function on the interval [0, θmax].
  16. The corrected 3D special-effect display method according to claim 15, characterized in that the angle transformation function is:
    g(θ) = θmax - θ.
  17. The corrected 3D special-effect display method according to claim 16, characterized by further comprising the following step:
    when the 3D panoramic special-effect model is shared, uploading the 3D model data and the fisheye image serving as the texture to a server, which embeds them into HTML5 code and generates a network link for access.
  18. A corrected 3D display system for panoramic images, characterized by comprising a calibration module, a model establishing module, a texture mapping relationship establishing module, and a binding module;
    wherein:
    the calibration module is configured to compute the original fisheye image to obtain the calibration parameters of the fisheye image;
    the model establishing module is configured to establish a 3D model;
    the texture mapping relationship establishing module is configured to establish a texture mapping relationship between the vertices of the 3D model and the original fisheye image;
    the binding module is configured to bind the original fisheye image according to the texture mapping relationship to obtain a 3D panoramic display image.
  19. The corrected 3D display system according to claim 18, characterized in that the texture mapping relationship establishing module comprises a texture sub-module and a calculation sub-module, wherein:
    the texture sub-module is configured to map the original fisheye image as a texture image onto the established 3D model;
    the calculation sub-module is configured to calculate the texture coordinates (u, v).
  20. The corrected 3D display system according to claim 18, characterized in that the binding module comprises a rendering sub-module configured to perform 3D rendering on the bound original fisheye image to obtain the rendered 3D panoramic display image.
  21. The corrected 3D display system according to claim 18, characterized by further comprising an interaction module configured to interact with the 3D panoramic display image.
  22. A storage medium, characterized by comprising a storage medium storing the modules of the corrected 3D display system for panoramic images according to any one of claims 18 to 21.
  23. A corrected 3D special-effect display system for panoramic images, characterized by comprising a dynamic adjustment module configured to adopt a function system with adjustable parameters and dynamically adjust the parameter values when the 3D model is created or when the mapping relationship between model vertices and fisheye image coordinates is established.
  24. The corrected 3D special-effect display system according to claim 23, characterized in that the dynamic adjustment module comprises a model creation sub-module and a mapping sub-module;
    wherein:
    the model creation sub-module is configured, when the 3D model is created, to adopt a function system with adjustable parameters and dynamically adjust the parameter values through user operations, creating a continuously variable 3D model;
    the mapping sub-module is configured, when the mapping relationship between model vertices and fisheye image coordinates is established, to make the mapping relationship interactive, adopting a function system with adjustable parameters and dynamically adjusting the parameter values, creating a continuously variable mapping relationship between model vertices and fisheye image coordinates.
  25. The corrected 3D special-effect display system according to claim 23 or 24, characterized by further comprising a sharing module configured, when the 3D panoramic special-effect model is shared, to upload the 3D model data and the fisheye image serving as the texture to a server, which embeds them into HTML5 code and generates a network link for access.
  26. A storage medium, characterized by comprising a storage medium storing the modules of the corrected 3D special-effect display system for panoramic images according to any one of claims 23 to 25.
PCT/CN2016/110631 2016-01-26 2016-12-19 Corrected 3D display method, system and device for panoramic images WO2017128887A1 (zh)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201610049506.0A CN105957048A (zh) 2016-01-26 2016-01-26 3D panoramic display method and system for images taken by a fisheye lens
CN201610049506.0 2016-01-26
CN201610173465.6 2016-03-24
CN201610173465.6A CN105787951B (zh) 2016-03-24 2016-03-24 3D special-effect panoramic display method and system for images taken by a fisheye lens

Publications (1)

Publication Number Publication Date
WO2017128887A1 true WO2017128887A1 (zh) 2017-08-03

Family

ID=59397333

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/110631 WO2017128887A1 (zh) 2016-01-26 2016-12-19 Corrected 3D display method, system and device for panoramic images

Country Status (1)

Country Link
WO (1) WO2017128887A1 (zh)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6252603B1 (en) * 1992-12-14 2001-06-26 Ford Oxaal Processes for generating spherical image data sets and products made thereby
CN103617606A (zh) * 2013-11-26 2014-03-05 中科院微电子研究所昆山分所 用于辅助驾驶的车辆多视角全景生成方法
CN103996172A (zh) * 2014-05-08 2014-08-20 东北大学 一种基于多步校正的鱼眼图像校正方法
CN104835117A (zh) * 2015-05-11 2015-08-12 合肥工业大学 基于重叠方式的球面全景图生成方法
CN105137705A (zh) * 2015-08-14 2015-12-09 太微图影(北京)数码科技有限公司 一种虚拟球幕的创建方法和装置
CN105957048A (zh) * 2016-01-26 2016-09-21 优势拓展(北京)科技有限公司 鱼眼镜头拍摄图像的3d全景显示方法和系统
CN105787951A (zh) * 2016-03-24 2016-07-20 优势拓展(北京)科技有限公司 鱼眼镜头拍摄图像的3d特效全景显示方法和系统

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110838163A (zh) * 2018-08-15 2020-02-25 Texture mapping processing method and device
CN110838163B (zh) * 2018-08-15 2024-02-02 Texture mapping processing method and device
CN110348138A (zh) * 2019-07-15 2019-10-18 Method, device and storage medium for generating a real underground roadway model in real time
CN110348138B (zh) * 2019-07-15 2023-04-18 Method, device and storage medium for generating a real underground roadway model in real time
CN110619669B (zh) * 2019-09-19 2023-03-28 Fisheye image rendering system and method supporting multiple graphic styles
CN110619669A (zh) * 2019-09-19 2019-12-27 Fisheye image rendering system and method supporting multiple graphic styles
CN110930299A (zh) * 2019-12-06 2020-03-27 Circular fisheye video display scheme based on hemispherical expansion
CN113112412A (zh) * 2020-01-13 2021-07-13 Method and device for generating a vertical correction matrix, and computer-readable storage medium
CN113112412B (zh) * 2020-01-13 2024-03-19 Method and device for generating a vertical correction matrix, and computer-readable storage medium
CN111429382B (zh) * 2020-04-10 2024-01-19 Panoramic image correction method and device, and computer storage medium
CN111429382A (zh) * 2020-04-10 2020-07-17 Panoramic image correction method and device, and computer storage medium
CN113034350A (zh) * 2021-03-24 2021-06-25 Vegetation model processing method and device
CN113112581A (zh) * 2021-05-13 2021-07-13 Texture map generation method, device, equipment and storage medium for 3D models

Similar Documents

Publication Publication Date Title
WO2017128887A1 (zh) Corrected 3D display method, system and device for panoramic images
US10957011B2 (en) System and method of capturing and rendering a stereoscopic panorama using a depth buffer
Attal et al. MatryODShka: Real-time 6DoF video view synthesis using multi-sphere images
US11869205B1 (en) Techniques for determining a three-dimensional representation of a surface of an object from a set of images
US11076142B2 (en) Real-time aliasing rendering method for 3D VR video and virtual three-dimensional scene
US11257283B2 (en) Image reconstruction method, system, device and computer-readable storage medium
US10096157B2 (en) Generation of three-dimensional imagery from a two-dimensional image using a depth map
US11170561B1 (en) Techniques for determining a three-dimensional textured representation of a surface of an object from a set of images with varying formats
US20080246759A1 (en) Automatic Scene Modeling for the 3D Camera and 3D Video
US11812009B2 (en) Generating virtual reality content via light fields
US20220398705A1 (en) Neural blending for novel view synthesis
CN106780759A (zh) Method, device and VR system for constructing a stereoscopic panorama of a scene from pictures
CN105787951A (zh) 3D special-effect panoramic display method and system for images taken by a fisheye lens
EP3057316B1 (en) Generation of three-dimensional imagery to supplement existing content
WO2009068942A1 (en) Method and system for processing of images
JPWO2018135052A1 (ja) Image generation device and image display control device
CN110060349A (zh) Method for expanding the field of view of an augmented-reality head-mounted display device
CN116708862A (zh) Virtual background generation method for live-streaming rooms, computer device and storage medium
US20220253975A1 (en) Panoramic presentation methods and apparatuses
US20230196658A1 (en) Enclosed multi-view visual media representation
WO2012157516A1 (ja) Video presentation system, video presentation method, program, and recording medium
EP4381369A1 (en) Portal view for content items
WO2023049087A1 (en) Portal view for content items
CN114339120A (zh) Immersive video conference system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16887758

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16887758

Country of ref document: EP

Kind code of ref document: A1