WO2019024935A1 - Panoramic image generation method and device

Panoramic image generation method and device

Info

Publication number
WO2019024935A1
Authority
WO
WIPO (PCT)
Prior art keywords: point, grid, pixel, plane, determining
Application number: PCT/CN2018/098634
Other languages: English (en), French (fr)
Inventor: 王泽文
Original Assignee: 杭州海康威视数字技术股份有限公司 (Hangzhou Hikvision Digital Technology Co., Ltd.)
Application filed by 杭州海康威视数字技术股份有限公司 (Hangzhou Hikvision Digital Technology Co., Ltd.)
Priority to US16/635,763 (US11012620B2)
Priority to EP18841194.6 (EP3664443B1)
Publication of WO2019024935A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/698: Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/003: Reconstruction from projections, e.g. tomography
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00: Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22: Real-time viewing arrangements for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/23: Real-time viewing arrangements for viewing an area outside the vehicle with a predetermined field of view
    • B60R1/27: Real-time viewing arrangements for viewing an area outside the vehicle with a predetermined field of view providing all-round vision, e.g. using omnidirectional cameras
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00: Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/31: Real-time viewing arrangements providing stereoscopic vision
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/003: Reconstruction from projections, e.g. tomography
    • G06T11/006: Inverse problem, transformation from projection-space into object-space, e.g. transform methods, back-projection, algebraic methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/04: Texture mapping
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/80: Geometric correction
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106: Processing image signals
    • H04N13/111: Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation

Definitions

  • the present application relates to the field of image processing technologies, and in particular, to a method and device for generating a panoramic image.
  • panoramic images have demonstrated great value and advantages in many fields.
  • the driver is provided with a 360-degree panoramic image around the vehicle, so that the driver can more clearly perceive the surrounding environment, thereby improving the safety of driving.
  • the existing panoramic image generation scheme generally includes: capturing a two-dimensional planar image; performing distortion correction on the two-dimensional planar image to obtain a distortion correction map; transforming the distortion correction map into a bird's-eye view through a bird's-eye-view transformation; and texture-mapping the bird's-eye view onto a preset stereo model to obtain a panoramic image. This requires generating and storing a distortion correction map and a bird's-eye view, which occupies considerable storage resources.
  • An object of the embodiments of the present application is to provide a method and an apparatus for generating a panoramic image to save storage resources.
  • the embodiment of the present application discloses a method for generating a panoramic image, including: acquiring a two-dimensional planar image;
  • determining, according to a pre-established mapping relationship, the pixel point in the two-dimensional planar image to which each grid point in a preset stereo model is mapped; wherein the mapping relationship includes the correspondence between each grid point in the preset stereo model and a pixel point in a two-dimensional planar image;
  • for each grid plane in the preset stereo model, determining the pixel area formed by the pixel points to which the grid points in the grid plane are mapped as the pixel area corresponding to the grid plane, one grid plane being formed by a first preset number of grid points; and, for each grid plane, rendering with the pixel area corresponding to the grid plane to obtain a panoramic image.
  • the process of establishing the mapping relationship may include:
  • the mapping relationship is established according to each grid point and its corresponding two-dimensional plane pixel point.
  • the step of converting the intersection point into a bird's eye view pixel point may include:
  • the intersection is converted into a bird's-eye-view pixel point according to a preset conversion coefficient.
  • the step of determining a grid point in the preset stereo model may include:
  • the step of establishing the mapping relationship according to each of the grid points and the corresponding two-dimensional plane pixel points may include:
  • the mapping between the three-dimensional coordinate value of the grid point and the pixel coordinate value of the corresponding two-dimensional plane pixel is established by using the longitude value and the latitude value of the grid point as indexes;
  • the step of determining, according to the pre-established mapping relationship, the mapping of each of the grid points in the preset stereo model to the pixel points in the two-dimensional plane image may include:
  • the step of rendering, for each grid plane, with the pixel area corresponding to the grid plane to obtain a panoramic image may include:
  • Each grid plane is used in turn as a plane to be rendered
  • the position to be rendered is rendered with the target area to obtain a panoramic image.
  • the step of dividing the latitude and longitude in the preset stereo model may include:
  • the process of establishing the mapping relationship may include:
  • the mapping relationship is established according to each grid point and its corresponding pixel point in the two-dimensional planar image sample.
  • the method may further include:
  • the effective mesh plane is: a mesh plane composed of grid points other than the invalid point;
  • the step of rendering, for each grid plane, with the pixel area corresponding to the grid plane to obtain a panoramic image includes:
  • the pixel area corresponding to the effective mesh plane is rendered to obtain a panoramic image.
  • the step of acquiring a two-dimensional planar image may include:
  • the step of determining, for each grid plane in the preset stereo model, the pixel area formed by the pixel points to which the grid points in the grid plane are mapped as the pixel area corresponding to the grid plane may include:
  • determining, for each target mesh plane in the preset stereo model, the pixel region formed by the pixel points to which the grid points in the target mesh plane are mapped as the target pixel region corresponding to the target mesh plane; wherein a target mesh plane is composed of a first predetermined number of target grid points;
  • the step of rendering, for each grid plane, with the pixel area corresponding to the grid plane to obtain a panoramic image may include:
  • the target pixel region corresponding to the target mesh plane is rendered to obtain a panoramic image of the current viewpoint.
  • a panoramic image generating apparatus including:
  • a first determining module configured to determine, according to a pre-established mapping relationship, the pixel point in the two-dimensional planar image to which each grid point in the preset stereo model is mapped; wherein the mapping relationship includes the correspondence between each grid point in the preset stereo model and a pixel point in the two-dimensional planar image;
  • a second determining module configured to determine, for each grid plane in the preset stereo model, the pixel region formed by the pixel points to which the grid points in the grid plane are mapped as the pixel region corresponding to the grid plane; wherein a grid plane is composed of a first predetermined number of grid points;
  • a rendering module configured to render, for each grid plane, with the pixel area corresponding to the grid plane to obtain a panoramic image.
  • the device may further include:
  • a third determining module configured to determine a grid point in the preset stereo model
  • a fourth determining module configured to determine, for each determined grid point, a projection line between the grid point and a preset projection point; wherein the projection point is located above a bird's-eye-view plane of the preset stereo model, and the projection line starts from the projection point and passes through the grid point;
  • a fifth determining module configured to determine an intersection of the projection line and the bird's-eye view plane
  • a first conversion module configured to convert the intersection point into a bird's eye view pixel point
  • An inverse transform module configured to perform inverse perspective transformation on the bird's-eye-view pixel point according to an external parameter of the camera that collects the two-dimensional planar image, to obtain a distortion corrected pixel point;
  • An inverse operation module configured to perform a distortion correction inverse operation on the distortion correction pixel according to an internal parameter of the camera to obtain a two-dimensional planar pixel point;
  • the first establishing module is configured to establish the mapping relationship according to each grid point and its corresponding two-dimensional plane pixel.
  • the first conversion module is specifically configured to:
  • the intersection is converted into a bird's-eye-view pixel point according to a preset conversion coefficient.
  • the third determining module may include:
  • Determining a sub-module configured to determine, according to the division result, each grid point in the model, the longitude value and latitude value of each grid point, and the three-dimensional coordinate value of each grid point in the preset stereo model;
  • the first establishing module may be specifically configured to:
  • the mapping between the three-dimensional coordinate value of the grid point and the pixel coordinate value of the corresponding two-dimensional plane pixel is established by using the longitude value and the latitude value of the grid point as indexes;
  • the first determining module may be specifically configured to:
  • Determining, based on the index, the current grid point to be mapped in turn; determining, for each current grid point to be mapped, the three-dimensional coordinate value of the current grid point and the corresponding pixel coordinate value, the determined pixel coordinate value being the coordinate value of the pixel point in the two-dimensional planar image to which the current grid point is mapped;
  • the rendering module can be specifically used to:
  • the dividing submodule is specifically configured to:
  • the device may further include:
  • a distortion correction operation module configured to perform a distortion correction operation on a pixel point in the two-dimensional planar image sample according to an internal parameter of the camera that acquires the two-dimensional planar image, to obtain a distortion correction pixel point;
  • a perspective transformation module configured to perform perspective transformation on the distortion correction pixel according to an external parameter of the camera, to obtain a bird's eye view pixel point;
  • a second conversion module configured to convert the bird's-eye view pixel point into a world coordinate point
  • a sixth determining module configured to determine a projection line between the world coordinate point and a projection point of the preset stereo model, where the projection point is preset;
  • a seventh determining module configured to determine an intersection of the projection line and the preset stereo model as a grid point
  • the second establishing module establishes the mapping relationship according to each grid point and its corresponding pixel point in the two-dimensional planar image sample.
  • the device may further include:
  • a marking module for marking a grid point that fails to map as an invalid point
  • the second determining module may be specifically configured to:
  • the effective mesh plane is: a mesh plane composed of grid points other than the invalid point;
  • the rendering module can be specifically used to:
  • the pixel area corresponding to the effective mesh plane is rendered to obtain a panoramic image.
  • the device may further include:
  • the obtaining module may be specifically configured to:
  • the first determining module may be specifically configured to:
  • the second determining module may be specifically configured to:
  • determining, for each target mesh plane in the preset stereo model, the pixel region formed by the pixel points to which the grid points in the target mesh plane are mapped as the target pixel region corresponding to the target mesh plane; wherein a target mesh plane is composed of a first predetermined number of target grid points;
  • the rendering module can be specifically used to:
  • the target pixel region corresponding to the target mesh plane is rendered to obtain a panoramic image of the current viewpoint.
  • an embodiment of the present application further discloses an electronic device, including a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory complete communication with each other through a communication bus;
  • a memory for storing a computer program
  • the processor, when executing a program stored on the memory, implements any of the above panoramic image generation methods.
  • an embodiment of the present application further discloses a computer readable storage medium, where the computer readable storage medium stores a computer program, and the computer program, when executed by a processor, implements any of the above panoramic image generation methods.
  • an embodiment of the present application further discloses an executable program code, which is configured to be executed to implement any of the above panoramic image generation methods.
  • according to the pre-established mapping relationship, each grid point in the preset stereo model is mapped to a pixel point in the two-dimensional planar image; a plurality of grid points form a grid plane and, correspondingly, the plurality of pixel points to which they are mapped form a pixel area; for each grid plane, rendering is performed with the pixel area corresponding to the grid plane, thereby obtaining a panoramic image. It can be seen that this scheme does not need to generate a distortion correction map or a bird's-eye view, which saves storage resources.
  • FIG. 1 is a first schematic flowchart of a method for generating a panoramic image according to an embodiment of the present application;
  • FIG. 2 is a schematic diagram of a stereo model according to an embodiment of the present application;
  • FIG. 3 is a schematic diagram of a storage layout of the mapping relationship according to an embodiment of the present application;
  • FIG. 4 is a second schematic flowchart of a method for generating a panoramic image according to an embodiment of the present application;
  • FIG. 5 is a schematic architecture diagram for generating a roamable panoramic video according to an embodiment of the present application;
  • FIG. 6 is a schematic diagram of a panoramic image at a certain viewpoint according to an embodiment of the present application;
  • FIG. 7 is a schematic structural diagram of a panoramic image generating apparatus according to an embodiment of the present application;
  • FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • the embodiment of the present application provides a method and device for generating a panoramic image.
  • the method and device can be applied to various devices having image processing functions, and are not limited in specific applications.
  • a method for generating a panoramic image provided by an embodiment of the present application is described in detail below.
  • FIG. 1 is a first schematic flowchart of a method for generating a panoramic image according to an embodiment of the present application, including:
  • S101: Acquire a two-dimensional planar image.
  • S102: Determine, according to a pre-established mapping relationship, the pixel point in the two-dimensional planar image to which each grid point in the preset stereo model is mapped; wherein the mapping relationship includes the correspondence between each grid point in the preset stereo model and a pixel point in a two-dimensional planar image.
  • S103: For each grid plane in the preset stereo model, determine the pixel region formed by the pixel points to which the grid points in the grid plane are mapped as the pixel region corresponding to the grid plane; wherein one grid plane is composed of a first preset number of grid points.
  • S104: For each grid plane, perform rendering with the pixel area corresponding to the grid plane to obtain a panoramic image.
  • according to the pre-established mapping relationship, each grid point in the preset stereo model is mapped to a pixel point in the two-dimensional planar image, and a plurality of grid points constitute a grid plane; correspondingly, the plurality of pixel points to which they are mapped form a pixel area. For each grid plane, rendering is performed with the pixel area corresponding to the grid plane, thereby obtaining a panoramic image; it can be seen that this scheme does not need to generate a distortion correction map or a bird's-eye view, which saves storage resources.
  • the execution entity (the device that performs the present solution, hereinafter referred to as the device) may have an image collection function, and the two-dimensional planar image acquired in S101 may be collected by the device.
  • the device can also be communicatively coupled with other cameras and acquire two-dimensional planar images acquired by other cameras.
  • the acquired two-dimensional planar image may be a single image or multiple images; if multiple, they may be images of different viewpoints of the same scene.
  • the two-dimensional planar image may also be a frame image in a video, or multiple frame images in multiple videos; this is not limited here.
  • S102: Determine, according to the pre-established mapping relationship, the pixel point in the two-dimensional planar image to which each grid point in the preset stereo model is mapped; wherein the mapping relationship includes the correspondence between each grid point in the preset stereo model and a pixel point in a two-dimensional planar image.
  • two ways of establishing the mapping relationship are provided: one establishes the mapping relationship in the reverse direction, the other in the forward direction. The reverse manner is described first.
  • in the reverse manner, the mapping relationship is established according to each grid point and its corresponding two-dimensional plane pixel point.
  • the partial sphere in FIG. 2 is the preset stereo model
  • the horizontal plane tangential to the partial sphere is a bird's-eye view plane of the preset stereo model.
  • the projection point may be located at the sphere center of the partial sphere, or at another position on the perpendicular from the sphere center to the bird's-eye-view plane that lies above the bird's-eye-view plane.
  • the point Q is a grid point in the preset stereo model, and a projection line between the point Q and the projection point G is determined, and the projection line intersects the bird's-eye view plane at a point V.
  • the intersection point V is converted into a bird's eye view pixel point.
  • the intersection point can be converted into a bird's eye view pixel point according to a preset conversion coefficient.
  • the conversion coefficient can be understood as the coefficient for converting a coordinate point in the world coordinate system into a pixel point in a bird's-eye view. It should be noted that, in this embodiment, a bird's-eye view is not actually generated; the conversion coefficient is merely used to convert the intersection point.
  • the coordinate system in which the stereo model in FIG. 2 is located may be the world coordinate system, or may be another coordinate system for which a conversion relationship with the world coordinate system is established; both are reasonable.
  • suppose the bird's-eye-view pixel point obtained by converting the intersection V is v, the pixel coordinate value of v is (v_x, v_y), and the coordinate value of V is (V_x, V_y); then v_x = k * V_x and v_y = k * V_y, where k is the preset conversion coefficient.
  • the bird's-eye view is not actually generated; in the process of establishing the mapping relationship, for convenience of description, the pixel point obtained by converting the intersection of the projection line and the bird's-eye-view plane is called a bird's-eye-view pixel point.
  • the bird's-eye-view pixel point is subjected to inverse perspective transformation to obtain a distortion-corrected pixel point, which can be understood as a pixel point in a distortion correction map.
  • the distortion correction map is likewise not actually generated; in the process of establishing the mapping relationship, for convenience of description, the pixel point obtained by the inverse perspective transformation is called a distortion-corrected pixel point.
  • the distortion-corrected pixel point is then subjected to the inverse operation of distortion correction, obtaining a two-dimensional plane pixel point: the pixel point in the two-dimensional planar image to which the grid point Q is mapped, that is, the two-dimensional plane pixel point corresponding to the grid point Q; assume its coordinate value is (p_x, p_y).
  • in this way, the two-dimensional plane pixel points corresponding to the respective grid points in the stereo model of FIG. 2 can be determined, so that the mapping relationship can be established; it includes the correspondence between each grid point in the stereo model and a pixel point in the two-dimensional planar image, for example the correspondence between (q_x, q_y, q_z) and (p_x, p_y).
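  • for concreteness, the following is a minimal Python/NumPy sketch of this reverse construction under assumed conventions: the bird's-eye-view plane is taken as z = 0, the perspective transform is modeled as a 3x3 homography H (so the inverse perspective transform is H_inv), and lens distortion is a simple two-coefficient radial model; these conventions and all names in the code are illustrative assumptions, not details fixed by the embodiment.

```python
import numpy as np

def grid_point_to_image_pixel(Q, G, k, H_inv, fx, fy, cx, cy, k1, k2):
    """Map one grid point Q of the stereo model to a source-image pixel.

    Q, G : 3D coordinates of the grid point and the preset projection point
           (G lies above the assumed bird's-eye-view plane z = 0).
    k    : preset conversion coefficient (world point -> bird's-eye pixel).
    H_inv: inverse of an assumed 3x3 perspective homography derived from
           the camera extrinsic parameters.
    fx, fy, cx, cy, k1, k2: camera intrinsics and radial distortion terms.
    """
    # 1. Intersect the projection line from G through Q with the plane z = 0.
    t = G[2] / (G[2] - Q[2])
    V = G + t * (Q - G)                        # intersection point V

    # 2. Convert the intersection into a bird's-eye-view pixel: v = k * V.
    v = np.array([k * V[0], k * V[1], 1.0])

    # 3. Inverse perspective transformation -> distortion-corrected pixel.
    u = H_inv @ v
    u = u[:2] / u[2]

    # 4. Inverse of distortion correction: re-apply lens distortion
    #    (simple radial model, assumed for illustration only).
    x, y = (u[0] - cx) / fx, (u[1] - cy) / fy
    r2 = x * x + y * y
    d = 1.0 + k1 * r2 + k2 * r2 * r2
    return np.array([fx * x * d + cx, fy * y * d + cy])   # (p_x, p_y)
```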
  • the latitude and longitude may be first divided in the preset three-dimensional model, and then each grid point in the model and the longitude value of each grid point are determined according to the division result. And latitude values, and three-dimensional coordinate values of each grid point in the preset stereo model.
  • the established mapping relationship includes the longitude value and latitude value of each grid point, the three-dimensional coordinate value of the grid point, and the pixel coordinate value of the two-dimensional plane pixel point corresponding to the grid point.
  • the longitude value and latitude value of the grid point can be used as indexes to establish the mapping relationship.
  • the shape of the preset stereo model and its position in the coordinate system are preset. Still taking FIG. 2 as an example, once the sphere center coordinates, the sphere radius, and the height of the partial sphere (the vertical distance from the bird's-eye-view plane to the highest edge of the partial sphere) are set, the shape and position of the model are determined. In this way, the three-dimensional coordinate values of the grid points on the model surface can be determined. In addition, by dividing the latitude and longitude on the model surface, the longitude and latitude values of each grid point can be determined.
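  • as an illustration of how the latitude and longitude indexes of a grid point can determine its three-dimensional coordinate value once the sphere center, radius and height are fixed, a hedged sketch follows; the tangency convention and the linear angle spacing are assumptions, since the embodiment only states that the model's shape and position are preset.

```python
import numpy as np

def grid_point_xyz(i, j, W, H, R, h_top):
    """Hypothetical parameterization of the partial sphere of FIG. 2.

    The sphere (radius R) is assumed tangent to the bird's-eye-view plane
    z = 0 at the origin, so its center is (0, 0, R). Longitude index
    i in [0, W) spans 360 degrees; latitude index j in [0, H) runs from the
    tangent point up to the model's highest edge at height h_top (h_top <= 2R).
    """
    lam = 2.0 * np.pi * i / W                  # longitude angle
    phi_max = np.arccos(1.0 - h_top / R)       # latitude angle of the top edge
    phi = phi_max * j / max(H - 1, 1)          # latitude angle of this grid point
    x = R * np.sin(phi) * np.cos(lam)
    y = R * np.sin(phi) * np.sin(lam)
    z = R * (1.0 - np.cos(phi))                # height above the bird's-eye plane
    return np.array([x, y, z])
```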
  • S102 may include: determining, according to the index, a current grid point to be mapped; determining, for each current grid point to be mapped, a three-dimensional coordinate value of the current grid point to be mapped and corresponding a pixel coordinate value, the determined pixel coordinate value being a coordinate value of the current grid point mapped to a pixel point in the two-dimensional planar image.
  • the established mapping relationship includes the correspondence among (i, j), (q_x, q_y, q_z) and (p_x, p_y), where (i, j) is the longitude and latitude of the grid point and can be used as an index.
  • the current grid point to be mapped can be determined sequentially in the order of the latitude and longitude of each grid point, and the three-dimensional coordinate value of the current grid point to be mapped and the corresponding pixel coordinate value are determined, so as to obtain the pixel point in the two-dimensional planar image to which it is mapped.
  • This embodiment can be applied to a sphere model, or a sphere-like model, such as the partial sphere model in Fig. 2, in which the three-dimensional coordinate values of the grid points of the model surface are not linearly arranged.
  • in this embodiment, the grid points are determined according to latitude and longitude. Compared with determining grid points according to three-dimensional coordinate values, this allows the distribution, and in particular the density, of the grid points to be adjusted more easily.
  • the density of the grid points reflects the resolution of the mapping relationship: the more grid points there are, the denser they are and the higher the resolution of the established mapping relationship. Moreover, the resolution of the mapping relationship is adjustable regardless of the resolution of the image.
  • dividing the latitude and longitude in the preset stereo model may include: determining the latitude interval of each viewpoint corresponding to the preset stereo model; summing the latitude intervals of the viewpoints; removing the overlapping latitude intervals between the viewpoints from the sum to obtain the latitude interval of the preset stereo model; determining the longitude interval of each viewpoint as the longitude interval of the preset stereo model; and dividing the preset stereo model into latitude and longitude according to the longitude interval and latitude interval of the preset stereo model.
  • the model in FIG. 2 reflects a 360-degree viewing angle, while a single two-dimensional planar image acquired in S101 is usually not a 360-degree image. Therefore, multiple two-dimensional planar images can be acquired in S101; these images are images at the respective viewpoints corresponding to the stereo model and can be stitched into a 360-degree image. Assume that four two-dimensional planar images are acquired in S101, and the viewing angle of each is greater than or equal to 90 degrees.
  • the splicing process is horizontal splicing: the partial sphere of FIG. 2 is divided into four parts, each part corresponding to one image, and the four images spliced horizontally coincide with the viewing angle of the sphere.
  • horizontal splicing can be understood as adding up the widths of the four images (with the overlapping portions removed) while the height remains unchanged. The latitude is divided based on the height of the sphere in FIG. 2, and the longitude is divided based on the width of the plane obtained by stretching the sphere surface of FIG. 2 flat. Thus the latitude intervals of the four images, with the overlapping portions removed, give the latitude interval of the partial sphere in FIG. 2; the longitude intervals of the four images are identical or approximately identical, and this longitude interval is the longitude interval of the partial sphere in FIG. 2.
  • after being established, the mapping relationship is stored.
  • the storage layout of the mapping relationship can be as shown in FIG. 3. w1, w2, w3 and w4 are the latitude intervals of the viewpoints corresponding to the stereo model (viewpoint 1, viewpoint 2, viewpoint 3 and viewpoint 4),
  • between which there are overlapping portions d (assuming all three overlapping portions equal d).
  • the latitude interval of the stereo model is therefore w = w1 + w2 + w3 + w4 - 3d.
  • h is the longitude interval of each viewpoint, and the longitude interval of the stereo model is also h.
  • suppose the longitude value of a point in FIG. 3 is i and its latitude value is j; the position (i, j) stores the three-dimensional coordinate value of the grid point and the corresponding pixel coordinate value, for example [(q_x, q_y, q_z), (p_x, p_y)].
  • the "viewpoint” can be used to describe the position of the virtual observer, and the “viewpoint” can include information such as the camera position and the angle of view of the image captured by the camera.
  • Each viewpoint in FIG. 3 can correspond to a camera.
  • in some related schemes, each camera corresponds to one mapping table.
  • in this embodiment, multiple cameras correspond to one mapping relationship, which on the one hand saves storage resources and on the other hand makes determining the pixel points to which grid points are mapped more efficient, since looking up one mapping relationship is faster than looking up multiple mapping tables.
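  • a plausible in-memory layout of this shared table is sketched below in Python/NumPy; the interval sizes are made-up example values and the five-float cell format is an assumption, the embodiment fixing only that position (i, j) stores [(q_x, q_y, q_z), (p_x, p_y)].

```python
import numpy as np

# Example interval sizes (rows of latitude, columns of longitude) - assumed.
w1, w2, w3, w4, d = 100, 100, 100, 100, 10
h = 360
w = w1 + w2 + w3 + w4 - 3 * d          # latitude interval of the stereo model

# One table for all viewpoints; NaN marks cells not yet (or never) filled.
mapping = np.full((w, h, 5), np.nan, dtype=np.float32)  # [qx, qy, qz, px, py]

def store(i, j, q, p):
    mapping[j, i, :3] = q   # three-dimensional coordinate value of the grid point
    mapping[j, i, 3:] = p   # pixel coordinate value in the 2D planar image

def lookup(i, j):
    cell = mapping[j, i]
    return cell[:3], cell[3:]              # ((qx, qy, qz), (px, py))
```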
  • next, the manner of establishing the mapping relationship in the forward direction is described.
  • a pixel point (p_x, p_y) in the two-dimensional planar image sample is subjected to distortion correction according to the camera intrinsic parameters to obtain a distortion-corrected pixel point; the distortion-corrected pixel point is subjected to perspective transformation according to the camera extrinsic parameters to obtain a bird's-eye-view pixel point (v_x, v_y); the bird's-eye-view pixel point (v_x, v_y) is converted into a coordinate value (V_x, V_y) in the world coordinate system according to the preset conversion coefficient k; and (V_x, V_y) is projected onto the stereo model to obtain the three-dimensional coordinate value (q_x, q_y, q_z) of the corresponding grid point. The established mapping relationship includes the correspondence between (q_x, q_y, q_z) and (p_x, p_y).
  • in the forward manner, pixel points are mapped to grid points, so the number of grid points included in the mapping relationship is tied to the number of pixel points; that is, the resolution of the mapping relationship is coupled to the resolution of the two-dimensional planar image. The reverse manner is therefore more convenient, allowing the resolution of the mapping relationship to be adjusted independently.
  • a pixel region formed by pixels corresponding to each grid point in the mesh plane is determined as a pixel region corresponding to the mesh plane.
  • one grid plane is composed of a first preset number of grid points.
  • S104 For each grid plane, perform rendering by using a pixel area corresponding to the grid plane to obtain a panoramic image.
  • the first preset number can be set according to actual conditions, such as three, four or five.
  • assume it is three: three grid points form a grid plane, and the three pixel points in the two-dimensional planar image to which they are mapped form a pixel area corresponding to that grid plane. In this way, each grid plane corresponds to one pixel area, and for each grid plane, rendering is performed with the corresponding pixel area to obtain a panoramic image.
  • the grid plane can be rendered with its pixel area through a GUI (Graphical User Interface).
  • in rendering, some pixel points can be selected in the pixel area, and the grid plane is rendered with the pixel values of the selected points; the number of pixels selected depends on the performance of the device.
  • as described above, the latitude and longitude of the grid point are used as the index in the established mapping relationship.
  • in this case, S104 may include: using each grid plane in turn as the plane to be rendered; determining the position to be rendered in the preset stereo model according to the three-dimensional coordinate values of the grid points contained in the plane to be rendered; determining the target area in the two-dimensional planar image according to the pixel coordinate values corresponding to the three-dimensional coordinate values of the grid points contained in the plane to be rendered; and rendering the position to be rendered with the target area to obtain a panoramic image.
  • specifically, the current grid point to be mapped may be determined sequentially in the order of the latitude and longitude of each grid point, and the three-dimensional coordinate value of the current grid point to be mapped and the corresponding pixel coordinate value determined. With the three-dimensional coordinate values of the grid points, the position of the plane to be rendered (the position to be rendered) can be determined in the stereo model; in addition, with the pixel coordinate values of the pixel points corresponding to the grid points, a target area can be determined in the two-dimensional planar image, and the target area is used to render the position to be rendered, thereby obtaining a panoramic image.
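  • the per-plane rendering loop might then look as follows; `draw_textured_polygon` stands in for whatever textured-primitive call the rendering backend actually provides (for example a GPU draw call) and is an assumed API rather than one named by the embodiment.

```python
def render_panorama(mesh_planes, mapping, image, renderer):
    """Render each grid plane with its corresponding pixel area (sketch).

    mesh_planes: list of grid planes, each a tuple of (i, j) indexes of its
                 grid points (the first preset number, e.g. 3 for triangles).
    mapping    : the (j, i) -> [qx, qy, qz, px, py] table sketched above.
    image      : the source two-dimensional planar image used as a texture.
    renderer   : assumed backend exposing draw_textured_polygon().
    """
    for plane in mesh_planes:
        # Position to be rendered in the stereo model (3D coordinates).
        verts3d = [mapping[j, i, :3] for (i, j) in plane]
        # Target area in the two-dimensional planar image (texture coords).
        tex2d = [mapping[j, i, 3:] for (i, j) in plane]
        renderer.draw_textured_polygon(verts3d, tex2d, image)
```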
  • in one implementation, after S102, the method further includes: marking the grid points that fail to map as invalid points. In this implementation, S103 may include: for each valid mesh plane in the preset stereo model, determining the pixel area formed by the pixel points to which the grid points in the valid mesh plane are mapped as the pixel area corresponding to the valid mesh plane, where a valid mesh plane is a mesh plane composed of grid points other than the invalid points; and S104 includes: for each valid mesh plane, rendering with the pixel area corresponding to the valid mesh plane to obtain a panoramic image.
  • if the number of grid points in the stereo model is larger than the number of pixel points in the acquired two-dimensional planar image, some grid points will fail to map.
  • a grid point that fails to map has no corresponding pixel point in the two-dimensional planar image and need not participate in subsequent rendering; therefore, such a grid point can be marked as an invalid point, the pixel area corresponding to a grid plane containing an invalid point is no longer determined, and that grid plane is no longer rendered.
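  • with the NaN marker assumed in the table sketch above, filtering out grid planes that contain invalid points could look like this; the marker itself is an assumption, the embodiment saying only that such points are marked as invalid.

```python
import numpy as np

def is_valid_plane(plane, mapping):
    # A grid plane is rendered only if none of its grid points failed to map,
    # i.e. none of their pixel coordinates carry the assumed NaN marker.
    return all(not np.isnan(mapping[j, i, 3:]).any() for (i, j) in plane)

# Usage sketch: keep only valid grid planes before rendering.
# valid_planes = [p for p in mesh_planes if is_valid_plane(p, mapping)]
```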
  • S101 may acquire each frame image of a video.
  • by applying the embodiment of FIG. 1 to each frame image, a dynamic panoramic video can be generated.
  • according to the pre-established mapping relationship, each grid point in the preset stereo model is mapped to a pixel point in the two-dimensional planar image, and a plurality of grid points constitute a grid plane; correspondingly, the plurality of pixel points to which they are mapped form a pixel area. For each grid plane, rendering is performed with the pixel area corresponding to the grid plane, thereby obtaining a panoramic image; it can be seen that this scheme does not need to generate a distortion correction map or a bird's-eye view, which saves storage resources.
  • FIG. 4 is a second schematic flowchart of a method for generating a panoramic image according to an embodiment of the present application, including:
  • S401: Acquire two-dimensional planar images of multiple viewpoints.
  • S402: Determine the current viewpoint and the target grid points of the preset stereo model at the current viewpoint.
  • S403: Determine, according to the pre-established mapping relationship, the pixel points in the two-dimensional planar images to which the target grid points are mapped.
  • S404: For each target mesh plane in the preset stereo model, determine the pixel region formed by the pixel points to which the grid points in the target mesh plane are mapped as the target pixel region corresponding to the target mesh plane; wherein a target mesh plane is composed of a first preset number of target grid points.
  • S405: For each target mesh plane, perform rendering with the target pixel region corresponding to the target mesh plane to obtain a panoramic image of the current viewpoint.
  • applying the embodiment of FIG. 1, a 360-degree panoramic image can be obtained; in some scenarios, however, only a panoramic image of a certain viewpoint needs to be generated.
  • applying the embodiment shown in FIG. 4, a panoramic image of a certain viewpoint can be generated.
  • specifically, the latitude range in the stereo model corresponding to each camera viewpoint can be determined, and the grid points within the range of the current viewpoint are determined as the target grid points.
  • the mesh plane formed by the target mesh points is referred to as a target mesh plane
  • the pixel region corresponding to the target mesh plane is referred to as a target pixel region.
  • the target mesh plane is rendered only by the target pixel area, and a panoramic image of the current viewpoint is obtained.
  • the current viewpoint can be switched according to actual needs; after switching, the embodiment of FIG. 4 can be applied again to generate a panoramic image of the new viewpoint.
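  • viewpoint-restricted rendering, reusing the table and renderer assumptions of the earlier sketches, might look as follows; `viewpoint_ranges`, mapping each viewpoint to a latitude-index range, is a hypothetical structure reflecting the statement that the latitude range corresponding to each camera viewpoint can be determined.

```python
def render_current_viewpoint(viewpoint, viewpoint_ranges, mesh_planes,
                             mapping, image, renderer):
    """Render only the target grid planes of the current viewpoint (sketch)."""
    lo, hi = viewpoint_ranges[viewpoint]       # latitude-index range (assumed)
    for plane in mesh_planes:
        if not all(lo <= j < hi for (_i, j) in plane):
            continue                           # not a target plane of this viewpoint
        verts3d = [mapping[j, i, :3] for (i, j) in plane]
        tex2d = [mapping[j, i, 3:] for (i, j) in plane]
        renderer.draw_textured_polygon(verts3d, tex2d, image)
```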
  • the two-dimensional planar images of multiple viewpoints acquired in S401 may be images from multiple videos.
  • for example, four in-vehicle cameras each transmit video to the device; the device receives the four videos, acquires from them four images of the same moment, and applies the embodiment of FIG. 4 to the four images.
  • applied frame by frame in this way, a dynamic panoramic video can be generated.
  • the architecture for generating a dynamic, roamable panoramic video can be as shown in FIG. 5: multiple in-vehicle cameras (in-vehicle camera 1, in-vehicle camera 2, ..., in-vehicle camera N) transmit the captured videos to the device.
  • the device can generate a panoramic video according to the pre-established mapping relationship, and can also update the viewpoint of the generated panoramic video according to the current viewpoint, that is, generate a roamable panoramic video to implement video roaming.
  • view 1 and view 2 in FIG. 6 can each be understood as a display area. If the current viewpoint is viewpoint 1, the dotted area in the stereo model is rendered to obtain a panoramic image of that viewpoint, and the panoramic image is displayed in view 1; similarly, if the current viewpoint is viewpoint 2, the corresponding dotted area in the stereo model is rendered to obtain a panoramic image of that viewpoint, and the panoramic image is displayed in view 2.
  • the two-dimensional planar images of multiple viewpoints in S401 may also be multiple images captured by vehicle-mounted surround-view cameras; the specific application scenario is not limited.
  • in the embodiment of FIG. 4, a panoramic image is generated only for the current viewpoint, which reduces the amount of calculation compared with generating a 360-degree panoramic image, and also enables panoramic video roaming, giving a better experience.
  • the embodiment of the present application further provides a panoramic image generating apparatus.
  • FIG. 7 is a schematic structural diagram of a panoramic image generating apparatus according to an embodiment of the present application, including:
  • An obtaining module 701, configured to acquire a two-dimensional plane image
  • a first determining module 702, configured to determine, according to a pre-established mapping relationship, the pixel point in the two-dimensional planar image to which each grid point in the preset stereo model is mapped; wherein the mapping relationship includes the correspondence between each grid point in the preset stereo model and a pixel point in the two-dimensional planar image;
  • a second determining module 703, configured to determine, for each grid plane in the preset stereo model, the pixel region formed by the pixel points to which the grid points in the grid plane are mapped as the pixel region corresponding to the grid plane; wherein a grid plane is composed of a first predetermined number of grid points;
  • the rendering module 704 is configured to render, for each grid plane, a pixel area corresponding to the grid plane to obtain a panoramic image.
  • the device may further include:
  • a third determining module configured to determine a grid point in the preset stereo model
  • a fourth determining module configured to determine, for each determined grid point, a projection line between the grid point and a preset projection point; wherein the projection point is located above a bird's-eye-view plane of the preset stereo model, and the projection line starts from the projection point and passes through the grid point;
  • a fifth determining module configured to determine an intersection of the projection line and the bird's-eye view plane
  • a first conversion module configured to convert the intersection point into a bird's eye view pixel point
  • An inverse transform module configured to perform inverse perspective transformation on the bird's-eye-view pixel point according to an external parameter of the camera that collects the two-dimensional planar image, to obtain a distortion corrected pixel point;
  • An inverse operation module configured to perform a distortion correction inverse operation on the distortion correction pixel according to an internal parameter of the camera to obtain a two-dimensional planar pixel point;
  • the first establishing module is configured to establish the mapping relationship according to each grid point and its corresponding two-dimensional plane pixel.
  • the first conversion module may be specifically configured to:
  • the intersection is converted into a bird's-eye-view pixel point according to a preset conversion coefficient.
  • the third determining module includes:
  • Determining a sub-module configured to determine, according to the division result, each grid point in the model, the longitude value and latitude value of each grid point, and the three-dimensional coordinate value of each grid point in the preset stereo model;
  • the first establishing module may be specifically configured to:
  • the mapping between the three-dimensional coordinate value of the grid point and the pixel coordinate value of the corresponding two-dimensional plane pixel is established by using the longitude value and the latitude value of the grid point as indexes;
  • the first determining module may be specifically configured to:
  • Determining, based on the index, the current grid point to be mapped in turn; determining, for each current grid point to be mapped, the three-dimensional coordinate value of the current grid point and the corresponding pixel coordinate value, the determined pixel coordinate value being the coordinate value of the pixel point in the two-dimensional planar image to which the current grid point is mapped;
  • the rendering module 704 can be specifically used to:
  • the dividing submodule may be specifically configured to:
  • the device may further include:
  • a distortion correction operation module configured to perform a distortion correction operation on a pixel point in the two-dimensional planar image sample according to an internal parameter of the camera that acquires the two-dimensional planar image, to obtain a distortion correction pixel point;
  • a perspective transformation module configured to perform perspective transformation on the distortion correction pixel according to an external parameter of the camera, to obtain a bird's eye view pixel point;
  • a second conversion module configured to convert the bird's-eye view pixel point into a world coordinate point
  • a sixth determining module configured to determine a projection line between the world coordinate point and a projection point of the preset stereo model, where the projection point is preset;
  • a seventh determining module configured to determine an intersection of the projection line and the preset stereo model as a grid point
  • a second establishing module, configured to establish the mapping relationship according to each grid point and its corresponding pixel point in the two-dimensional planar image sample.
  • the device may further include:
  • a marking module for marking a grid point that fails to map as an invalid point
  • the second determining module 703 is specifically configured to:
  • the effective mesh plane is: a mesh plane composed of grid points other than the invalid point;
  • the rendering module 704 can be specifically configured to:
  • the pixel area corresponding to the effective mesh plane is rendered to obtain a panoramic image.
  • the device may further include:
  • the obtaining module 701 is specifically configured to:
  • the first determining module 702 is specifically configured to:
  • the second determining module 703 is specifically configured to:
  • determining, for each target mesh plane in the preset stereo model, the pixel region formed by the pixel points to which the grid points in the target mesh plane are mapped as the target pixel region corresponding to the target mesh plane; wherein a target mesh plane is composed of a first predetermined number of target grid points;
  • the rendering module 704 can be specifically configured to:
  • the target pixel region corresponding to the target mesh plane is rendered to obtain a panoramic image of the current viewpoint.
  • the embodiment of the present application further provides an electronic device, including a processor and a memory, where the memory is configured to store a computer program, and the processor is configured to implement the following method steps when executing the program stored on the memory:
  • determining, according to a pre-established mapping relationship, the pixel point in the two-dimensional planar image to which each grid point in the preset stereo model is mapped; wherein the mapping relationship includes the correspondence between each grid point in the preset stereo model and a pixel point in a two-dimensional planar image;
  • the pixel area corresponding to the grid plane is rendered to obtain a panoramic image.
  • the mapping relationship is established according to each grid point and its corresponding two-dimensional plane pixel point.
  • the intersection is converted into a bird's-eye-view pixel point according to a preset conversion coefficient.
  • the mapping between the three-dimensional coordinate value of the grid point and the pixel coordinate value of the corresponding two-dimensional plane pixel is established by using the longitude value and the latitude value of the grid point as indexes;
  • Each grid plane is used as a plane to be rendered
  • the position to be rendered is rendered with the target area to obtain a panoramic image.
  • the mapping relationship is established according to each grid point and its corresponding pixel point in the two-dimensional planar image sample.
  • the effective mesh plane is: a mesh plane composed of grid points other than the invalid point;
  • the pixel area corresponding to the effective mesh plane is rendered to obtain a panoramic image.
  • determining, for each target mesh plane in the preset stereo model, the pixel region formed by the pixel points to which the grid points in the target mesh plane are mapped as the target pixel region corresponding to the target mesh plane; wherein a target mesh plane is composed of a first predetermined number of target grid points;
  • the target pixel region corresponding to the target mesh plane is rendered to obtain a panoramic image of the current viewpoint.
  • the embodiment of the present application further provides an electronic device, as shown in FIG. 8, including a processor 801, a communication interface 802, a memory 803 and a communication bus 804, where the processor 801, the communication interface 802 and the memory 803 communicate with each other through the communication bus 804.
  • the processor 801 is configured to implement any of the above-described panoramic image generation methods when executing a program stored on the memory 803.
  • the communication bus mentioned for the above electronic device may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus.
  • the communication bus can be divided into an address bus, a data bus, a control bus and so on; for ease of representation, only one thick line is shown in the figure, but this does not mean that there is only one bus or one type of bus.
  • the communication interface is used for communication between the above electronic device and other devices.
  • the memory may include a random access memory (RAM), and may also include a non-volatile memory (NVM), such as at least one disk memory.
  • optionally, the memory may also be at least one storage device located away from the aforementioned processor.
  • the above processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP) and the like; it may also be a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic device, or discrete hardware component.
  • according to the pre-established mapping relationship, each grid point in the preset stereo model is mapped to a pixel point in the two-dimensional planar image, and a plurality of grid points constitute a grid plane; correspondingly, the plurality of pixel points to which they are mapped form a pixel area. For each grid plane, rendering is performed with the pixel area corresponding to the grid plane, thereby obtaining a panoramic image; it can be seen that this scheme does not need to generate a distortion correction map or a bird's-eye view, which saves storage resources.
  • the embodiment of the present application further provides a computer readable storage medium, where the computer readable storage medium stores a computer program, and when the computer program is executed by the processor, the following steps are implemented:
  • determining, according to the pre-established mapping relationship, the pixel point in the two-dimensional planar image to which each grid point in the preset stereo model is mapped; wherein the mapping relationship includes the correspondence between each grid point in the preset stereo model and a pixel point in a two-dimensional planar image;
  • the pixel area corresponding to the grid plane is rendered to obtain a panoramic image.
  • the mapping relationship is established according to each grid point and its corresponding two-dimensional plane pixel point.
  • the intersection is converted into a bird's-eye-view pixel point according to a preset conversion coefficient.
  • the mapping between the three-dimensional coordinate value of the grid point and the pixel coordinate value of the corresponding two-dimensional plane pixel is stored according to the longitude value and the latitude value of the grid point;
  • Each grid plane is used as a plane to be rendered
  • the position to be rendered is rendered with the target area to obtain a panoramic image.
  • the mapping relationship is established according to each grid point and its corresponding pixel point in the two-dimensional planar image sample.
  • each of the grid points in the preset stereo model is mapped to the pixel points in the two-dimensional plane image, marking the grid point that fails the mapping as an invalid point;
  • the effective mesh plane is: a mesh plane composed of grid points other than the invalid point;
  • the pixel area corresponding to the effective mesh plane is rendered to obtain a panoramic image.
  • determining, for each target mesh plane in the preset stereo model, the pixel region formed by the pixel points to which the grid points in the target mesh plane are mapped as the target pixel region corresponding to the target mesh plane; wherein a target mesh plane is composed of a first predetermined number of target grid points;
  • the target pixel region corresponding to the target mesh plane is rendered to obtain a panoramic image of the current viewpoint.
  • the embodiment of the present application also discloses an executable program code, which is configured to be executed to implement any of the above panoramic image generation methods.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Algebra (AREA)
  • Mathematical Physics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Computer Graphics (AREA)
  • Mathematical Analysis (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Geometry (AREA)

Abstract

Embodiments of the present application provide a panoramic image generation method and device. The method includes: determining, according to a pre-established mapping relationship, the pixel points in a two-dimensional planar image to which the grid points in a preset stereo model are mapped, where a plurality of grid points form a grid plane and, correspondingly, the plurality of pixel points to which they are mapped form a pixel area; and, for each grid plane, rendering with the pixel area corresponding to the grid plane to obtain a panoramic image. It can be seen that this scheme does not need to generate a distortion correction map or a bird's-eye view, which saves storage resources.

Description

Panoramic image generation method and device
This application claims priority to Chinese patent application No. 201710655999.7, filed with the Chinese Patent Office on August 3, 2017 and entitled "Panoramic image generation method and device", which is incorporated herein by reference in its entirety.
Technical Field
The present application relates to the field of image processing technologies, and in particular to a panoramic image generation method and device.
Background
At present, panoramic images have demonstrated great value and advantages in many fields. For example, in the vehicle-mounted field, a driver is provided with a 360-degree panoramic image of the vehicle's surroundings, so that the driver can perceive the surrounding environment more clearly, thereby improving driving safety.
An existing panoramic image generation scheme generally includes: capturing a two-dimensional planar image; performing distortion correction on the two-dimensional planar image to obtain a distortion correction map; transforming the distortion correction map into a bird's-eye view through a bird's-eye-view transformation; and texture-mapping the bird's-eye view onto a preset stereo model to obtain a panoramic image.
In the above scheme, a distortion correction map and a bird's-eye view need to be generated, which occupies considerable storage resources.
Summary
An object of the embodiments of the present application is to provide a panoramic image generation method and device, so as to save storage resources.
To achieve the above object, an embodiment of the present application discloses a panoramic image generation method, including:
acquiring a two-dimensional planar image;
determining, according to a pre-established mapping relationship, the pixel points in the two-dimensional planar image to which the grid points in a preset stereo model are mapped, where the mapping relationship includes the correspondence between each grid point in the preset stereo model and a pixel point in a two-dimensional planar image;
for each grid plane in the preset stereo model, determining the pixel region formed by the pixel points to which the grid points in the grid plane are mapped as the pixel region corresponding to the grid plane, where one grid plane is formed by a first preset number of grid points;
for each grid plane, rendering with the pixel region corresponding to the grid plane to obtain a panoramic image.
Optionally, the process of establishing the mapping relationship may include:
determining the grid points in the preset stereo model;
for each determined grid point, determining a projection line between the grid point and a preset projection point, where the projection point is located above a bird's-eye-view plane of the preset stereo model, and the projection line starts from the projection point and passes through the grid point;
determining the intersection of the projection line and the bird's-eye-view plane;
converting the intersection into a bird's-eye-view pixel point;
performing inverse perspective transformation on the bird's-eye-view pixel point according to the extrinsic parameters of the camera that captures the two-dimensional planar image, to obtain a distortion-corrected pixel point;
performing the inverse operation of distortion correction on the distortion-corrected pixel point according to the intrinsic parameters of the camera, to obtain a two-dimensional plane pixel point;
establishing the mapping relationship according to each grid point and its corresponding two-dimensional plane pixel point.
Optionally, the step of converting the intersection into a bird's-eye-view pixel point may include:
converting the intersection into a bird's-eye-view pixel point according to a preset conversion coefficient.
Optionally, the step of determining the grid points in the preset stereo model may include:
dividing the preset stereo model into latitude and longitude;
determining, according to the division result, each grid point in the model, the longitude value and latitude value of each grid point, and the three-dimensional coordinate value of each grid point in the preset stereo model;
the step of establishing the mapping relationship according to each grid point and its corresponding two-dimensional plane pixel point may include:
for each grid point, using the longitude value and latitude value of the grid point as an index, establishing the mapping relationship between the three-dimensional coordinate value of the grid point and the pixel coordinate value of its corresponding two-dimensional plane pixel point;
the step of determining, according to the pre-established mapping relationship, the pixel points in the two-dimensional planar image to which the grid points in the preset stereo model are mapped may include:
determining, based on the index, the current grid point to be mapped in turn;
for each current grid point to be mapped, determining the three-dimensional coordinate value of the current grid point and the corresponding pixel coordinate value, the determined pixel coordinate value being the coordinate value of the pixel point in the two-dimensional planar image to which the current grid point is mapped;
the step of rendering, for each grid plane, with the pixel region corresponding to the grid plane to obtain a panoramic image may include:
using each grid plane in turn as the plane to be rendered;
determining the position to be rendered in the preset stereo model according to the three-dimensional coordinate values of the grid points contained in the plane to be rendered;
determining the target area in the two-dimensional planar image according to the pixel coordinate values corresponding to the three-dimensional coordinate values of the grid points contained in the plane to be rendered;
rendering the position to be rendered with the target area to obtain a panoramic image.
Optionally, the step of dividing the preset stereo model into latitude and longitude may include:
determining the latitude interval of each viewpoint corresponding to the preset stereo model;
calculating the sum of the latitude intervals of the viewpoints;
removing the overlapping latitude intervals between the viewpoints from the sum to obtain the latitude interval of the preset stereo model;
determining the longitude interval of each viewpoint as the longitude interval of the preset stereo model;
dividing the preset stereo model into latitude and longitude according to the longitude interval and latitude interval of the preset stereo model.
Optionally, the process of establishing the mapping relationship may include:
performing a distortion correction operation on the pixel points in a two-dimensional planar image sample according to the intrinsic parameters of the camera that captures the two-dimensional planar image, to obtain distortion-corrected pixel points;
performing perspective transformation on the distortion-corrected pixel points according to the extrinsic parameters of the camera, to obtain bird's-eye-view pixel points;
converting the bird's-eye-view pixel points into world coordinate points;
determining the projection line between each world coordinate point and a projection point of the preset stereo model, the projection point being preset;
determining the intersection of the projection line and the preset stereo model as a grid point;
establishing the mapping relationship according to each grid point and its corresponding pixel point in the two-dimensional planar image sample.
可选的,在所述根据预先建立的映射关系,确定预设立体模型中的各个网格点映射到所述二维平面图像中的像素点的步骤之后,还可以包括:
将映射失败的网格点标记为无效点;
所述针对所述预设立体模型中的每个网格平面,将该网格平面中各个网格点映射到的像素点构成的像素区域确定为该网格平面对应的像素区域的步骤,包括:
针对所述预设立体模型中的每个有效网格平面,将该有效网格平面中各 个网格点映射到的像素点构成的像素区域确定为该有效网格平面对应的像素区域,所述有效网格平面为:由除所述无效点之外的网格点构成的网格平面;
所述对于每个网格平面,用该网格平面对应的像素区域进行渲染,得到全景图像的步骤,包括:
对于每个有效网格平面,用该有效网格平面对应的像素区域进行渲染,得到全景图像。
可选的,所述获取二维平面图像的步骤,可以包括:
获取多个视点的二维平面图像;
所述根据预先建立的映射关系,确定预设立体模型中的各个网格点映射到所述二维平面图像中的像素点的步骤,包括:
确定当前视点、以及预设立体模型在所述当前视点的各个目标网格点;
根据预先建立的映射关系,确定所述各个目标网格点映射到所述二维平面图像中的像素点;
所述针对所述预设立体模型中的每个网格平面,将该网格平面中各个网格点映射到的像素点构成的像素区域确定为该网格平面对应的像素区域的步骤,可以包括:
针对所述预设立体模型中的每个目标网格平面,将该目标网格平面中网格点映射到的像素点构成的像素区域确定为该目标网格平面对应的目标像素区域;其中,一个目标网格平面由第一预设数量个目标网格点构成;
所述对于每个网格平面,用该网格平面对应的像素区域进行渲染,得到全景图像的步骤,可以包括:
对于每个目标网格平面,用该目标网格平面对应的目标像素区域进行渲染,得到当前视点的全景图像。
To achieve the above purpose, an embodiment of the present application further discloses a panoramic image generation device, including:
an acquisition module, configured to acquire a two-dimensional plane image;
a first determination module, configured to determine, according to a pre-established mapping relationship, the pixel points in the two-dimensional plane image to which the grid points of a preset three-dimensional model are mapped; wherein the mapping relationship contains the correspondence between the grid points of the preset three-dimensional model and the pixel points of a two-dimensional plane image;
a second determination module, configured to determine, for each grid plane in the preset three-dimensional model, the pixel region formed by the pixel points to which the grid points of that grid plane are mapped as the pixel region corresponding to that grid plane; wherein a grid plane is composed of a first preset number of grid points;
a rendering module, configured to perform rendering, for each grid plane, with the pixel region corresponding to that grid plane to obtain a panoramic image.
Optionally, the device may further include:
a third determination module, configured to determine the grid points in the preset three-dimensional model;
a fourth determination module, configured to determine, for each determined grid point, a projection line between that grid point and a preset projection point; wherein the projection point is located above the top-view bird's-eye plane of the preset three-dimensional model, and the projection line starts from the projection point and passes through that grid point;
a fifth determination module, configured to determine the intersection point of the projection line and the top-view bird's-eye plane;
a first conversion module, configured to convert the intersection point into a top-view bird's-eye pixel point;
an inverse transformation module, configured to perform an inverse perspective transformation on the top-view bird's-eye pixel point according to the extrinsic parameters of the camera that captures the two-dimensional plane image, to obtain a distortion-corrected pixel point;
an inverse operation module, configured to perform an inverse distortion-correction operation on the distortion-corrected pixel point according to the intrinsic parameters of the camera, to obtain a two-dimensional plane pixel point;
a first establishment module, configured to establish the mapping relationship according to each grid point and its corresponding two-dimensional plane pixel point.
Optionally, the first conversion module may be specifically configured to:
convert the intersection point into a top-view bird's-eye pixel point according to a preset conversion coefficient.
Optionally, the third determination module may include:
a division submodule, configured to divide the preset three-dimensional model by latitude and longitude;
a determination submodule, configured to determine, according to the division result, each grid point in the model, the longitude and latitude values of each grid point, and the three-dimensional coordinate values of each grid point in the preset three-dimensional model;
the first establishment module may be specifically configured to:
for each grid point, establish, with the longitude and latitude values of that grid point as an index, the mapping relationship between the three-dimensional coordinate values of that grid point and the pixel coordinate values of its corresponding two-dimensional plane pixel point;
the first determination module may be specifically configured to:
determine, based on the index, the current grid points to be mapped in sequence; and for each current grid point to be mapped, determine the three-dimensional coordinate values of that current grid point and its corresponding pixel coordinate values, the determined pixel coordinate values being the coordinate values of the pixel point in the two-dimensional plane image to which that current grid point is mapped;
the rendering module may be specifically configured to:
take each grid plane in turn as a plane to be rendered; determine a position to be rendered in the preset three-dimensional model according to the three-dimensional coordinate values of the grid points contained in the plane to be rendered; determine a target region in the two-dimensional plane image according to the pixel coordinate values corresponding to the three-dimensional coordinate values of the grid points contained in the plane to be rendered; and render the position to be rendered with the target region to obtain a panoramic image.
Optionally, the division submodule may be specifically configured to:
determine the latitude interval of each viewpoint corresponding to the preset three-dimensional model;
calculate the sum of the latitude intervals of the viewpoints;
remove the overlapping latitude intervals between viewpoints from the sum of the latitude intervals to obtain the latitude interval of the preset three-dimensional model;
determine the longitude interval of each viewpoint as the longitude interval of the preset three-dimensional model;
divide the preset three-dimensional model by latitude and longitude according to the longitude interval and the latitude interval of the preset three-dimensional model.
Optionally, the device may further include:
a distortion-correction operation module, configured to perform a distortion-correction operation on the pixel points in a two-dimensional plane image sample according to the intrinsic parameters of the camera that captures the two-dimensional plane image, to obtain distortion-corrected pixel points;
a perspective transformation module, configured to perform a perspective transformation on the distortion-corrected pixel points according to the extrinsic parameters of the camera, to obtain top-view bird's-eye pixel points;
a second conversion module, configured to convert the top-view bird's-eye pixel points into world coordinate points;
a sixth determination module, configured to determine a projection line between each world coordinate point and a projection point of the preset three-dimensional model, the projection point being preset;
a seventh determination module, configured to determine the intersection point of the projection line and the preset three-dimensional model as a grid point;
a second establishment module, configured to establish the mapping relationship according to each grid point and its corresponding pixel point in the two-dimensional plane image sample.
Optionally, the device may further include:
a marking module, configured to mark grid points for which the mapping fails as invalid points;
the second determination module may be specifically configured to:
for each effective grid plane in the preset three-dimensional model, determine the pixel region formed by the pixel points to which the grid points of that effective grid plane are mapped as the pixel region corresponding to that effective grid plane, the effective grid plane being: a grid plane composed of grid points other than the invalid points;
the rendering module may be specifically configured to:
for each effective grid plane, perform rendering with the pixel region corresponding to that effective grid plane to obtain a panoramic image.
Optionally, in the device described above:
the acquisition module may be specifically configured to:
acquire two-dimensional plane images of multiple viewpoints;
the first determination module may be specifically configured to:
determine a current viewpoint and the target grid points of the preset three-dimensional model at the current viewpoint; and determine, according to the pre-established mapping relationship, the pixel points in the two-dimensional plane images to which the target grid points are mapped;
the second determination module may be specifically configured to:
for each target grid plane in the preset three-dimensional model, determine the pixel region formed by the pixel points to which the grid points of that target grid plane are mapped as the target pixel region corresponding to that target grid plane; wherein a target grid plane is composed of a first preset number of target grid points;
the rendering module may be specifically configured to:
for each target grid plane, perform rendering with the target pixel region corresponding to that target grid plane to obtain a panoramic image of the current viewpoint.
To achieve the above purpose, an embodiment of the present application further discloses an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with each other through the communication bus;
the memory is configured to store a computer program;
the processor is configured to implement any of the above panoramic image generation methods when executing the program stored in the memory.
To achieve the above purpose, an embodiment of the present application further discloses a computer-readable storage medium storing a computer program that, when executed by a processor, implements any of the above panoramic image generation methods.
To achieve the above purpose, an embodiment of the present application further discloses executable program code to be executed so as to implement any of the above panoramic image generation methods.
By applying the embodiments of the present application, the pixel points in a two-dimensional plane image to which the grid points of a preset three-dimensional model are mapped are determined according to a pre-established mapping relationship; several grid points form a grid plane, and correspondingly the pixel points they map to form a pixel region; for each grid plane, rendering is performed with the pixel region corresponding to that grid plane, thereby obtaining a panoramic image. As can be seen, this solution does not need to generate a distortion-corrected image or a top-view bird's-eye image, which saves storage resources.
Of course, implementing any product or method of the present application does not necessarily need to achieve all of the above advantages at the same time.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of the present application and of the prior art more clearly, the drawings needed in the embodiments and the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a first schematic flowchart of a panoramic image generation method provided by an embodiment of the present application;
Fig. 2 is a schematic diagram of a three-dimensional model provided by an embodiment of the present application;
Fig. 3 is a schematic diagram of a storage layout of a mapping relationship provided by an embodiment of the present application;
Fig. 4 is a second schematic flowchart of the panoramic image generation method provided by an embodiment of the present application;
Fig. 5 is a schematic architecture diagram for generating a roamable panoramic image video provided by an embodiment of the present application;
Fig. 6 is a schematic diagram of a panoramic image at a certain viewpoint provided by an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a panoramic image generation device provided by an embodiment of the present application;
Fig. 8 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
Detailed Description
To make the purpose, technical solutions, and advantages of the present application clearer, the present application is further described in detail below with reference to the drawings and embodiments. Obviously, the described embodiments are only some of the embodiments of the present application rather than all of them. Based on the embodiments of the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present application.
To solve the above technical problem, embodiments of the present application provide a panoramic image generation method and device. The method and device can be applied to various devices with an image processing function, which is not specifically limited.
A panoramic image generation method provided by an embodiment of the present application is first described in detail below.
Fig. 1 is a first schematic flowchart of the panoramic image generation method provided by an embodiment of the present application, including:
S101: acquiring a two-dimensional plane image.
S102: determining, according to a pre-established mapping relationship, the pixel points in the two-dimensional plane image to which the grid points of a preset three-dimensional model are mapped; wherein the mapping relationship contains the correspondence between the grid points of the preset three-dimensional model and the pixel points of a two-dimensional plane image.
S103: for each grid plane in the preset three-dimensional model, determining the pixel region formed by the pixel points to which the grid points of that grid plane are mapped as the pixel region corresponding to that grid plane; wherein a grid plane is composed of a first preset number of grid points.
S104: for each grid plane, performing rendering with the pixel region corresponding to that grid plane to obtain a panoramic image.
By applying the embodiment shown in Fig. 1 of the present application, the pixel points in a two-dimensional plane image to which the grid points of a preset three-dimensional model are mapped are determined according to a pre-established mapping relationship; several grid points form a grid plane, and correspondingly the pixel points they map to form a pixel region; for each grid plane, rendering is performed with the pixel region corresponding to that grid plane, thereby obtaining a panoramic image. As can be seen, this solution does not need to generate a distortion-corrected image or a top-view bird's-eye image, which saves storage resources.
The embodiment shown in Fig. 1 is described in detail below:
S101: acquiring a two-dimensional plane image.
In one implementation, the executing body (the device executing this solution, hereinafter referred to as "this device") may have an image capture function, and the two-dimensional plane image acquired in S101 may be captured by this device. In another implementation, this device may also be communicatively connected to other cameras and acquire two-dimensional plane images captured by those cameras.
One or more two-dimensional plane images may be acquired. If multiple images are acquired, they may be images of different viewpoints of the same scene. The two-dimensional plane image may also be one frame of a video, or multiple frames of multiple videos, which is not specifically limited.
S102: determining, according to the pre-established mapping relationship, the pixel points in the two-dimensional plane image to which the grid points of the preset three-dimensional model are mapped; wherein the mapping relationship contains the correspondence between the grid points of the preset three-dimensional model and the pixel points of a two-dimensional plane image.
This embodiment provides two ways of establishing the mapping relationship: one is a reverse way of establishing the mapping relationship, and the other is a forward way of establishing the mapping relationship.
The reverse way of establishing the mapping relationship:
determining the grid points in the preset three-dimensional model;
for each determined grid point, determining a projection line between that grid point and a preset projection point; wherein the projection point is located above the top-view bird's-eye plane of the preset three-dimensional model, and the projection line starts from the projection point and passes through that grid point;
determining the intersection point of the projection line and the top-view bird's-eye plane;
converting the intersection point into a top-view bird's-eye pixel point;
performing an inverse perspective transformation on the top-view bird's-eye pixel point according to the extrinsic parameters of the camera that captures the two-dimensional plane image, to obtain a distortion-corrected pixel point;
performing an inverse distortion-correction operation on the distortion-corrected pixel point according to the intrinsic parameters of the camera, to obtain a two-dimensional plane pixel point;
establishing the mapping relationship according to each grid point and its corresponding two-dimensional plane pixel point.
This is explained with reference to Fig. 2. The partial sphere in Fig. 2 is the preset three-dimensional model, and the horizontal plane tangent to the partial sphere is the top-view bird's-eye plane of the preset three-dimensional model. The projection point may be located at the center of the partial sphere, or on the vertical line between the sphere center and the top-view bird's-eye plane; the projection point is located above the top-view bird's-eye plane.
Point Q is a grid point in the preset three-dimensional model. The projection line between point Q and the projection point G is determined, and this projection line intersects the top-view bird's-eye plane at point V.
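The following is an illustrative sketch only and is not part of the original disclosure. The intersection point V can be computed with a standard ray-plane intersection; the sketch assumes the top-view bird's-eye plane is z = 0 in the world coordinate system, and the function name and sample values are hypothetical:

    import numpy as np

    def intersect_birdseye_plane(G, Q, plane_z=0.0):
        """Intersect the ray starting at projection point G and passing
        through grid point Q with the horizontal plane z = plane_z.
        Returns the intersection point V, or None if the ray is parallel
        to the plane or points away from it."""
        G, Q = np.asarray(G, float), np.asarray(Q, float)
        d = Q - G                    # ray direction
        if abs(d[2]) < 1e-12:        # ray parallel to the plane
            return None
        t = (plane_z - G[2]) / d[2]  # solve G_z + t * d_z = plane_z
        return G + t * d if t > 0 else None

    # Hypothetical example: projection point 2.0 above the plane
    V = intersect_birdseye_plane(G=(0.0, 0.0, 2.0), Q=(1.0, 0.5, 0.8))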
The intersection point V is converted into a top-view bird's-eye pixel point. In an optional implementation of the present application, the intersection point may be converted into a top-view bird's-eye pixel point according to a preset conversion coefficient. The conversion coefficient can be understood as the coefficient for converting a coordinate point in the world coordinate system into a pixel point in a top-view bird's-eye image. It should be noted that in this embodiment no top-view bird's-eye image is generated; the conversion coefficient is merely used to convert the above intersection point.
In this implementation, the coordinate system of the three-dimensional model in Fig. 2 may be the world coordinate system; alternatively, it may be another coordinate system for which a conversion relationship to the world coordinate system is established, which is also reasonable.
Suppose the three-dimensional coordinate values of point Q in the world coordinate system are (q_x, q_y, q_z) and the coordinate values of point V in the world coordinate system are (V_x, V_y); the Z-axis coordinate V_z can be ignored here.
Suppose the conversion coefficient is k, the top-view bird's-eye pixel point converted from the intersection point V is v, and the pixel coordinate values of pixel point v are (v_x, v_y); then v_x = k*V_x and v_y = k*V_y. In this way the top-view bird's-eye pixel point is obtained, which can be understood as a pixel point in a top-view bird's-eye image. It should be noted that in this embodiment no top-view bird's-eye image is actually generated; in the process of establishing the mapping relationship, the pixel point converted from the intersection of the projection line and the top-view bird's-eye plane is called a top-view bird's-eye pixel point merely for convenience of description.
According to the extrinsic parameters of the camera that captures the two-dimensional plane image in S101, an inverse perspective transformation is performed on the top-view bird's-eye pixel point to obtain a distortion-corrected pixel point, which can be understood as a pixel point in a distortion-corrected image. It should be noted that in this embodiment no distortion-corrected image is actually generated; in the process of establishing the mapping relationship, the pixel point obtained through the inverse perspective transformation is called a distortion-corrected pixel point merely for convenience of description.
Then, according to the intrinsic parameters of the camera that captures the two-dimensional plane image in S101, an inverse distortion-correction operation is performed on the distortion-corrected pixel point to obtain a two-dimensional plane pixel point, which is the pixel point in the two-dimensional plane image to which grid point Q is mapped, that is, the two-dimensional plane pixel point corresponding to grid point Q. Suppose the pixel coordinate values of this two-dimensional plane pixel point are (p_x, p_y).
In this way, the two-dimensional plane pixel points corresponding to the grid points of the three-dimensional model in Fig. 2 can be determined, and the mapping relationship can thus be established. The mapping relationship contains the correspondence between the grid points of the three-dimensional model and the pixel points of the two-dimensional plane image, including the correspondence between (q_x, q_y, q_z) and (p_x, p_y).
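For illustration only, the chain from intersection point V to the two-dimensional plane pixel can be sketched as below. This is a minimal sketch under stated assumptions, not the patent's implementation: the inverse perspective transform is modeled as multiplying by the inverse of a 3x3 homography H derived from the camera extrinsics, and the inverse distortion-correction is modeled as re-applying a simple two-coefficient radial distortion; H, K, and dist are hypothetical parameters.

    import numpy as np

    def birdseye_to_plane_pixel(V_xy, k, H, K, dist):
        """Map intersection point V (world x, y) to a 2D-plane pixel (p_x, p_y).
        k    : preset conversion coefficient, v = k * V.
        H    : assumed 3x3 homography from corrected-image pixels to
               bird's-eye pixels (from the extrinsics); inverted here.
        K    : 3x3 intrinsic matrix [[fx,0,cx],[0,fy,cy],[0,0,1]].
        dist : (k1, k2) assumed radial distortion coefficients."""
        # 1) world intersection point -> bird's-eye pixel: v = k * V
        v = k * np.asarray(V_xy, float)
        # 2) inverse perspective transform: bird's-eye pixel -> corrected pixel
        u = np.linalg.inv(H) @ np.array([v[0], v[1], 1.0])
        u = u[:2] / u[2]
        # 3) inverse distortion correction: re-apply radial distortion
        fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
        xn, yn = (u[0] - cx) / fx, (u[1] - cy) / fy   # normalized coordinates
        r2 = xn * xn + yn * yn
        k1, k2 = dist
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        return (fx * xn * scale + cx, fy * yn * scale + cy)  # (p_x, p_y)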
In one implementation, in the process of establishing the mapping relationship in the reverse way, the preset three-dimensional model may first be divided by latitude and longitude, and then each grid point in the model, the longitude and latitude values of each grid point, and the three-dimensional coordinate values of each grid point in the preset three-dimensional model are determined according to the division result. In this way, the established mapping relationship contains the longitude and latitude values of the grid points, the three-dimensional coordinate values of the grid points, and the pixel coordinate values of the two-dimensional plane pixel points corresponding to the grid points. Specifically, the mapping relationship may be established with the longitude and latitude values of the grid points as an index.
It can be understood that the shape of the preset three-dimensional model and its position in the coordinate system are preset. Taking Fig. 2 as an example again, setting the sphere-center coordinates, the sphere radius, and the height of the partial sphere (the vertical distance from the top-view bird's-eye plane to the highest edge of the partial sphere) fixes the shape and position of the partial sphere, so the three-dimensional coordinate values of the grid points on the model surface can be further determined. In addition, dividing the model surface by latitude and longitude determines the longitude and latitude values of each grid point.
In this implementation, S102 may include: determining, based on the index, the current grid points to be mapped in sequence; and for each current grid point to be mapped, determining the three-dimensional coordinate values of that grid point and its corresponding pixel coordinate values, the determined pixel coordinate values being the coordinate values of the pixel point in the two-dimensional plane image to which that current grid point is mapped.
Suppose grid point Q in Fig. 2 has longitude value i and latitude value j; the established mapping relationship then contains the correspondence among (i, j), (q_x, q_y, q_z), and (p_x, p_y), where (i, j) can serve as the index. In this way, in S102 the current grid points to be mapped can be determined in sequence in the order of the latitude and longitude of the grid points, and the three-dimensional coordinate values and corresponding pixel coordinate values of each current grid point to be mapped can be determined, so as to obtain the pixel points in the two-dimensional plane image by mapping.
This implementation can be applied to sphere models or sphere-like models, such as the partial sphere model in Fig. 2. In these models, the three-dimensional coordinate values of the grid points on the model surface are not linearly arranged. It can be understood that in such models, determining grid points by latitude and longitude, compared with determining grid points by three-dimensional coordinate values, makes it easier to adjust the distribution of the grid points and their density.
Furthermore, the density of the grid points reflects the resolution of the mapping relationship: the more numerous and denser the grid points, the more mapping entries are established and the higher the resolution. The resolution of the mapping relationship here is adjustable and independent of the resolution of the image.
In this implementation, dividing the preset three-dimensional model by latitude and longitude may include: determining the latitude interval of each viewpoint corresponding to the preset three-dimensional model; calculating the sum of the latitude intervals of the viewpoints; removing the overlapping latitude intervals between viewpoints from the sum of the latitude intervals to obtain the latitude interval of the preset three-dimensional model; determining the longitude interval of each viewpoint as the longitude interval of the preset three-dimensional model; and dividing the preset three-dimensional model by latitude and longitude according to the longitude interval and the latitude interval of the preset three-dimensional model.
Taking Fig. 2 as an example, the model in Fig. 2 covers a 360-degree viewing angle, whereas the two-dimensional plane image acquired in S101 usually does not cover a 360-degree viewing angle. Therefore, multiple two-dimensional plane images may be acquired in S101; these images are the images at the viewpoints corresponding to the three-dimensional model and can be stitched into a 360-degree image. Suppose four two-dimensional plane images are acquired in S101, each with a viewing angle of at least 90 degrees. In most scenes the stitching is horizontal: for example, the partial sphere in Fig. 2 is divided into four parts, each corresponding to one image, and the four images, when stitched horizontally, are consistent with the viewing angle of the sphere.
Horizontal stitching can be understood as summing the widths of the four images (with the overlapping parts removed) while keeping the height unchanged. Latitude is divided based on the height of the sphere in Fig. 2, and longitude is divided based on the width of the plane obtained by unrolling the sphere surface of Fig. 2. Therefore, the sum of the latitude intervals of the four images with the overlapping parts removed is the latitude interval of the partial sphere in Fig. 2, while the longitude intervals of the four images are identical or nearly identical and can be directly taken as the longitude interval of the partial sphere in Fig. 2. Once the latitude and longitude intervals of the model are determined, the model can be divided by latitude and longitude.
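As a sketch only (with hypothetical numbers), the model's latitude and longitude intervals can be derived from the per-viewpoint intervals exactly as described above:

    def model_lat_lon_intervals(per_view_lat, overlap, per_view_lon):
        """Derive the model's latitude/longitude intervals from N viewpoints.
        per_view_lat : per-viewpoint latitude intervals [w1, ..., wN].
        overlap      : overlapping latitude interval d between adjacent
                       viewpoints (assumed identical, as in Fig. 3).
        per_view_lon : per-viewpoint longitude interval h (identical or
                       nearly identical across viewpoints, used directly)."""
        n = len(per_view_lat)
        w = sum(per_view_lat) - (n - 1) * overlap  # w = w1+...+wN - (N-1)d
        h = per_view_lon
        return w, h

    # Hypothetical example with four viewpoints:
    w, h = model_lat_lon_intervals([100, 100, 100, 100], overlap=10, per_view_lon=90)
    # w = 400 - 3*10 = 370, h = 90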
After the mapping relationship is established, it is stored. The storage layout of the mapping relationship may be as shown in Fig. 3, where w1, w2, w3, and w4 are the latitude intervals of the viewpoints (viewpoint 1, viewpoint 2, viewpoint 3, viewpoint 4) corresponding to the three-dimensional model, with overlapping parts d between them (assuming all three overlapping parts equal d). The latitude interval of the three-dimensional model is w = w1 + w2 + w3 + w4 - 3d; h is the longitude interval of each viewpoint, and the longitude interval of the three-dimensional model is also h. For a point in Fig. 3 with longitude value i and latitude value j, the three-dimensional coordinate values of the grid point and the corresponding pixel coordinate values are stored at that position, for example [(q_x, q_y, q_z), (p_x, p_y)].
A "viewpoint" can be used to describe the position of a virtual observer; it can contain information such as the camera position and the viewing angle of the images captured by the camera. Each viewpoint in Fig. 3 may correspond to one camera. Compared with some schemes in which each camera corresponds to its own mapping table, in this implementation multiple cameras correspond to a single mapping relationship. On the one hand this saves storage resources; on the other hand, when determining the pixel points to which grid points are mapped, looking up a single mapping relationship is more efficient than looking up multiple mapping tables.
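A minimal sketch of such a lookup table (illustrative only; the field names and grid resolution are hypothetical): the table is indexed by (longitude i, latitude j) and stores, per cell, the grid point's 3D coordinates together with the pixel coordinates it maps to.

    import numpy as np

    # One record per (longitude, latitude) cell.
    record = np.dtype([("q", np.float32, 3),   # (q_x, q_y, q_z)
                       ("p", np.float32, 2)])  # (p_x, p_y)

    H_STEPS, W_STEPS = 512, 2048               # hypothetical grid resolution
    mapping = np.zeros((H_STEPS, W_STEPS), dtype=record)

    # Writing one entry (hypothetical values) and reading it back:
    i, j = 20, 10                              # longitude index, latitude index
    mapping[j, i] = ((1.0, 0.5, 0.8), (333.0, 215.0))
    q_xyz, p_xy = mapping[j, i]["q"], mapping[j, i]["p"]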
The forward way of establishing the mapping relationship:
performing a distortion-correction operation on the pixel points in a two-dimensional plane image sample according to the intrinsic parameters of the camera that captures the two-dimensional plane image, to obtain distortion-corrected pixel points; performing a perspective transformation on the distortion-corrected pixel points according to the extrinsic parameters of the camera, to obtain top-view bird's-eye pixel points; converting the top-view bird's-eye pixel points into world coordinate points; determining a projection line between each world coordinate point and a projection point of the preset three-dimensional model, the projection point being preset; determining the intersection point of the projection line and the preset three-dimensional model as a grid point; and establishing the mapping relationship according to each grid point and its corresponding pixel point in the two-dimensional plane image sample.
The forward way of establishing the mapping relationship is the reverse of the above reverse way, specifically as follows:
Suppose the pixel coordinate values of a pixel point in the two-dimensional plane image are (p_x, p_y). Distortion correction is performed on the pixel point according to the camera intrinsic parameters to obtain a distortion-corrected pixel point; a perspective transformation is performed on the distortion-corrected pixel point according to the camera extrinsic parameters to obtain the top-view bird's-eye pixel point (v_x, v_y); the top-view bird's-eye pixel point (v_x, v_y) is converted into the coordinate values (V_x, V_y) in the world coordinate system according to the preset conversion coefficient k; (V_x, V_y) is projected onto the three-dimensional model, and the three-dimensional coordinate values of the corresponding grid point are (q_x, q_y, q_z); the established mapping relationship contains (q_x, q_y, q_z).
In the forward way of establishing the mapping relationship, pixel points are mapped to grid points, so the number of pixel points is related to the number of grid points contained in the mapping relationship; therefore, the resolution of the mapping relationship is also related to the resolution of the two-dimensional plane image. In other words, the above reverse way of establishing the mapping relationship can adjust the resolution of the mapping relationship more conveniently than the forward way.
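For illustration, a sketch of the forward chain under stated assumptions: distortion correction and the perspective transform are done with OpenCV's cv2.undistortPoints and cv2.perspectiveTransform, the world point is placed on the bird's-eye plane z = 0, and the grid point is found by intersecting the ray from the projection point G through the world point with a sphere of radius R centered at C; H, K, dist, G, C, and R are hypothetical parameters, not values given by the patent.

    import numpy as np
    import cv2

    def plane_pixel_to_grid_point(p_xy, K, dist, H, k, G, C, R):
        """Forward mapping sketch: pixel (p_x, p_y) in the 2D plane image
        sample -> grid point (q_x, q_y, q_z) on the sphere model.
        Returns the grid point, or None if the mapping fails."""
        pts = np.array([[p_xy]], np.float64)
        u = cv2.undistortPoints(pts, K, dist, P=K)   # distortion correction
        v = cv2.perspectiveTransform(u, H)[0, 0]     # perspective transform
        V = np.array([v[0] / k, v[1] / k, 0.0])      # bird's-eye pixel -> world

        G, C = np.asarray(G, float), np.asarray(C, float)
        d = V - G                                    # ray from G through V
        a, b = d @ d, 2.0 * d @ (G - C)
        c = (G - C) @ (G - C) - R * R
        disc = b * b - 4.0 * a * c
        if disc < 0:
            return None                              # ray misses the model
        roots = ((-b - disc**0.5) / (2 * a), (-b + disc**0.5) / (2 * a))
        ts = [t for t in roots if t > 0]
        return tuple(G + min(ts) * d) if ts else None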
S103: for each grid plane in the preset three-dimensional model, determining the pixel region formed by the pixel points to which the grid points of that grid plane are mapped as the pixel region corresponding to that grid plane; wherein a grid plane is composed of a first preset number of grid points.
S104: for each grid plane, performing rendering with the pixel region corresponding to that grid plane to obtain a panoramic image.
The first preset number may be set according to the actual situation, for example three, four, or five. Suppose it is three; it can be understood that three grid points can form a grid plane, and the three pixel points in the two-dimensional plane image to which these three grid points are mapped form a pixel region corresponding to that grid plane. In this way each grid plane corresponds to a pixel region, and for each grid plane, performing rendering with the pixel region corresponding to that grid plane yields the panoramic image.
In one implementation, the grid planes may be rendered with the pixel regions through GUI (Graphical User Interface) technology. When rendering through a GUI, some pixel points may be selected in the pixel region, and the grid plane is rendered with the pixel values of these pixel points; the number of selected pixel points depends on the performance of the device.
In the implementation described above, in which the established mapping relationship is indexed by the latitude and longitude of the grid points, S104 may include: taking each grid plane in turn as a plane to be rendered; determining a position to be rendered in the preset three-dimensional model according to the three-dimensional coordinate values of the grid points contained in the plane to be rendered; determining a target region in the two-dimensional plane image according to the pixel coordinate values corresponding to the three-dimensional coordinate values of the grid points contained in the plane to be rendered; and rendering the position to be rendered with the target region to obtain a panoramic image.
In this implementation, the current grid points to be mapped may be determined in sequence in the order of the latitude and longitude of the grid points, and the three-dimensional coordinate values and the corresponding pixel coordinate values of each current grid point to be mapped may be determined. In this way, the three-dimensional coordinate values of each grid point are obtained, from which the position of the plane to be rendered (the position to be rendered) can be determined in the three-dimensional model; in addition, the pixel coordinate values of the pixel point corresponding to each grid point are obtained, from which a target region can be determined in the two-dimensional plane image, and the position to be rendered is rendered with this target region, thereby obtaining the panoramic image.
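As a sketch only: one way to drive such rendering is to walk the table in (latitude, longitude) order, emit two triangles per grid cell, and pair each vertex's 3D position with its mapped pixel coordinates as texture coordinates; the resulting buffers could then be handed to any texture-mapping renderer. The table layout follows the hypothetical mapping array sketched earlier.

    import numpy as np

    def build_render_buffers(mapping, img_w, img_h):
        """Assemble triangle vertex positions (3D) and texture coordinates
        (UV) from the (latitude x longitude) mapping table; illustrative."""
        H, W = mapping.shape
        verts, uvs = [], []
        for j in range(H - 1):
            for i in range(W - 1):
                cell = [(j, i), (j, i + 1), (j + 1, i + 1), (j + 1, i)]
                quad_q = [mapping[a, b]["q"] for a, b in cell]
                quad_p = [mapping[a, b]["p"] for a, b in cell]
                # two triangles per cell: (0,1,2) and (0,2,3)
                for tri in ((0, 1, 2), (0, 2, 3)):
                    for t in tri:
                        verts.append(quad_q[t])
                        px, py = quad_p[t]
                        uvs.append((px / img_w, py / img_h))  # normalize UV
        return np.asarray(verts, np.float32), np.asarray(uvs, np.float32)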
In one implementation, after S102 the method further includes: marking grid points for which the mapping fails as invalid points. In this implementation, S103 may include: for each effective grid plane in the preset three-dimensional model, determining the pixel region formed by the pixel points to which the grid points of that effective grid plane are mapped as the pixel region corresponding to that effective grid plane, the effective grid plane being: a grid plane composed of grid points other than the invalid points; and S104 includes: for each effective grid plane, performing rendering with the pixel region corresponding to that effective grid plane to obtain a panoramic image.
If the number of grid points in the three-dimensional model is greater than the number of pixel points in the acquired two-dimensional plane image, there will be some grid points for which the mapping fails. These grid points have no corresponding pixel points in the two-dimensional plane image and thus cannot participate in the subsequent rendering. Therefore, these grid points can be marked as invalid points, the pixel regions corresponding to grid planes containing invalid points are no longer determined, and such grid planes are no longer rendered.
Continuing the above example, suppose the three-dimensional coordinate values of a grid point are (q_x, q_y, q_z) and the pixel coordinate values of the corresponding pixel point are (p_x, p_y). If p_x > w (the latitude interval of the three-dimensional model in Fig. 3), or p_x < 0, or p_y > h (the longitude interval of the three-dimensional model in Fig. 3), or p_y < 0, the grid point can be marked as an invalid point. Specifically, q_x, q_y, q_z, p_x, and p_y can all be set to a flag representing invalidity; for example, a characteristic flag value such as flag = -7 can be set.
Mapping failure also includes other cases, which are not listed one by one here.
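A minimal sketch of this marking step over the hypothetical mapping table sketched earlier (the bounds w, h and the sentinel -7 follow the example above):

    import numpy as np

    INVALID = -7.0  # characteristic flag value from the example above

    def mark_invalid_points(mapping, w, h):
        """Mark table entries whose mapped pixel falls outside [0, w] x [0, h]
        by overwriting both the 3D coordinates and the pixel coordinates
        with the sentinel value; illustrative only."""
        px, py = mapping["p"][..., 0], mapping["p"][..., 1]
        bad = (px < 0) | (px > w) | (py < 0) | (py > h)
        mapping["q"][bad] = INVALID
        mapping["p"][bad] = INVALID
        return bad  # boolean mask of invalid points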
In the embodiment of Fig. 1, what is acquired in S101 may be the frames of a video. In this case, applying the embodiment of Fig. 1 to each frame can generate a dynamic panoramic image video.
By applying the embodiment shown in Fig. 1 of the present application, the pixel points in a two-dimensional plane image to which the grid points of a preset three-dimensional model are mapped are determined according to a pre-established mapping relationship; several grid points form a grid plane, and correspondingly the pixel points they map to form a pixel region; for each grid plane, rendering is performed with the pixel region corresponding to that grid plane, thereby obtaining a panoramic image. As can be seen, this solution does not need to generate a distortion-corrected image or a top-view bird's-eye image, which saves storage resources.
Fig. 4 is a second schematic flowchart of the panoramic image generation method provided by an embodiment of the present application, including:
S401: acquiring two-dimensional plane images of multiple viewpoints.
S402: determining a current viewpoint and the target grid points of the preset three-dimensional model at the current viewpoint.
S403: determining, according to a pre-established mapping relationship, the pixel points in the two-dimensional plane images to which the target grid points are mapped.
S404: for each target grid plane in the preset three-dimensional model, determining the pixel region formed by the pixel points to which the grid points of that target grid plane are mapped as the target pixel region corresponding to that target grid plane; wherein a target grid plane is composed of a first preset number of target grid points.
S405: for each target grid plane, performing rendering with the target pixel region corresponding to that target grid plane to obtain a panoramic image of the current viewpoint.
Continuing with Fig. 2 as an example, applying the embodiment of Fig. 1 to render the partial sphere model in Fig. 2 yields a 360-degree panoramic image. In some cases, however, only the panoramic image of a certain viewpoint is needed; in those cases only the panoramic image of that viewpoint may be generated. Applying the embodiment shown in Fig. 4 can generate the panoramic image of a certain viewpoint.
This is described below with an application scenario. In a vehicle-mounted device, suppose four vehicle-mounted cameras capture images in the front, rear, left, and right directions respectively. Four two-dimensional plane images of the same moment and the same scene, for the front, rear, left, and right (four viewpoints), are acquired. Suppose the user only needs the rear panoramic image; the current viewpoint is then determined to be the rear viewpoint.
The target grid points of the preset three-dimensional model at the rear viewpoint are determined. In one implementation of the embodiment of Fig. 1, the latitude and longitude intervals of the three-dimensional model were determined, with the latitude interval w = w1 + w2 + w3 + w4 - 3d and the longitude interval h. In this implementation, supposing w1 = w2 = w3 = w4, it follows from that relation that wi = (w + 3d)/4, i ∈ [1, 4]. With this formula, the latitude range in the three-dimensional model corresponding to each camera viewpoint can be determined, and the grid points within this range are determined as the target grid points.
Alternatively, if N two-dimensional plane images are acquired, wi = (w + (N - 1)d)/N, i ∈ [1, N], can be used to determine the latitude range in the three-dimensional model corresponding to each camera viewpoint.
Only the pixel regions corresponding to the grid planes formed by target grid points are determined. For convenience of description, a grid plane formed by target grid points is here called a target grid plane, and the pixel region corresponding to a target grid plane is called a target pixel region. Rendering only the target grid planes with the target pixel regions yields the panoramic image of the current viewpoint.
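A sketch of selecting the target latitude range for one viewpoint (hypothetical indices; it assumes equal per-viewpoint intervals, equal overlaps d, and that viewpoint k starts at k*(wi - d), which is an assumption of this sketch rather than a statement of the patent):

    def viewpoint_lat_range(view_idx, n_views, w, d):
        """Latitude range [start, end) of viewpoint view_idx in the model.
        Uses wi = (w + (n-1)*d) / n for equal per-viewpoint intervals and
        assumes adjacent viewpoints overlap by d. Illustrative only."""
        wi = (w + (n_views - 1) * d) / n_views
        start = view_idx * (wi - d)
        return start, start + wi

    # Rear viewpoint (hypothetically index 1 of 4) with w = 370, d = 10:
    lo, hi = viewpoint_lat_range(1, 4, w=370, d=10)
    # wi = (370 + 30)/4 = 100; range = [90, 190)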
In the embodiment of Fig. 4, the current viewpoint can be switched according to actual needs; after switching, the embodiment of Fig. 4 can continue to be applied to generate the panoramic image of the new viewpoint.
In the embodiment of Fig. 4, the two-dimensional plane images of multiple viewpoints acquired in S401 may be images from multiple videos. For example, in the above vehicle-mounted device, the four vehicle-mounted cameras each send a video to this device; this device receives the four videos and acquires from them four images of the same moment. Applying the embodiment of Fig. 4 to these four images can generate a dynamic panoramic image video. In addition, the current viewpoint can also be switched, generating a dynamic, roamable panoramic image video.
For example, the architecture for generating a dynamic, roamable panoramic image video may be as shown in Fig. 5: multiple vehicle-mounted cameras (vehicle-mounted camera 1, vehicle-mounted camera 2, ..., vehicle-mounted camera N) send the captured videos to this device; this device can generate a panoramic video according to the pre-established mapping relationship, and can also update the viewpoint of the generated panoramic video according to the current viewpoint, that is, generate a roamable panoramic video and realize video roaming.
As shown in Fig. 6, view 1 and view 2 in Fig. 6 can be understood as display regions. If the current viewpoint is viewpoint 1, the dashed region in the three-dimensional model is rendered to obtain the panoramic image of that viewpoint, and the panoramic image is displayed in view 1; similarly, if the current viewpoint is viewpoint 2, the dashed region in the three-dimensional model is rendered to obtain the panoramic image of that viewpoint, and the panoramic image is displayed in view 2.
Alternatively, the two-dimensional plane images of multiple viewpoints in S401 may also be multiple image streams captured by vehicle-mounted surround-view lenses; the specific application scenario is not limited.
By applying the embodiment shown in Fig. 4 of the present application, only the panoramic image of the current viewpoint may be generated; compared with generating a 360-degree panoramic image, this reduces the amount of computation, and panoramic video roaming can also be realized, offering a better experience.
Corresponding to the above method embodiments, an embodiment of the present application further provides a panoramic image generation device.
Fig. 7 is a schematic structural diagram of a panoramic image generation device provided by an embodiment of the present application, including:
an acquisition module 701, configured to acquire a two-dimensional plane image;
a first determination module 702, configured to determine, according to a pre-established mapping relationship, the pixel points in the two-dimensional plane image to which the grid points of a preset three-dimensional model are mapped; wherein the mapping relationship contains the correspondence between the grid points of the preset three-dimensional model and the pixel points of a two-dimensional plane image;
a second determination module 703, configured to determine, for each grid plane in the preset three-dimensional model, the pixel region formed by the pixel points to which the grid points of that grid plane are mapped as the pixel region corresponding to that grid plane; wherein a grid plane is composed of a first preset number of grid points;
a rendering module 704, configured to perform rendering, for each grid plane, with the pixel region corresponding to that grid plane to obtain a panoramic image.
In one implementation, the device may further include:
a third determination module, configured to determine the grid points in the preset three-dimensional model;
a fourth determination module, configured to determine, for each determined grid point, a projection line between that grid point and a preset projection point; wherein the projection point is located above the top-view bird's-eye plane of the preset three-dimensional model, and the projection line starts from the projection point and passes through that grid point;
a fifth determination module, configured to determine the intersection point of the projection line and the top-view bird's-eye plane;
a first conversion module, configured to convert the intersection point into a top-view bird's-eye pixel point;
an inverse transformation module, configured to perform an inverse perspective transformation on the top-view bird's-eye pixel point according to the extrinsic parameters of the camera that captures the two-dimensional plane image, to obtain a distortion-corrected pixel point;
an inverse operation module, configured to perform an inverse distortion-correction operation on the distortion-corrected pixel point according to the intrinsic parameters of the camera, to obtain a two-dimensional plane pixel point;
a first establishment module, configured to establish the mapping relationship according to each grid point and its corresponding two-dimensional plane pixel point.
In one implementation, the first conversion module may be specifically configured to:
convert the intersection point into a top-view bird's-eye pixel point according to a preset conversion coefficient.
In one implementation, the third determination module includes:
a division submodule, configured to divide the preset three-dimensional model by latitude and longitude;
a determination submodule, configured to determine, according to the division result, each grid point in the model, the longitude and latitude values of each grid point, and the three-dimensional coordinate values of each grid point in the preset three-dimensional model;
the first establishment module may be specifically configured to:
for each grid point, establish, with the longitude and latitude values of that grid point as an index, the mapping relationship between the three-dimensional coordinate values of that grid point and the pixel coordinate values of its corresponding two-dimensional plane pixel point;
the first determination module may be specifically configured to:
determine, based on the index, the current grid points to be mapped in sequence; and for each current grid point to be mapped, determine the three-dimensional coordinate values of that current grid point and its corresponding pixel coordinate values, the determined pixel coordinate values being the coordinate values of the pixel point in the two-dimensional plane image to which that current grid point is mapped;
the rendering module 704 may be specifically configured to:
take each grid plane in turn as a plane to be rendered; determine a position to be rendered in the preset three-dimensional model according to the three-dimensional coordinate values of the grid points contained in the plane to be rendered; determine a target region in the two-dimensional plane image according to the pixel coordinate values corresponding to the three-dimensional coordinate values of the grid points contained in the plane to be rendered; and render the position to be rendered with the target region to obtain a panoramic image.
In one implementation, the division submodule may be specifically configured to:
determine the latitude interval of each viewpoint corresponding to the preset three-dimensional model;
calculate the sum of the latitude intervals of the viewpoints;
remove the overlapping latitude intervals between viewpoints from the sum of the latitude intervals to obtain the latitude interval of the preset three-dimensional model;
determine the longitude interval of each viewpoint as the longitude interval of the preset three-dimensional model;
divide the preset three-dimensional model by latitude and longitude according to the longitude interval and the latitude interval of the preset three-dimensional model.
In one implementation, the device may further include:
a distortion-correction operation module, configured to perform a distortion-correction operation on the pixel points in a two-dimensional plane image sample according to the intrinsic parameters of the camera that captures the two-dimensional plane image, to obtain distortion-corrected pixel points;
a perspective transformation module, configured to perform a perspective transformation on the distortion-corrected pixel points according to the extrinsic parameters of the camera, to obtain top-view bird's-eye pixel points;
a second conversion module, configured to convert the top-view bird's-eye pixel points into world coordinate points;
a sixth determination module, configured to determine a projection line between each world coordinate point and a projection point of the preset three-dimensional model, the projection point being preset;
a seventh determination module, configured to determine the intersection point of the projection line and the preset three-dimensional model as a grid point;
a second establishment module, configured to establish the mapping relationship according to each grid point and its corresponding pixel point in the two-dimensional plane image sample.
In one implementation, the device may further include:
a marking module, configured to mark grid points for which the mapping fails as invalid points;
the second determination module 703 may be specifically configured to:
for each effective grid plane in the preset three-dimensional model, determine the pixel region formed by the pixel points to which the grid points of that effective grid plane are mapped as the pixel region corresponding to that effective grid plane, the effective grid plane being: a grid plane composed of grid points other than the invalid points;
the rendering module 704 may be specifically configured to:
for each effective grid plane, perform rendering with the pixel region corresponding to that effective grid plane to obtain a panoramic image.
In one implementation, in the device described above:
the acquisition module 701 may be specifically configured to:
acquire two-dimensional plane images of multiple viewpoints;
the first determination module 702 may be specifically configured to:
determine a current viewpoint and the target grid points of the preset three-dimensional model at the current viewpoint; and determine, according to the pre-established mapping relationship, the pixel points in the two-dimensional plane images to which the target grid points are mapped;
the second determination module 703 may be specifically configured to:
for each target grid plane in the preset three-dimensional model, determine the pixel region formed by the pixel points to which the grid points of that target grid plane are mapped as the target pixel region corresponding to that target grid plane; wherein a target grid plane is composed of a first preset number of target grid points;
the rendering module 704 may be specifically configured to:
for each target grid plane, perform rendering with the target pixel region corresponding to that target grid plane to obtain a panoramic image of the current viewpoint.
An embodiment of the present application further provides an electronic device, including a processor and a memory, where the memory is configured to store a computer program, and the processor is configured to implement the following method steps when executing the program stored in the memory:
acquiring a two-dimensional plane image;
determining, according to a pre-established mapping relationship, the pixel points in the two-dimensional plane image to which the grid points of a preset three-dimensional model are mapped; wherein the mapping relationship contains the correspondence between the grid points of the preset three-dimensional model and the pixel points of a two-dimensional plane image;
for each grid plane in the preset three-dimensional model, determining the pixel region formed by the pixel points to which the grid points of that grid plane are mapped as the pixel region corresponding to that grid plane; wherein a grid plane is composed of a first preset number of grid points;
for each grid plane, performing rendering with the pixel region corresponding to that grid plane to obtain a panoramic image.
In one implementation, the processor is further configured to implement the following method steps when executing the program stored in the memory:
determining the grid points in the preset three-dimensional model;
for each determined grid point, determining a projection line between that grid point and a preset projection point; wherein the projection point is located above the top-view bird's-eye plane of the preset three-dimensional model, and the projection line starts from the projection point and passes through that grid point;
determining the intersection point of the projection line and the top-view bird's-eye plane;
converting the intersection point into a top-view bird's-eye pixel point;
performing an inverse perspective transformation on the top-view bird's-eye pixel point according to the extrinsic parameters of the camera that captures the two-dimensional plane image, to obtain a distortion-corrected pixel point;
performing an inverse distortion-correction operation on the distortion-corrected pixel point according to the intrinsic parameters of the camera, to obtain a two-dimensional plane pixel point;
establishing the mapping relationship according to each grid point and its corresponding two-dimensional plane pixel point.
In one implementation, the processor is further configured to implement the following method step when executing the program stored in the memory:
converting the intersection point into a top-view bird's-eye pixel point according to a preset conversion coefficient.
In one implementation, the processor is further configured to implement the following method steps when executing the program stored in the memory:
dividing the preset three-dimensional model by latitude and longitude;
determining, according to the division result, each grid point in the model, the longitude and latitude values of each grid point, and the three-dimensional coordinate values of each grid point in the preset three-dimensional model;
for each grid point, establishing, with the longitude and latitude values of that grid point as an index, the mapping relationship between the three-dimensional coordinate values of that grid point and the pixel coordinate values of its corresponding two-dimensional plane pixel point;
determining, based on the index, the current grid points to be mapped in sequence;
for each current grid point to be mapped, determining the three-dimensional coordinate values of that current grid point and its corresponding pixel coordinate values, the determined pixel coordinate values being the coordinate values of the pixel point in the two-dimensional plane image to which that current grid point is mapped;
taking each grid plane in turn as a plane to be rendered;
determining a position to be rendered in the preset three-dimensional model according to the three-dimensional coordinate values of the grid points contained in the plane to be rendered;
determining a target region in the two-dimensional plane image according to the pixel coordinate values corresponding to the three-dimensional coordinate values of the grid points contained in the plane to be rendered;
rendering the position to be rendered with the target region to obtain a panoramic image.
In one implementation, the processor is further configured to implement the following method steps when executing the program stored in the memory:
determining the latitude interval of each viewpoint corresponding to the preset three-dimensional model;
calculating the sum of the latitude intervals of the viewpoints;
removing the overlapping latitude intervals between viewpoints from the sum of the latitude intervals to obtain the latitude interval of the preset three-dimensional model;
determining the longitude interval of each viewpoint as the longitude interval of the preset three-dimensional model;
dividing the preset three-dimensional model by latitude and longitude according to the longitude interval and the latitude interval of the preset three-dimensional model.
In one implementation, the processor is further configured to implement the following method steps when executing the program stored in the memory:
performing a distortion-correction operation on the pixel points in a two-dimensional plane image sample according to the intrinsic parameters of the camera that captures the two-dimensional plane image, to obtain distortion-corrected pixel points;
performing a perspective transformation on the distortion-corrected pixel points according to the extrinsic parameters of the camera, to obtain top-view bird's-eye pixel points;
converting the top-view bird's-eye pixel points into world coordinate points;
determining a projection line between each world coordinate point and a projection point of the preset three-dimensional model, the projection point being preset;
determining the intersection point of the projection line and the preset three-dimensional model as a grid point;
establishing the mapping relationship according to each grid point and its corresponding pixel point in the two-dimensional plane image sample.
In one implementation, the processor is further configured to implement the following method steps when executing the program stored in the memory:
after the step of determining, according to the pre-established mapping relationship, the pixel points in the two-dimensional plane image to which the grid points of the preset three-dimensional model are mapped, marking grid points for which the mapping fails as invalid points;
for each effective grid plane in the preset three-dimensional model, determining the pixel region formed by the pixel points to which the grid points of that effective grid plane are mapped as the pixel region corresponding to that effective grid plane, the effective grid plane being: a grid plane composed of grid points other than the invalid points;
for each effective grid plane, performing rendering with the pixel region corresponding to that effective grid plane to obtain a panoramic image.
In one implementation, the processor is further configured to implement the following method steps when executing the program stored in the memory:
acquiring two-dimensional plane images of multiple viewpoints;
determining a current viewpoint and the target grid points of the preset three-dimensional model at the current viewpoint;
determining, according to the pre-established mapping relationship, the pixel points in the two-dimensional plane images to which the target grid points are mapped;
for each target grid plane in the preset three-dimensional model, determining the pixel region formed by the pixel points to which the grid points of that target grid plane are mapped as the target pixel region corresponding to that target grid plane; wherein a target grid plane is composed of a first preset number of target grid points;
for each target grid plane, performing rendering with the target pixel region corresponding to that target grid plane to obtain a panoramic image of the current viewpoint.
An embodiment of the present application further provides an electronic device, as shown in Fig. 8, including a processor 801, a communication interface 802, a memory 803, and a communication bus 804, where the processor 801, the communication interface 802, and the memory 803 communicate with each other through the communication bus 804;
the memory 803 is configured to store a computer program;
the processor 801 is configured to implement any of the above panoramic image generation methods when executing the program stored in the memory 803.
The communication bus mentioned for the above electronic device may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus, among others. The communication bus can be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in the figure, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the above electronic device and other devices.
The memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), for example at least one disk memory. Optionally, the memory may also be at least one storage device located away from the aforementioned processor.
The above processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
By applying the embodiment shown in Fig. 8 of the present application, the pixel points in a two-dimensional plane image to which the grid points of a preset three-dimensional model are mapped are determined according to a pre-established mapping relationship; several grid points form a grid plane, and correspondingly the pixel points they map to form a pixel region; for each grid plane, rendering is performed with the pixel region corresponding to that grid plane, thereby obtaining a panoramic image. As can be seen, this solution does not need to generate a distortion-corrected image or a top-view bird's-eye image, which saves storage resources.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the following steps:
acquiring a two-dimensional plane image;
determining, according to a pre-established mapping relationship, the pixel points in the two-dimensional plane image to which the grid points of a preset three-dimensional model are mapped; wherein the mapping relationship contains the correspondence between the grid points of the preset three-dimensional model and the pixel points of a two-dimensional plane image;
for each grid plane in the preset three-dimensional model, determining the pixel region formed by the pixel points to which the grid points of that grid plane are mapped as the pixel region corresponding to that grid plane; wherein a grid plane is composed of a first preset number of grid points;
for each grid plane, performing rendering with the pixel region corresponding to that grid plane to obtain a panoramic image.
In one implementation, the computer program, when executed by a processor, may further implement the following steps:
determining the grid points in the preset three-dimensional model;
for each determined grid point, determining a projection line between that grid point and a preset projection point; wherein the projection point is located above the top-view bird's-eye plane of the preset three-dimensional model, and the projection line starts from the projection point and passes through that grid point;
determining the intersection point of the projection line and the top-view bird's-eye plane;
converting the intersection point into a top-view bird's-eye pixel point;
performing an inverse perspective transformation on the top-view bird's-eye pixel point according to the extrinsic parameters of the camera that captures the two-dimensional plane image, to obtain a distortion-corrected pixel point;
performing an inverse distortion-correction operation on the distortion-corrected pixel point according to the intrinsic parameters of the camera, to obtain a two-dimensional plane pixel point;
establishing the mapping relationship according to each grid point and its corresponding two-dimensional plane pixel point.
In one implementation, the computer program, when executed by a processor, may further implement the following step:
converting the intersection point into a top-view bird's-eye pixel point according to a preset conversion coefficient.
In one implementation, the computer program, when executed by a processor, may further implement the following steps:
dividing the preset three-dimensional model by latitude and longitude;
determining, according to the division result, each grid point in the model, the longitude and latitude values of each grid point, and the three-dimensional coordinate values of each grid point in the preset three-dimensional model;
for each grid point, storing, with the longitude and latitude values of that grid point as an index, the mapping relationship between the three-dimensional coordinate values of that grid point and the pixel coordinate values of its corresponding two-dimensional plane pixel point;
determining, based on the index, the current grid points to be mapped in sequence;
for each current grid point to be mapped, determining the three-dimensional coordinate values of that current grid point and its corresponding pixel coordinate values, the determined pixel coordinate values being the coordinate values of the pixel point in the two-dimensional plane image to which that current grid point is mapped;
taking each grid plane in turn as a plane to be rendered;
determining a position to be rendered in the preset three-dimensional model according to the three-dimensional coordinate values of the grid points contained in the plane to be rendered;
determining a target region in the two-dimensional plane image according to the pixel coordinate values corresponding to the three-dimensional coordinate values of the grid points contained in the plane to be rendered;
rendering the position to be rendered with the target region to obtain a panoramic image.
In one implementation, the computer program, when executed by a processor, may further implement the following steps:
determining the latitude interval of each viewpoint corresponding to the preset three-dimensional model;
calculating the sum of the latitude intervals of the viewpoints;
removing the overlapping latitude intervals between viewpoints from the sum of the latitude intervals to obtain the latitude interval of the preset three-dimensional model;
determining the longitude interval of each viewpoint as the longitude interval of the preset three-dimensional model;
dividing the preset three-dimensional model by latitude and longitude according to the longitude interval and the latitude interval of the preset three-dimensional model.
In one implementation, the computer program, when executed by a processor, may further implement the following steps:
performing a distortion-correction operation on the pixel points in a two-dimensional plane image sample according to the intrinsic parameters of the camera that captures the two-dimensional plane image, to obtain distortion-corrected pixel points;
performing a perspective transformation on the distortion-corrected pixel points according to the extrinsic parameters of the camera, to obtain top-view bird's-eye pixel points;
converting the top-view bird's-eye pixel points into world coordinate points;
determining a projection line between each world coordinate point and a projection point of the preset three-dimensional model, the projection point being preset;
determining the intersection point of the projection line and the preset three-dimensional model as a grid point;
establishing the mapping relationship according to each grid point and its corresponding pixel point in the two-dimensional plane image sample.
In one implementation, the computer program, when executed by a processor, may further implement the following steps:
after the step of determining, according to the pre-established mapping relationship, the pixel points in the two-dimensional plane image to which the grid points of the preset three-dimensional model are mapped, marking grid points for which the mapping fails as invalid points;
for each effective grid plane in the preset three-dimensional model, determining the pixel region formed by the pixel points to which the grid points of that effective grid plane are mapped as the pixel region corresponding to that effective grid plane, the effective grid plane being: a grid plane composed of grid points other than the invalid points;
for each effective grid plane, performing rendering with the pixel region corresponding to that effective grid plane to obtain a panoramic image.
In one implementation, the computer program, when executed by a processor, may further implement the following steps:
acquiring two-dimensional plane images of multiple viewpoints;
determining a current viewpoint and the target grid points of the preset three-dimensional model at the current viewpoint;
determining, according to the pre-established mapping relationship, the pixel points in the two-dimensional plane images to which the target grid points are mapped;
for each target grid plane in the preset three-dimensional model, determining the pixel region formed by the pixel points to which the grid points of that target grid plane are mapped as the target pixel region corresponding to that target grid plane; wherein a target grid plane is composed of a first preset number of target grid points;
for each target grid plane, performing rendering with the target pixel region corresponding to that target grid plane to obtain a panoramic image of the current viewpoint.
An embodiment of the present application further discloses executable program code to be executed so as to implement any of the above panoramic image generation methods.
It should be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the statement "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device including that element.
The embodiments in this specification are described in a related manner; for identical or similar parts between the embodiments, reference may be made to each other, and each embodiment focuses on its differences from the other embodiments. In particular, since the device embodiments, electronic device embodiments, computer-readable storage medium embodiments, and executable program code embodiments are substantially similar to the method embodiments, their description is relatively simple, and for relevant parts reference may be made to the description of the method embodiments.
The above are only preferred embodiments of the present application and are not intended to limit the protection scope of the present application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall be included within the protection scope of the present application.

Claims (26)

  1. A panoramic image generation method, characterized in that the method comprises:
    acquiring a two-dimensional plane image;
    determining, according to a pre-established mapping relationship, the pixel points in the two-dimensional plane image to which the grid points of a preset three-dimensional model are mapped; wherein the mapping relationship contains the correspondence between the grid points of the preset three-dimensional model and the pixel points of a two-dimensional plane image;
    for each grid plane in the preset three-dimensional model, determining the pixel region formed by the pixel points to which the grid points of that grid plane are mapped as the pixel region corresponding to that grid plane; wherein a grid plane is composed of a first preset number of grid points;
    for each grid plane, performing rendering with the pixel region corresponding to that grid plane to obtain a panoramic image.
  2. The method according to claim 1, characterized in that the process of establishing the mapping relationship comprises:
    determining the grid points in the preset three-dimensional model;
    for each determined grid point, determining a projection line between that grid point and a preset projection point; wherein the projection point is located above the top-view bird's-eye plane of the preset three-dimensional model, and the projection line starts from the projection point and passes through that grid point;
    determining the intersection point of the projection line and the top-view bird's-eye plane;
    converting the intersection point into a top-view bird's-eye pixel point;
    performing an inverse perspective transformation on the top-view bird's-eye pixel point according to the extrinsic parameters of the camera that captures the two-dimensional plane image, to obtain a distortion-corrected pixel point;
    performing an inverse distortion-correction operation on the distortion-corrected pixel point according to the intrinsic parameters of the camera, to obtain a two-dimensional plane pixel point;
    establishing the mapping relationship according to each grid point and its corresponding two-dimensional plane pixel point.
  3. The method according to claim 2, characterized in that the step of converting the intersection point into a top-view bird's-eye pixel point comprises:
    converting the intersection point into a top-view bird's-eye pixel point according to a preset conversion coefficient.
  4. The method according to claim 2, characterized in that the step of determining the grid points in the preset three-dimensional model comprises:
    dividing the preset three-dimensional model by latitude and longitude;
    determining, according to the division result, each grid point in the model, the longitude and latitude values of each grid point, and the three-dimensional coordinate values of each grid point in the preset three-dimensional model;
    the step of establishing the mapping relationship according to each grid point and its corresponding two-dimensional plane pixel point comprises:
    for each grid point, establishing, with the longitude and latitude values of that grid point as an index, the mapping relationship between the three-dimensional coordinate values of that grid point and the pixel coordinate values of its corresponding two-dimensional plane pixel point;
    the step of determining, according to the pre-established mapping relationship, the pixel points in the two-dimensional plane image to which the grid points of the preset three-dimensional model are mapped comprises:
    determining, based on the index, the current grid points to be mapped in sequence;
    for each current grid point to be mapped, determining the three-dimensional coordinate values of that current grid point and its corresponding pixel coordinate values, the determined pixel coordinate values being the coordinate values of the pixel point in the two-dimensional plane image to which that current grid point is mapped;
    the step of performing rendering, for each grid plane, with the pixel region corresponding to that grid plane to obtain a panoramic image comprises:
    taking each grid plane in turn as a plane to be rendered;
    determining a position to be rendered in the preset three-dimensional model according to the three-dimensional coordinate values of the grid points contained in the plane to be rendered;
    determining a target region in the two-dimensional plane image according to the pixel coordinate values corresponding to the three-dimensional coordinate values of the grid points contained in the plane to be rendered;
    rendering the position to be rendered with the target region to obtain a panoramic image.
  5. The method according to claim 4, characterized in that the step of dividing the preset three-dimensional model by latitude and longitude comprises:
    determining the latitude interval of each viewpoint corresponding to the preset three-dimensional model;
    calculating the sum of the latitude intervals of the viewpoints;
    removing the overlapping latitude intervals between viewpoints from the sum of the latitude intervals to obtain the latitude interval of the preset three-dimensional model;
    determining the longitude interval of each viewpoint as the longitude interval of the preset three-dimensional model;
    dividing the preset three-dimensional model by latitude and longitude according to the longitude interval and the latitude interval of the preset three-dimensional model.
  6. The method according to claim 1, characterized in that the process of establishing the mapping relationship comprises:
    performing a distortion-correction operation on the pixel points in a two-dimensional plane image sample according to the intrinsic parameters of the camera that captures the two-dimensional plane image, to obtain distortion-corrected pixel points;
    performing a perspective transformation on the distortion-corrected pixel points according to the extrinsic parameters of the camera, to obtain top-view bird's-eye pixel points;
    converting the top-view bird's-eye pixel points into world coordinate points;
    determining a projection line between each world coordinate point and a projection point of the preset three-dimensional model, the projection point being preset;
    determining the intersection point of the projection line and the preset three-dimensional model as a grid point;
    establishing the mapping relationship according to each grid point and its corresponding pixel point in the two-dimensional plane image sample.
  7. The method according to claim 1, characterized in that, after the step of determining, according to the pre-established mapping relationship, the pixel points in the two-dimensional plane image to which the grid points of the preset three-dimensional model are mapped, the method further comprises:
    marking grid points for which the mapping fails as invalid points;
    the step of determining, for each grid plane in the preset three-dimensional model, the pixel region formed by the pixel points to which the grid points of that grid plane are mapped as the pixel region corresponding to that grid plane comprises:
    for each effective grid plane in the preset three-dimensional model, determining the pixel region formed by the pixel points to which the grid points of that effective grid plane are mapped as the pixel region corresponding to that effective grid plane, the effective grid plane being: a grid plane composed of grid points other than the invalid points;
    the step of performing rendering, for each grid plane, with the pixel region corresponding to that grid plane to obtain a panoramic image comprises:
    for each effective grid plane, performing rendering with the pixel region corresponding to that effective grid plane to obtain a panoramic image.
  8. The method according to claim 1, characterized in that the step of acquiring a two-dimensional plane image comprises:
    acquiring two-dimensional plane images of multiple viewpoints;
    the step of determining, according to the pre-established mapping relationship, the pixel points in the two-dimensional plane image to which the grid points of the preset three-dimensional model are mapped comprises:
    determining a current viewpoint and the target grid points of the preset three-dimensional model at the current viewpoint;
    determining, according to the pre-established mapping relationship, the pixel points in the two-dimensional plane images to which the target grid points are mapped;
    the step of determining, for each grid plane in the preset three-dimensional model, the pixel region formed by the pixel points to which the grid points of that grid plane are mapped as the pixel region corresponding to that grid plane comprises:
    for each target grid plane in the preset three-dimensional model, determining the pixel region formed by the pixel points to which the grid points of that target grid plane are mapped as the target pixel region corresponding to that target grid plane; wherein a target grid plane is composed of a first preset number of target grid points;
    the step of performing rendering, for each grid plane, with the pixel region corresponding to that grid plane to obtain a panoramic image comprises:
    for each target grid plane, performing rendering with the target pixel region corresponding to that target grid plane to obtain a panoramic image of the current viewpoint.
  9. A panoramic image generation device, characterized in that the device comprises:
    an acquisition module, configured to acquire a two-dimensional plane image;
    a first determination module, configured to determine, according to a pre-established mapping relationship, the pixel points in the two-dimensional plane image to which the grid points of a preset three-dimensional model are mapped; wherein the mapping relationship contains the correspondence between the grid points of the preset three-dimensional model and the pixel points of a two-dimensional plane image;
    a second determination module, configured to determine, for each grid plane in the preset three-dimensional model, the pixel region formed by the pixel points to which the grid points of that grid plane are mapped as the pixel region corresponding to that grid plane; wherein a grid plane is composed of a first preset number of grid points;
    a rendering module, configured to perform rendering, for each grid plane, with the pixel region corresponding to that grid plane to obtain a panoramic image.
  10. The device according to claim 9, characterized in that the device further comprises:
    a third determination module, configured to determine the grid points in the preset three-dimensional model;
    a fourth determination module, configured to determine, for each determined grid point, a projection line between that grid point and a preset projection point; wherein the projection point is located above the top-view bird's-eye plane of the preset three-dimensional model, and the projection line starts from the projection point and passes through that grid point;
    a fifth determination module, configured to determine the intersection point of the projection line and the top-view bird's-eye plane;
    a first conversion module, configured to convert the intersection point into a top-view bird's-eye pixel point;
    an inverse transformation module, configured to perform an inverse perspective transformation on the top-view bird's-eye pixel point according to the extrinsic parameters of the camera that captures the two-dimensional plane image, to obtain a distortion-corrected pixel point;
    an inverse operation module, configured to perform an inverse distortion-correction operation on the distortion-corrected pixel point according to the intrinsic parameters of the camera, to obtain a two-dimensional plane pixel point;
    a first establishment module, configured to establish the mapping relationship according to each grid point and its corresponding two-dimensional plane pixel point.
  11. The device according to claim 10, characterized in that the first conversion module is specifically configured to: convert the intersection point into a top-view bird's-eye pixel point according to a preset conversion coefficient.
  12. The device according to claim 10, characterized in that the third determination module comprises:
    a division submodule, configured to divide the preset three-dimensional model by latitude and longitude;
    a determination submodule, configured to determine, according to the division result, each grid point in the model, the longitude and latitude values of each grid point, and the three-dimensional coordinate values of each grid point in the preset three-dimensional model;
    the first establishment module is specifically configured to:
    for each grid point, establish, with the longitude and latitude values of that grid point as an index, the mapping relationship between the three-dimensional coordinate values of that grid point and the pixel coordinate values of its corresponding two-dimensional plane pixel point;
    the first determination module is specifically configured to:
    determine, based on the index, the current grid points to be mapped in sequence; and for each current grid point to be mapped, determine the three-dimensional coordinate values of that current grid point and its corresponding pixel coordinate values, the determined pixel coordinate values being the coordinate values of the pixel point in the two-dimensional plane image to which that current grid point is mapped;
    the rendering module is specifically configured to:
    take each grid plane in turn as a plane to be rendered; determine a position to be rendered in the preset three-dimensional model according to the three-dimensional coordinate values of the grid points contained in the plane to be rendered; determine a target region in the two-dimensional plane image according to the pixel coordinate values corresponding to the three-dimensional coordinate values of the grid points contained in the plane to be rendered; and render the position to be rendered with the target region to obtain a panoramic image.
  13. The device according to claim 12, characterized in that the division submodule is specifically configured to:
    determine the latitude interval of each viewpoint corresponding to the preset three-dimensional model;
    calculate the sum of the latitude intervals of the viewpoints;
    remove the overlapping latitude intervals between viewpoints from the sum of the latitude intervals to obtain the latitude interval of the preset three-dimensional model;
    determine the longitude interval of each viewpoint as the longitude interval of the preset three-dimensional model;
    divide the preset three-dimensional model by latitude and longitude according to the longitude interval and the latitude interval of the preset three-dimensional model.
  14. The device according to claim 9, characterized in that the device further comprises:
    a distortion-correction operation module, configured to perform a distortion-correction operation on the pixel points in a two-dimensional plane image sample according to the intrinsic parameters of the camera that captures the two-dimensional plane image, to obtain distortion-corrected pixel points;
    a perspective transformation module, configured to perform a perspective transformation on the distortion-corrected pixel points according to the extrinsic parameters of the camera, to obtain top-view bird's-eye pixel points;
    a second conversion module, configured to convert the top-view bird's-eye pixel points into world coordinate points;
    a sixth determination module, configured to determine a projection line between each world coordinate point and a projection point of the preset three-dimensional model, the projection point being preset;
    a seventh determination module, configured to determine the intersection point of the projection line and the preset three-dimensional model as a grid point;
    a second establishment module, configured to establish the mapping relationship according to each grid point and its corresponding pixel point in the two-dimensional plane image sample.
  15. The device according to claim 9, characterized in that the device further comprises:
    a marking module, configured to mark grid points for which the mapping fails as invalid points;
    the second determination module is specifically configured to:
    for each effective grid plane in the preset three-dimensional model, determine the pixel region formed by the pixel points to which the grid points of that effective grid plane are mapped as the pixel region corresponding to that effective grid plane, the effective grid plane being: a grid plane composed of grid points other than the invalid points;
    the rendering module is specifically configured to:
    for each effective grid plane, perform rendering with the pixel region corresponding to that effective grid plane to obtain a panoramic image.
  16. The device according to claim 9, characterized in that, in the device:
    the acquisition module is specifically configured to: acquire two-dimensional plane images of multiple viewpoints;
    the first determination module is specifically configured to:
    determine a current viewpoint and the target grid points of the preset three-dimensional model at the current viewpoint; and determine, according to the pre-established mapping relationship, the pixel points in the two-dimensional plane images to which the target grid points are mapped;
    the second determination module is specifically configured to:
    for each target grid plane in the preset three-dimensional model, determine the pixel region formed by the pixel points to which the grid points of that target grid plane are mapped as the target pixel region corresponding to that target grid plane; wherein a target grid plane is composed of a first preset number of target grid points;
    the rendering module is specifically configured to:
    for each target grid plane, perform rendering with the target pixel region corresponding to that target grid plane to obtain a panoramic image of the current viewpoint.
  17. An electronic device, characterized in that the electronic device comprises a processor and a memory, wherein the memory is configured to store a computer program, and the processor is configured to implement the following method steps when executing the program stored in the memory:
    acquiring a two-dimensional plane image;
    determining, according to a pre-established mapping relationship, the pixel points in the two-dimensional plane image to which the grid points of a preset three-dimensional model are mapped; wherein the mapping relationship contains the correspondence between the grid points of the preset three-dimensional model and the pixel points of a two-dimensional plane image;
    for each grid plane in the preset three-dimensional model, determining the pixel region formed by the pixel points to which the grid points of that grid plane are mapped as the pixel region corresponding to that grid plane; wherein a grid plane is composed of a first preset number of grid points;
    for each grid plane, performing rendering with the pixel region corresponding to that grid plane to obtain a panoramic image.
  18. The electronic device according to claim 17, characterized in that the process of establishing the mapping relationship comprises:
    determining the grid points in the preset three-dimensional model;
    for each determined grid point, determining a projection line between that grid point and a preset projection point; wherein the projection point is located above the top-view bird's-eye plane of the preset three-dimensional model, and the projection line starts from the projection point and passes through that grid point;
    determining the intersection point of the projection line and the top-view bird's-eye plane;
    converting the intersection point into a top-view bird's-eye pixel point;
    performing an inverse perspective transformation on the top-view bird's-eye pixel point according to the extrinsic parameters of the camera that captures the two-dimensional plane image, to obtain a distortion-corrected pixel point;
    performing an inverse distortion-correction operation on the distortion-corrected pixel point according to the intrinsic parameters of the camera, to obtain a two-dimensional plane pixel point;
    establishing the mapping relationship according to each grid point and its corresponding two-dimensional plane pixel point.
  19. The electronic device according to claim 18, characterized in that the step of converting the intersection point into a top-view bird's-eye pixel point comprises:
    converting the intersection point into a top-view bird's-eye pixel point according to a preset conversion coefficient.
  20. The electronic device according to claim 18, characterized in that the step of determining the grid points in the preset three-dimensional model comprises:
    dividing the preset three-dimensional model by latitude and longitude;
    determining, according to the division result, each grid point in the model, the longitude and latitude values of each grid point, and the three-dimensional coordinate values of each grid point in the preset three-dimensional model;
    the step of establishing the mapping relationship according to each grid point and its corresponding two-dimensional plane pixel point comprises:
    for each grid point, establishing, with the longitude and latitude values of that grid point as an index, the mapping relationship between the three-dimensional coordinate values of that grid point and the pixel coordinate values of its corresponding two-dimensional plane pixel point;
    the step of determining, according to the pre-established mapping relationship, the pixel points in the two-dimensional plane image to which the grid points of the preset three-dimensional model are mapped comprises:
    determining, based on the index, the current grid points to be mapped in sequence;
    for each current grid point to be mapped, determining the three-dimensional coordinate values of that current grid point and its corresponding pixel coordinate values, the determined pixel coordinate values being the coordinate values of the pixel point in the two-dimensional plane image to which that current grid point is mapped;
    the step of performing rendering, for each grid plane, with the pixel region corresponding to that grid plane to obtain a panoramic image comprises:
    taking each grid plane in turn as a plane to be rendered;
    determining a position to be rendered in the preset three-dimensional model according to the three-dimensional coordinate values of the grid points contained in the plane to be rendered;
    determining a target region in the two-dimensional plane image according to the pixel coordinate values corresponding to the three-dimensional coordinate values of the grid points contained in the plane to be rendered;
    rendering the position to be rendered with the target region to obtain a panoramic image.
  21. The electronic device according to claim 20, characterized in that the step of dividing the preset three-dimensional model by latitude and longitude comprises:
    determining the latitude interval of each viewpoint corresponding to the preset three-dimensional model;
    calculating the sum of the latitude intervals of the viewpoints;
    removing the overlapping latitude intervals between viewpoints from the sum of the latitude intervals to obtain the latitude interval of the preset three-dimensional model;
    determining the longitude interval of each viewpoint as the longitude interval of the preset three-dimensional model;
    dividing the preset three-dimensional model by latitude and longitude according to the longitude interval and the latitude interval of the preset three-dimensional model.
  22. The electronic device according to claim 17, characterized in that the process of establishing the mapping relationship comprises:
    performing a distortion-correction operation on the pixel points in a two-dimensional plane image sample according to the intrinsic parameters of the camera that captures the two-dimensional plane image, to obtain distortion-corrected pixel points;
    performing a perspective transformation on the distortion-corrected pixel points according to the extrinsic parameters of the camera, to obtain top-view bird's-eye pixel points;
    converting the top-view bird's-eye pixel points into world coordinate points;
    determining a projection line between each world coordinate point and a projection point of the preset three-dimensional model, the projection point being preset;
    determining the intersection point of the projection line and the preset three-dimensional model as a grid point;
    establishing the mapping relationship according to each grid point and its corresponding pixel point in the two-dimensional plane image sample.
  23. The electronic device according to claim 17, characterized in that, after the step of determining, according to the pre-established mapping relationship, the pixel points in the two-dimensional plane image to which the grid points of the preset three-dimensional model are mapped, the method further comprises:
    marking grid points for which the mapping fails as invalid points;
    the step of determining, for each grid plane in the preset three-dimensional model, the pixel region formed by the pixel points to which the grid points of that grid plane are mapped as the pixel region corresponding to that grid plane comprises:
    for each effective grid plane in the preset three-dimensional model, determining the pixel region formed by the pixel points to which the grid points of that effective grid plane are mapped as the pixel region corresponding to that effective grid plane, the effective grid plane being: a grid plane composed of grid points other than the invalid points;
    the step of performing rendering, for each grid plane, with the pixel region corresponding to that grid plane to obtain a panoramic image comprises:
    for each effective grid plane, performing rendering with the pixel region corresponding to that effective grid plane to obtain a panoramic image.
  24. The electronic device according to claim 17, characterized in that the step of acquiring a two-dimensional plane image comprises:
    acquiring two-dimensional plane images of multiple viewpoints;
    the step of determining, according to the pre-established mapping relationship, the pixel points in the two-dimensional plane image to which the grid points of the preset three-dimensional model are mapped comprises:
    determining a current viewpoint and the target grid points of the preset three-dimensional model at the current viewpoint;
    determining, according to the pre-established mapping relationship, the pixel points in the two-dimensional plane images to which the target grid points are mapped;
    the step of determining, for each grid plane in the preset three-dimensional model, the pixel region formed by the pixel points to which the grid points of that grid plane are mapped as the pixel region corresponding to that grid plane comprises:
    for each target grid plane in the preset three-dimensional model, determining the pixel region formed by the pixel points to which the grid points of that target grid plane are mapped as the target pixel region corresponding to that target grid plane; wherein a target grid plane is composed of a first preset number of target grid points;
    the step of performing rendering, for each grid plane, with the pixel region corresponding to that grid plane to obtain a panoramic image comprises:
    for each target grid plane, performing rendering with the target pixel region corresponding to that target grid plane to obtain a panoramic image of the current viewpoint.
  25. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the method steps of any one of claims 1-8.
  26. Executable program code, characterized in that the executable program code is configured to be executed so as to implement the method steps of any one of claims 1-8.
PCT/CN2018/098634 2017-08-03 2018-08-03 Panoramic image generation method and device WO2019024935A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US16/635,763 US11012620B2 (en) 2017-08-03 2018-08-03 Panoramic image generation method and device
EP18841194.6A EP3664443B1 (en) 2017-08-03 2018-08-03 Panoramic image generation method and device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710655999.7 2017-08-03
CN201710655999.7A CN109547766B (zh) 2017-08-03 2017-08-03 Panoramic image generation method and device

Publications (1)

Publication Number Publication Date
WO2019024935A1 true WO2019024935A1 (zh) 2019-02-07

Family

ID=65232300

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/098634 WO2019024935A1 (zh) 2017-08-03 2018-08-03 一种全景图像生成方法及装置

Country Status (4)

Country Link
US (1) US11012620B2 (zh)
EP (1) EP3664443B1 (zh)
CN (1) CN109547766B (zh)
WO (1) WO2019024935A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111009024A (zh) * 2019-12-09 2020-04-14 咪咕视讯科技有限公司 Method for generating dynamic image, electronic device, and storage medium
CN111723174A (zh) * 2020-06-19 2020-09-29 航天宏图信息技术股份有限公司 Rapid regional statistics method and system for raster data
CN112562063A (zh) * 2020-12-08 2021-03-26 北京百度网讯科技有限公司 Method, apparatus, device, and storage medium for three-dimensional try-on of objects
CN115589536A (zh) * 2022-12-12 2023-01-10 杭州巨岩欣成科技有限公司 Multi-camera spatial fusion method and device for swimming pool anti-drowning

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109964245A (zh) * 2016-12-06 2019-07-02 深圳市大疆创新科技有限公司 System and method for correcting wide-angle images
CN111754385A (zh) * 2019-03-26 2020-10-09 深圳中科飞测科技有限公司 Data point model processing method and system, detection method and system, and readable medium
CN110675350B (zh) * 2019-10-22 2022-05-06 普联技术有限公司 Field-of-view coordinate mapping method and device for a pan-tilt camera, storage medium, and pan-tilt camera
CN113066158B (zh) * 2019-12-16 2023-03-10 杭州海康威视数字技术股份有限公司 Vehicle-mounted surround-view method and device
CN111212267A (zh) * 2020-01-16 2020-05-29 聚好看科技股份有限公司 Panoramic image partitioning method and server
CN111681190A (zh) * 2020-06-18 2020-09-18 深圳天海宸光科技有限公司 High-precision coordinate mapping method for panoramic video
CN111737506B (zh) * 2020-06-24 2023-12-22 众趣(北京)科技有限公司 Three-dimensional data display method and device, and electronic device
CN112116524A (zh) * 2020-09-21 2020-12-22 胡翰 Correction method and device for facade textures in street-view images
CN114500970B (zh) * 2020-11-13 2024-04-26 聚好看科技股份有限公司 Panoramic video image processing and display method and device
CN112686824A (zh) * 2020-12-30 2021-04-20 北京迈格威科技有限公司 Image correction method and device, electronic device, and computer-readable medium
CN113206992A (zh) * 2021-04-20 2021-08-03 聚好看科技股份有限公司 Method for converting the projection format of panoramic video, and display device
CN113096254B (zh) * 2021-04-23 2023-09-22 北京百度网讯科技有限公司 Target object rendering method and device, computer device, and medium
CN113438463B (zh) * 2021-07-30 2022-08-19 贝壳找房(北京)科技有限公司 Simulation method and device for orthographic camera images, storage medium, and electronic device
CN114240740B (zh) * 2021-12-16 2022-08-16 数坤(北京)网络科技股份有限公司 Method and device for acquiring unfolded bone images, medical device, and storage medium
CN115641399B (zh) * 2022-09-08 2024-05-17 上海新迪数字技术有限公司 Image-based multi-layer mesh picking method and system
CN115661293B (zh) * 2022-10-27 2023-09-19 东莘电磁科技(成都)有限公司 Method for generating transient induction feature images of targets in electromagnetic scattering
CN116680351A (zh) * 2023-05-17 2023-09-01 中国地震台网中心 Method and system for real-time acquisition of earthquake population heat data based on mobile big data

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050104966A1 (en) * 2001-11-30 2005-05-19 Microsoft Corporation Interactive images
CN102750734A (zh) * 2011-08-26 2012-10-24 新奥特(北京)视频技术有限公司 Method and system for displaying a virtual three-dimensional earth system
CN105245838A (zh) * 2015-09-29 2016-01-13 成都虚拟世界科技有限公司 Panoramic video playing method and player
CN106530218A (zh) * 2016-10-28 2017-03-22 浙江宇视科技有限公司 Coordinate conversion method and device

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008271308A (ja) 2007-04-23 2008-11-06 Sanyo Electric Co Ltd Image processing device and method, and vehicle
JP2009129001A (ja) 2007-11-20 2009-06-11 Sanyo Electric Co Ltd Driving support system, vehicle, and three-dimensional object region estimation method
TW201103787A (en) * 2009-07-31 2011-02-01 Automotive Res & Testing Ct Obstacle determination system and method utilizing bird's-eye images
JP5872764B2 (ja) 2010-12-06 2016-03-01 富士通テン株式会社 Image display system
US9282321B2 (en) * 2011-02-17 2016-03-08 Legend3D, Inc. 3D model multi-reviewer system
WO2014043814A1 (en) * 2012-09-21 2014-03-27 Tamaggo Inc. Methods and apparatus for displaying and manipulating a panoramic image by tiles
US10373366B2 (en) * 2015-05-14 2019-08-06 Qualcomm Incorporated Three-dimensional model generation
CN105163158A (zh) * 2015-08-05 2015-12-16 北京奇艺世纪科技有限公司 Image processing method and device
CN105913478A (zh) * 2015-12-28 2016-08-31 乐视致新电子科技(天津)有限公司 360-degree panoramic display method, display module, and mobile terminal
US10523865B2 (en) * 2016-01-06 2019-12-31 Texas Instruments Incorporated Three dimensional rendering for surround view using predetermined viewpoint lookup tables
TWI613106B (zh) 2016-05-05 2018-02-01 威盛電子股份有限公司 Vehicle surrounding image processing method and device
CN105959675A (zh) * 2016-05-25 2016-09-21 腾讯科技(深圳)有限公司 Video data processing method and device
CN106131535B (zh) * 2016-07-29 2018-03-02 传线网络科技(上海)有限公司 Video capture method and device, and video generation method and device
CN106570822B (zh) * 2016-10-25 2020-10-16 宇龙计算机通信科技(深圳)有限公司 Face texture mapping method and device
CN106815805A (zh) * 2017-01-17 2017-06-09 湖南优象科技有限公司 Fast distortion correction method based on Bayer images
US10733786B2 (en) * 2018-07-20 2020-08-04 Facebook, Inc. Rendering 360 depth content

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050104966A1 (en) * 2001-11-30 2005-05-19 Microsoft Corporation Interactive images
CN102750734A (zh) * 2011-08-26 2012-10-24 新奥特(北京)视频技术有限公司 Method and system for displaying a virtual three-dimensional earth system
CN105245838A (zh) * 2015-09-29 2016-01-13 成都虚拟世界科技有限公司 Panoramic video playing method and player
CN106530218A (zh) * 2016-10-28 2017-03-22 浙江宇视科技有限公司 Coordinate conversion method and device

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111009024A (zh) * 2019-12-09 2020-04-14 咪咕视讯科技有限公司 Method for generating dynamic image, electronic device, and storage medium
CN111009024B (zh) * 2019-12-09 2024-03-26 咪咕视讯科技有限公司 Method for generating dynamic image, electronic device, and storage medium
CN111723174A (zh) * 2020-06-19 2020-09-29 航天宏图信息技术股份有限公司 Rapid regional statistics method and system for raster data
CN112562063A (zh) * 2020-12-08 2021-03-26 北京百度网讯科技有限公司 Method, apparatus, device, and storage medium for three-dimensional try-on of objects
CN115589536A (zh) * 2022-12-12 2023-01-10 杭州巨岩欣成科技有限公司 Multi-camera spatial fusion method and device for swimming pool anti-drowning

Also Published As

Publication number Publication date
US11012620B2 (en) 2021-05-18
EP3664443B1 (en) 2023-10-25
CN109547766B (zh) 2020-08-14
CN109547766A (zh) 2019-03-29
EP3664443A1 (en) 2020-06-10
US20200366838A1 (en) 2020-11-19
EP3664443A4 (en) 2020-06-10

Similar Documents

Publication Publication Date Title
WO2019024935A1 (zh) Panoramic image generation method and device
WO2019192358A1 (zh) Panoramic video synthesis method and device, and electronic device
CN107945112B (zh) Panoramic image stitching method and device
US7570280B2 (en) Image providing method and device
CN108537721B (zh) Panoramic image processing method and device, and electronic device
KR102208773B1 (ko) Panoramic image compression method and apparatus
CN109308686B (zh) Fisheye image processing method and device, equipment, and storage medium
CN106331527B (zh) Image stitching method and device
CN107993276B (zh) Panoramic image generation method and device
KR20190026876A (ko) Method and apparatus for performing mapping onto spherical panoramic images
KR20160116075A (ko) Image processing apparatus with automatic correction of images acquired from a camera, and method thereof
WO2010028559A1 (zh) Image stitching method and device
CN111750820A (zh) Image positioning method and system
WO2021208486A1 (zh) Camera coordinate transformation method, terminal, and storage medium
WO2020151268A1 (zh) Method for generating a dynamic 3D asteroid image and portable terminal
CA3101222C (en) Image processing method and device, and three-dimensional imaging system
US20190266802A1 (en) Display of Visual Data with a Virtual Reality Headset
US10798300B2 (en) Method and device for unfolding lens image into panoramic image
CN111294580A (zh) GPU-based camera video projection method, device, equipment, and storage medium
CN112017242B (zh) Display method and device, equipment, and storage medium
CN115174805A (zh) Panoramic stereoscopic image generation method and device, and electronic device
CN113724141B (zh) Image correction method and device, and electronic device
JP7278720B2 (ja) Generation device, generation method, and program
CN111161148A (zh) Panoramic image generation method, device, equipment, and storage medium
CN111028357B (zh) Soft shadow processing method and device for augmented reality devices

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18841194

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018841194

Country of ref document: EP

Effective date: 20200303