CN112215033A - Method, device and system for generating vehicle panoramic all-round view image and storage medium


Info

Publication number: CN112215033A (granted publication: CN112215033B)
Authority: CN (China)
Application number: CN201910616903.5A
Original language: Chinese (zh)
Inventor: 王泽文
Applicant and current assignee: Hangzhou Hikvision Digital Technology Co Ltd
Legal status: Granted; Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a method, a device and a system for generating a vehicle panoramic all-around view image, and a storage medium, belonging to the technical field of vehicles. The method comprises the following steps: if it is detected that the vehicle has switched from a first scene to a second scene, determining residual scene information based on first scene information and second scene information, wherein the first scene information comprises a first scene image of the vehicle's surroundings and the second scene information comprises a second scene image of the vehicle's surroundings; updating a first stereo scene model based on the residual scene information to obtain a second stereo scene model; and determining the panoramic all-around view image of the vehicle in the second scene based on the second scene image and the second stereo scene model. The method and the device adaptively update the stereo scene model according to dynamically changing scene information to obtain a dynamic stereo scene model that adapts to scene changes, effectively avoiding the severe stretching deformation of objects that occurs when a panoramic all-around view image is generated based on a fixed stereo scene model.

Description

Method, device and system for generating vehicle panoramic all-round view image and storage medium
Technical Field
The present application relates to the field of vehicle technologies, and in particular, to a method, an apparatus, a system, and a storage medium for generating a vehicle panoramic all-around image.
Background
The panoramic all-around view image of a vehicle is a panoramic image showing the 360-degree scene around the vehicle. It can be obtained by processing images collected by a plurality of cameras arranged around the vehicle and mapping the collected images into a three-dimensional space. Through the panoramic all-around view image, the driver can intuitively check whether there are obstacles at any angle around the vehicle and learn their relative positions and distances, which enlarges the driver's field of vision and effectively reduces the occurrence of accidents such as scraping and collision.
In the related art, the panoramic all-around view image of a vehicle is generally generated based on a fixed stereo scene model. Specifically, when the vehicle is in a low-speed driving state or a parking state, scene information of the scene in which the vehicle is located may be acquired, the scene information including scene images captured by a plurality of cameras disposed around the vehicle. A stereo scene model representing the current scene in three-dimensional space is then constructed based on the acquired scene information, a spatial mapping relationship between the scene images and the actual scene is determined based on the scene images and the shooting parameters, and the scene images are mapped into the stereo scene model based on this spatial mapping relationship, yielding the panoramic all-around view image of the vehicle in the current scene. Thereafter, the constructed stereo scene model is used as a fixed stereo scene model, and scene images collected at any later time are mapped into it to obtain the panoramic all-around view image of the vehicle at that time.
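The fixed-model pipeline described above can be illustrated with a toy numeric sketch: a bowl-shaped point set stands in for the fixed stereo scene model, and an ideal pinhole projection stands in for the spatial mapping relation that assigns each model vertex a pixel of a captured scene image. The bowl shape, the intrinsics, and all names here are hypothetical simplifications, not the related-art implementation; production systems use calibrated fisheye cameras and textured meshes.

```python
import numpy as np

def bowl_model(radius=10.0, rings=6, sectors=12):
    """Vertices of a hypothetical bowl-shaped stereo scene model
    surrounding the vehicle (a common stand-in for the real mesh)."""
    t = np.linspace(0.1, np.pi / 2, rings)                 # polar angles
    p = np.linspace(0, 2 * np.pi, sectors, endpoint=False)  # azimuths
    tt, pp = np.meshgrid(t, p, indexing="ij")
    return np.stack([radius * np.sin(tt) * np.cos(pp),
                     radius * np.sin(tt) * np.sin(pp),
                     radius * np.cos(tt)], axis=-1).reshape(-1, 3)

def texture_model(vertices, image, K):
    """Map scene-image pixels onto model vertices through a pinhole
    projection (the 'spatial mapping relation' between image and scene)."""
    z = vertices[:, 2:3]
    uv = (vertices @ K.T)[:, :2] / np.maximum(z, 1e-6)      # project to image
    uv = np.clip(uv.round().astype(int), 0,
                 np.array(image.shape[1::-1]) - 1)          # stay inside image
    return image[uv[:, 1], uv[:, 0]]                        # colour per vertex

K = np.array([[100.0, 0.0, 64.0],
              [0.0, 100.0, 64.0],
              [0.0, 0.0, 1.0]])                # toy camera intrinsics
verts = bowl_model()
img = np.zeros((128, 128, 3), dtype=np.uint8)  # placeholder scene image
colours = texture_model(verts, img, K)         # one sampled colour per vertex
```

Rendering the coloured vertices then gives the surround view; the key point is that `verts` never changes, which is exactly the limitation the next paragraph describes.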
However, the scene of the vehicle is not fixed. If, after the scene changes, the changed scene images are still mapped into the original fixed stereo scene model, the fixed model can only represent the scene before the change in three-dimensional space and cannot accurately represent the changed scene, so objects in the resulting panoramic all-around view image are severely stretched and deformed.
Disclosure of Invention
The embodiments of the application provide a method, a device and a system for generating a vehicle panoramic all-around view image, and a storage medium, which can solve the problem in the related art that objects in the image are prone to severe stretching deformation when the panoramic all-around view image is generated based on a fixed stereo scene model. The technical scheme is as follows:
in one aspect, a method for generating a vehicle panoramic all-round view image is provided, and the method comprises the following steps:
if the fact that the vehicle is switched from a first scene to a second scene is detected, determining residual scene information based on the first scene information and the second scene information;
wherein the residual scene information is used for indicating scene change of the second scene relative to the first scene, the first scene information comprises a first scene image around the vehicle, and the second scene information comprises a second scene image around the vehicle;
updating a first stereo scene model based on the residual scene information to obtain a second stereo scene model, wherein the first stereo scene model is used for representing the first scene in a three-dimensional space, and the second stereo scene model is used for representing the second scene in the three-dimensional space;
determining a panoramic all-around view image of the vehicle in the second scene based on the second scene image and the second stereoscopic scene model.
Optionally, the determining residual scene information based on the first scene information and the second scene information includes:
determining second point cloud information based on the second scene information, wherein the second point cloud information is used for indicating a coordinate set of the second scene in a three-dimensional space;
determining a residual error between the second point cloud information and first point cloud information to obtain residual error point cloud information, wherein the first point cloud information is used for indicating a coordinate set of the first scene in a three-dimensional space;
and determining the residual point cloud information as the residual scene information.
Optionally, the updating the first stereoscopic scene model based on the residual scene information to obtain a second stereoscopic scene model includes:
quantizing the residual scene information to obtain quantized residual scene information;
and updating the first stereo scene model based on the quantized residual scene information to obtain the second stereo scene model.
Optionally, the residual scene information is residual point cloud information, where the residual point cloud information is a residual between second point cloud information and first point cloud information, the first point cloud information is used to indicate a coordinate set of the first scene in a three-dimensional space, and the second point cloud information is used to indicate a coordinate set of the second scene in the three-dimensional space;
the quantizing the residual scene information to obtain quantized residual scene information includes:
quantizing the residual point cloud information to obtain quantized residual point cloud information;
the updating the first stereo scene model based on the quantized residual scene information to obtain the second stereo scene model includes:
and summing the quantized residual point cloud information and the first stereo scene model to obtain the second stereo scene model.
Optionally, said determining a panoramic all-around image of the vehicle in the second scene based on the second scene image and the second stereoscopic scene model comprises:
acquiring a first spatial mapping relation between the first scene image and an actual scene;
determining a second spatial mapping relation between a first sub-image and an actual scene based on the residual scene information and the shooting parameters, wherein the first sub-image is a partial image corresponding to a changed scene in the second scene image, and the changed scene is a partial scene of the second scene changed relative to the first scene;
and mapping the second scene image into the second stereo scene model based on the first mapping relation and the second mapping relation to obtain a panoramic all-around view image of the vehicle in the second scene.
Optionally, the mapping the second scene image into the second stereoscopic scene model based on the first mapping relationship and the second mapping relationship includes:
mapping a second sub-image in the second scene image into the second stereoscopic scene model based on the first mapping relation, wherein the second sub-image is a partial image corresponding to an unchanged scene in the second scene image, and the unchanged scene is a partial scene in which the second scene is unchanged relative to the first scene;
mapping the first sub-image in the second scene image into the second stereoscopic scene model based on the second mapping relationship.
Optionally, before determining the residual scene information based on the first scene information and the second scene information, the method further includes:
acquiring first scene information corresponding to a first scene where the vehicle is located;
constructing the first stereoscopic scene model based on the first scene information;
and determining a first spatial mapping relation between the first scene image and the actual scene based on the first scene information and the shooting parameters.
Optionally, after determining the first spatial mapping relationship between the first scene image and the actual scene based on the first scene information and the shooting parameters, the method further includes:
and mapping the first scene image to the first stereo scene model based on the first spatial mapping relation to obtain a panoramic all-around view image of the vehicle in the first scene.
In one aspect, an apparatus for generating a panoramic all-around image of a vehicle is provided, the apparatus comprising:
the first determining module is used for determining residual scene information based on the first scene information and the second scene information if the vehicle is detected to be switched from the first scene to the second scene;
wherein the residual scene information is used for indicating scene change of the second scene relative to the first scene, the first scene information comprises a first scene image around the vehicle, and the second scene information comprises a second scene image around the vehicle;
an updating module, configured to update a first stereoscopic scene model based on the residual scene information to obtain a second stereoscopic scene model, where the first stereoscopic scene model is used to represent the first scene in a three-dimensional space, and the second stereoscopic scene model is used to represent the second scene in the three-dimensional space;
a second determination module to determine a panoramic all-around view image of the vehicle in the second scene based on the second scene image and the second stereoscopic scene model.
Optionally, the first determining module is configured to:
determining second point cloud information based on the second scene information, wherein the second point cloud information is used for indicating a coordinate set of the second scene in a three-dimensional space;
determining a residual error between the second point cloud information and first point cloud information to obtain residual error point cloud information, wherein the first point cloud information is used for indicating a coordinate set of the first scene in a three-dimensional space;
and determining the residual point cloud information as the residual scene information.
Optionally, the update module includes:
the quantization unit is used for quantizing the residual scene information to obtain quantized residual scene information;
and the updating unit is used for updating the first stereo scene model based on the quantized residual scene information to obtain the second stereo scene model.
Optionally, the residual scene information is residual point cloud information, where the residual point cloud information is a residual between second point cloud information and first point cloud information, the first point cloud information is used to indicate a coordinate set of the first scene in a three-dimensional space, and the second point cloud information is used to indicate a coordinate set of the second scene in the three-dimensional space;
the quantization unit is used for quantizing the residual point cloud information to obtain quantized residual point cloud information;
and the updating unit is used for summing the quantized residual point cloud information and the first stereo scene model to obtain the second stereo scene model.
Optionally, the second determining module includes:
the acquiring unit is used for acquiring a first spatial mapping relation between the first scene image and an actual scene;
a determining unit, configured to determine, based on the residual scene information and the shooting parameters, a second spatial mapping relationship between a first sub-image and an actual scene, where the first sub-image is a partial image corresponding to a changed scene in the second scene image, and the changed scene is a partial scene in which the second scene changes with respect to the first scene;
and the mapping unit is used for mapping the second scene image to the second stereo scene model based on the first mapping relation and the second mapping relation to obtain the panoramic all-around image of the vehicle in the second scene.
Optionally, the mapping unit is configured to:
mapping a second sub-image in the second scene image into the second stereoscopic scene model based on the first mapping relation, wherein the second sub-image is a partial image corresponding to an unchanged scene in the second scene image, and the unchanged scene is a partial scene in which the second scene is unchanged relative to the first scene;
mapping the first sub-image in the second scene image into the second stereoscopic scene model based on the second mapping relationship.
Optionally, the apparatus further comprises:
the acquisition module is used for acquiring first scene information corresponding to a first scene where the vehicle is located;
a construction module configured to construct the first stereoscopic scene model based on the first scene information;
and the third determining module is used for determining a first spatial mapping relation between the first scene image and the actual scene based on the first scene information and the shooting parameters.
Optionally, the apparatus further comprises:
and the mapping module is used for mapping the first scene image to the first three-dimensional scene model based on the first spatial mapping relation to obtain a panoramic all-around view image of the vehicle in the first scene.
In one aspect, a vehicle panoramic look-around system is provided, the system comprising a sensing element, a processor, and a display, the sensing element comprising a plurality of cameras disposed about a vehicle;
the sensing element is used for collecting scene information of the scene where the vehicle is located, the scene information including a scene image of the vehicle's surroundings;
the processor is used for implementing the above method for generating a vehicle panoramic all-around view image;
the display is used for displaying the panoramic all-around view image of the vehicle.
Optionally, the sensing element further comprises a distance sensor disposed on the vehicle, the distance sensor comprising at least one of an optical distance sensor, an infrared distance sensor, and an ultrasonic distance sensor.
In one aspect, a non-transitory computer-readable storage medium is provided, storing instructions that, when executed by a processor of a device, enable the device to perform any one of the above methods for generating a vehicle panoramic all-around view image.
The technical scheme provided by the embodiment of the application has the following beneficial effects:
In the embodiments of the application, if it is detected that the vehicle has switched from the first scene to the second scene, residual scene information indicating the scene change of the second scene relative to the first scene may be determined based on the first scene information and the second scene information; the first stereo scene model corresponding to the first scene is then updated based on the residual scene information to obtain a second stereo scene model adapted to the second scene, and the panoramic all-around view image of the vehicle in the second scene is determined based on the second scene image and the second stereo scene model. That is, the stereo scene model can be adaptively updated according to dynamically changing scene information to obtain a dynamic stereo scene model that adapts to scene changes, and a panoramic all-around view image that accurately reflects the scene change can then be generated based on the changed scene information and the dynamic model. This effectively avoids the severe stretching deformation of objects that occurs when a panoramic all-around view image is generated based on a fixed stereo scene model. In addition, when the first stereo scene model is updated based on the residual scene information, only the changed part of the scene needs to be updated, not the unchanged part, which improves the efficiency of the model update and thus the efficiency of generating the panoramic all-around view image.
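The residual-based update summarized above (a residual between point-cloud representations of the two scenes, which is then quantized and summed with the first model) can be sketched as follows. This is an interpretation of the claim language, not the patented implementation: to make the residual and the sum well defined, the point clouds are voxelized into occupancy grids, and all names and parameter values are hypothetical.

```python
import numpy as np

def occupancy(points, bounds=(-10.0, 10.0), res=20):
    """Voxelize a point cloud into a binary occupancy grid (a stand-in
    for the coordinate set of a scene in three-dimensional space)."""
    lo, hi = bounds
    idx = ((points - lo) / (hi - lo) * res).astype(int)
    idx = np.clip(idx, 0, res - 1)
    grid = np.zeros((res, res, res), dtype=np.int8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid

first_cloud = np.array([[0.0, 0.0, 0.0], [5.0, 5.0, 1.0]])    # first scene
second_cloud = np.array([[0.0, 0.0, 0.0], [-5.0, 5.0, 2.0]])  # scene changed

model_1 = occupancy(first_cloud)                # first stereo scene model
residual = occupancy(second_cloud) - model_1    # residual scene information
model_2 = model_1 + residual                    # updated second model

# only changed voxels carry a non-zero residual, so the update touches
# just the changed part of the scene, as described above
assert np.array_equal(model_2, occupancy(second_cloud))
```

The unchanged voxel (shared by both clouds) contributes zero to the residual, which is what makes updating only the changed part of the scene possible.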
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; other drawings can be obtained by those skilled in the art based on these drawings without creative efforts.
Fig. 1 is a schematic view of a panoramic looking-around system provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of a panoramic looking-around system provided by an embodiment of the present application;
FIG. 3 is a flowchart of a method for generating a panoramic all-around image of a vehicle according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram of a stereo scene model reconstruction process provided in an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating comparison between before and after point cloud information quantization according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram of a panoramic view image generation process provided in an embodiment of the present application;
FIG. 7 is a block diagram of an apparatus for generating a panoramic all-around view image of a vehicle according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a device for generating a vehicle panoramic all-around image according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Before explaining the embodiments of the present application in detail, an application scenario of the embodiments of the present application will be described.
The method for generating the vehicle panoramic all-around image can be applied to scenes in which a driver needs to check the surrounding environment of the vehicle when driving the vehicle. For example, the method can be applied to special driving scenes such as reversing, mountain driving, curve driving or congestion driving, and certainly can also be applied to normal driving scenes, which are not limited in the embodiment of the present application.
For example, in narrow or congested urban areas, parking lots, and other scenes where collision and scratch incidents are prone to occur, the panoramic all-around view image of the vehicle can be generated by the method provided in the embodiments of the application and displayed to the driver in order to enlarge the driver's field of vision, so that the driver can perceive the 360-degree environment around the vehicle more clearly and intuitively, avoiding blind spots, collisions, and scratches.
Next, a brief description will be given of an implementation environment related to the embodiments of the present application.
Fig. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application, and referring to fig. 1, the implementation environment includes an on-board surround view system 100 installed on a vehicle. The vehicle looking around system 100 includes, but is not limited to, a perception element 110 and a vehicle looking around device 120, and the vehicle looking around device 120 may include a communication element 121, a processor 122, and a display 123.
The sensing element 110 may include, but is not limited to, cameras, optical sensors, infrared sensors, ultrasonic sensors, odometers, wheel pulse sensors, and the like. The sensing element 110 may acquire information such as a plurality of captured images of the vehicle's periphery, information on surrounding objects, the speed of the vehicle, or the wheel rotation angle of the vehicle. As an example, the optical sensor may be a laser sensor and the ultrasonic sensor may be a radar. Illustratively, referring to fig. 2, the sensing element 10 includes cameras such as a camera 11, a camera 12, a camera 13, and a camera 14 mounted on the periphery of the vehicle. Through these four cameras, the sensing element 10 can simultaneously capture four images of the vehicle's periphery.
The communication element 121 is configured to transmit scene information, including the plurality of captured images of the vehicle's periphery acquired by the sensing element 110, to the processor 122. Optionally, the communication element 121 may also transmit instruction information entered by the user to the processor 122. When the display 123 is a touch-sensitive resistive or capacitive display, the user may select the image to be displayed by clicking or sliding on the display interface, thereby triggering the instruction information.
After receiving the scene information, the processor 122 may generate a panoramic all-around view image of the vehicle according to the method provided in the embodiment of the present application. Illustratively, the processor 122 may be a general-purpose Central Processing Unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or may be one or more integrated circuits for controlling the execution of programs according to the present disclosure.
The display 123 is used to display the generated panoramic view image and the like. Illustratively, the display 123 may be a resistive display, a capacitive display, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, a Cathode Ray Tube (CRT) display, a projector (projector), or the like.
Next, a method for generating a vehicle panoramic all-around view image according to an embodiment of the present application will be described in detail. Fig. 3 is a flowchart of a method for generating a vehicle panoramic all-around view image according to an embodiment of the present application, which may be applied to the vehicle looking-around device shown in fig. 1. As shown in fig. 3, the method includes the following steps:
step 301: the method comprises the steps of obtaining first scene information corresponding to a first scene where a vehicle is located, wherein the first scene information comprises a first scene image around the vehicle.
The vehicle is a vehicle for which the panoramic all-around view image is to be generated, and may specifically be a car, a truck, or a bus. The first scene in which the vehicle is located refers to the environment around the vehicle.
The first scene information refers to scene information of a first scene, which indicates the first scene and includes at least a first scene image around the vehicle. The first scene image may be a look-around image of the vehicle, such as an image captured by a plurality of cameras disposed around the vehicle. In addition, the first scene information may further include object information collected by a distance sensor provided on the vehicle, and the distance and angle of an object around the vehicle may be acquired based on the object information.
In some examples, a plurality of cameras are installed around the vehicle, and the vehicle is further provided with a radar, and the vehicle-mounted looking-around device may acquire scene images and radar information around the vehicle, which are acquired by the plurality of cameras, and use the acquired scene images and radar information as the first scene information.
In some examples, the first context information of the vehicle may be acquired in real time, periodically, or when the information acquisition condition is satisfied. For example, the information acquisition condition may be that the vehicle is in a stationary state or a low-speed running state, or the like, or that the panoramic all-round function is detected to be activated, or the like.
Step 302: based on the first scene information, a first stereo scene model is constructed, and the first stereo scene is used for representing a first scene in a three-dimensional space.
Based on the acquired first scene information, a first stereo scene model corresponding to the first scene may be constructed first.
In some examples, first point cloud information may be determined based on first scene information, the first point cloud information referring to point cloud information of a first scene, and then a first stereoscopic scene model may be constructed based on the first point cloud information. The first point cloud information is used for indicating a coordinate set of the first scene in the three-dimensional space, namely indicating a coordinate set of the first stereo scene model, so that the first stereo scene model can be constructed and obtained based on the first point cloud information.
In one possible implementation, the first point cloud information may be recovered from the first scene information by a structure-from-motion technique. Of course, the first point cloud information may also be determined in other ways, which is not limited in the embodiments of the present application.
In some examples, after the first point cloud information is obtained, it may be quantized to obtain quantized first point cloud information, and the first stereo scene model is then constructed based on the quantized first point cloud information. Quantizing the first point cloud information reduces the point cloud density corresponding to the first scene and filters out singular points, making the point cloud distribution more uniform. In one possible implementation, the first point cloud information may be quantized by smooth sampling and similar processing.
For example, any coordinate point included in the first point cloud information generally includes coordinate values in three dimensions, such as x-axis, y-axis, and z-axis values; when a coordinate point is quantized, the coordinate value of each dimension is quantized. For each dimension, a correspondence between coordinate values and quantized coordinate values may be set in advance, and the quantized value for that dimension is then determined from the dimension's coordinate value and the correspondence. The correspondence includes a plurality of coordinate intervals and a quantized coordinate value for each interval, where the quantized value for an interval is usually a specific value within it, for example the value at the start, middle, or end of the interval, or the mean of the interval.
For example, for any coordinate point a(x, y, z) in the first point cloud information, the coordinate value of each dimension may be processed by the following formulas to obtain the quantized x-axis, y-axis, and z-axis coordinate values respectively:
Quantization of the x-axis coordinate:

X = X_i, if x_thi ≤ x < x_th(i+1), i = 0, 1, …, n−1

where x is the x-axis coordinate value before quantization; x_th0, x_th1, x_th2, …, x_thn are n+1 preset thresholds for the x axis that divide the x-axis coordinate into a plurality of coordinate intervals; X_i is the quantized coordinate value corresponding to the i-th interval; and X is the quantized x-axis coordinate value.
Quantization of the y-axis coordinate:

Y = Y_i, if y_thi ≤ y < y_th(i+1), i = 0, 1, …, n−1

where y is the y-axis coordinate value before quantization; y_th0, y_th1, y_th2, …, y_thn are n+1 preset thresholds for the y axis that divide the y-axis coordinate into a plurality of coordinate intervals; Y_i is the quantized coordinate value corresponding to the i-th interval; and Y is the quantized y-axis coordinate value.
Quantization of the z-axis coordinate:

Z = Z_i, if z_thi ≤ z < z_th(i+1), i = 0, 1, …, n−1

where z is the z-axis coordinate value before quantization; z_th0, z_th1, z_th2, …, z_thn are n+1 preset thresholds for the z axis that divide the z-axis coordinate into a plurality of coordinate intervals; Z_i is the quantized coordinate value corresponding to the i-th interval; and Z is the quantized z-axis coordinate value.
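The per-axis, threshold-based quantization described above can be sketched as follows. The thresholds and the choice of the interval midpoint as the representative quantized value are assumptions for the example; any specific value within the interval would do, per the description:

```python
import numpy as np

def quantize_axis(values, thresholds):
    """Map each coordinate value to the midpoint of the interval
    [th_i, th_{i+1}) it falls into; thresholds are the n+1 presets."""
    thresholds = np.asarray(thresholds, dtype=float)
    mids = (thresholds[:-1] + thresholds[1:]) / 2.0
    # np.digitize returns, for each value, the index of the first
    # threshold greater than it; subtracting 1 gives the interval index.
    idx = np.digitize(values, thresholds) - 1
    idx = np.clip(idx, 0, len(mids) - 1)  # clamp out-of-range values
    return mids[idx]

# Illustrative thresholds: 0.5 m bins from -2 m to 2 m on each axis.
th = np.arange(-2.0, 2.5, 0.5)
cloud = np.array([[0.31, -1.4, 0.9],
                  [0.29, -1.2, 1.1]])
quantized = np.column_stack([quantize_axis(cloud[:, d], th) for d in range(3)])
print(quantized)
```

Note how the two nearby points end up sharing the same quantized x and y values: this is the density reduction and smoothing effect the description attributes to quantization.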
Step 303: determine a first spatial mapping relationship between the first scene image and the actual scene based on the first scene information and the imaging parameters.
The first spatial mapping relationship is the spatial mapping relationship between the first scene image and the actual scene, and indicates the position in actual scene space to which a point in the first scene image is mapped. The imaging parameters describe the imaging principle between the actual scene and a scene image; based on this principle, the first spatial mapping relationship between the first scene image and the actual scene can be determined. For example, the imaging parameters may be the parameters of the camera that captured the first scene image.
In some examples, the imaging parameters include an internal parameter and an external parameter, the internal parameter is used to indicate a spatial mapping relationship between an image space and a camera space, and the external parameter is used to indicate a spatial mapping relationship between the camera space and an actual space, so that the first spatial mapping relationship may be obtained based on the first scene information, the internal parameter, and the external parameter.
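The chaining of the internal and external parameters described above can be sketched as follows. The intrinsic matrix K, rotation R, and translation t are hypothetical values for illustration, not calibration data from the application:

```python
import numpy as np

# Hypothetical intrinsic matrix K: focal length in pixels, principal point.
K = np.array([[700.0, 0.0, 640.0],
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])
# Hypothetical extrinsics: camera axes aligned with the world,
# mounted 1.2 m above the ground.
R = np.eye(3)
t = np.array([0.0, -1.2, 0.0])

def world_to_pixel(Xw):
    """Chain the two mappings: actual space -> camera space (extrinsics),
    then camera space -> image space (intrinsics)."""
    Xc = R @ Xw + t          # external parameter: world to camera coordinates
    uvw = K @ Xc             # internal parameter: camera to homogeneous pixel
    return uvw[:2] / uvw[2]  # perspective division

uv = world_to_pixel(np.array([0.6, 1.2, 3.0]))
print(uv)  # pixel location of the world point
```

The spatial mapping relationship between a scene image and the actual scene is the inverse direction of this chain, resolved per pixel against the stereo scene model's surface.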
In some examples, after step 303, the first scene image may be further mapped into the first stereoscopic scene model based on the first spatial mapping relationship, resulting in a panoramic all-around view image of the vehicle in the first scene.
Step 304: if it is detected that the vehicle has switched from the first scene to a second scene, determine residual scene information based on the first scene information and second scene information, where the residual scene information indicates the scene change of the second scene relative to the first scene, and the second scene information includes a second scene image around the vehicle.
In the embodiments of this application, when a change in the scene where the vehicle is located is detected, the changed scene information can be obtained, the residual scene information between the changed scene information and the scene information before the change can be calculated, and the first stereo scene model can then be updated based on the residual scene information to obtain a second stereo scene model that adapts to the changed scene.
In some examples, the second point cloud information may be determined based on the second scene information, and then residual point cloud information between the second point cloud information and the first point cloud information may be determined, the residual point cloud information being determined as residual scene information. The first point cloud information is used for indicating a coordinate set of a first scene in a three-dimensional space, and the second point cloud information is used for indicating a coordinate set of a second scene in the three-dimensional space.
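One plausible realization of the residual between two quantized point clouds is a set difference of their coordinate sets. The patent text does not fix the exact residual operator, so the following is only an illustrative interpretation, assuming both clouds were quantized onto the same grid beforehand:

```python
import numpy as np

def residual_cloud(first_pts, second_pts):
    """Residual between two quantized point clouds: coordinates present
    in the second scene's cloud but not in the first's (and vice versa)."""
    first = {tuple(p) for p in np.round(first_pts, 6)}
    second = {tuple(p) for p in np.round(second_pts, 6)}
    added = np.array(sorted(second - first))    # new scene structure
    removed = np.array(sorted(first - second))  # structure that disappeared
    return added, removed

first = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0], [0.5, 0.5, 0.0]])
second = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0], [1.0, 0.5, 0.5]])
added, removed = residual_cloud(first, second)
print(len(added), len(removed))  # → 1 1
```

Because only the differing coordinates appear in the residual, updating the first stereo scene model with it touches just the changed part of the scene, which is the efficiency argument made in step 305 below.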
Step 305: update the first stereo scene model based on the residual scene information to obtain a second stereo scene model, where the first stereo scene model is used to represent the first scene in three-dimensional space, and the second stereo scene model is used to represent the second scene in three-dimensional space.
Updating the model based on residual scene information has the advantage that when the scene changes only slightly, only the small changed part of the stereo scene model needs to be updated, while the unchanged part is left as is. This speeds up model updating and thus improves the generation efficiency of the panoramic all-around view image.
In some examples, the residual scene information may be quantized to obtain quantized residual scene information, and then the first stereo scene model is updated based on the quantized residual scene information to obtain the second stereo scene model. By quantizing the residual scene information, singular scene information can be filtered out, so that the residual scene information is smoother and more uniform.
In a possible implementation manner, if the residual scene information is residual point cloud information, the residual point cloud information may be quantized first to obtain quantized residual point cloud information, and then the quantized residual point cloud information is summed with the first stereo scene model to obtain the second stereo scene model. As an example, the reconstruction process of the second stereo scene model may be as shown in fig. 4.
The implementation manner of quantizing the residual point cloud information to obtain quantized residual point cloud information is the same as the implementation manner of quantizing the first point cloud information to obtain quantized first point cloud information, and the specific process may refer to the above description, which is not repeated herein.
For example, referring to fig. 5, for point clouds ①, ②, and ③ in the residual point cloud information, the positions before and after quantization may be as shown in fig. 5. As can be seen from fig. 5, point clouds ① and ③ before quantization deviate markedly in position from the other point clouds, and quantization reduces this deviation, making the residual point cloud information smoother and more uniform.
Step 306: determine a panoramic all-around view image of the vehicle in the second scene based on the second scene image and the second stereo scene model.
In some examples, a first spatial mapping relationship between a first scene image and an actual scene may be obtained, then a second spatial mapping relationship between a first sub-image and the actual scene may be determined based on residual scene information and imaging parameters, and a second scene image may be mapped into a second stereoscopic scene model based on the first mapping relationship and the second mapping relationship, so as to obtain a panoramic all-around image of a vehicle in a second scene. The first sub-image is a partial image corresponding to a changed scene in the second scene image, and the changed scene is a partial scene in which the second scene changes relative to the first scene.
In some examples, mapping the second scene image into the second stereoscopic scene model based on the first mapping relationship and the second mapping relationship may include: and mapping a second sub-image in the second scene image into a second stereo scene model based on the first mapping relation, and mapping a first sub-image in the second scene image into the second stereo scene model based on the second mapping relation. The second sub-image is a partial image corresponding to an unchanged scene in the second scene image, and the unchanged scene is a partial scene in which the second scene is unchanged relative to the first scene.
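The two-mapping rendering just described can be sketched with per-pixel lookup tables standing in for the first and second spatial mapping relationships. The toy image, the two mappings, and the changed-region mask are all assumptions for illustration:

```python
import numpy as np

def render_with_two_mappings(scene_img, map_old, map_new, changed_mask):
    """Render an output image by sampling the scene image through two
    per-pixel lookup tables: map_old for the unchanged region and
    map_new for the changed region. Each map holds (row, col) sources."""
    maps = np.where(changed_mask[..., None], map_new, map_old)
    rows, cols = maps[..., 0], maps[..., 1]
    return scene_img[rows, cols]

# Toy 4x4 "scene image" and two mappings: identity vs. a one-pixel shift.
img = np.arange(16).reshape(4, 4)
rr, cc = np.meshgrid(np.arange(4), np.arange(4), indexing="ij")
map_old = np.stack([rr, cc], axis=-1)                     # original mapping
map_new = np.stack([rr, np.clip(cc + 1, 0, 3)], axis=-1)  # recomputed mapping
mask = np.zeros((4, 4), dtype=bool)
mask[:, :2] = True  # left half is the "changed scene" region

out = render_with_two_mappings(img, map_old, map_new, mask)
print(out[0])  # first row mixes the two mappings
```

Only `map_new` for the masked region had to be recomputed; the rest of the output reuses the original lookup table, mirroring the efficiency claim in the following paragraph.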
Therefore, after the stereo scene model is reconstructed, only the mapping relationship between the reconstructed part and the scene image needs to be recalculated before rendering the panoramic all-around view image according to the new mapping relationship; the non-reconstructed part is still rendered using the original mapping relationship. As a result, when the scene changes, stereo scene model reconstruction and panoramic all-around view image rendering can both be performed quickly.
As an example, during a dynamic scene change, the generation process of the vehicle's panoramic all-around view image may be as shown in fig. 6. That is, while the scene changes dynamically, structure-from-motion may be applied to the all-around view image of the current scene acquired by the cameras to obtain point cloud information of the current scene; the stereo scene model of the scene before the change is then reconstructed based on that point cloud information; and the panoramic all-around view image of the vehicle in the current scene is determined based on the all-around view image of the current scene and the reconstructed stereo scene model.
In the embodiments of this application, if it is detected that the vehicle has switched from the first scene to the second scene, residual scene information indicating the scene change of the second scene relative to the first scene may be determined based on the first scene information and the second scene information; the first stereo scene model corresponding to the first scene is then updated based on the residual scene information to obtain a second stereo scene model adapted to the second scene; and the panoramic all-around view image of the vehicle in the second scene is determined based on the second scene image and the second stereo scene model. That is, the stereo scene model can be adaptively updated according to dynamically changing scene information to obtain a dynamic stereo scene model that adapts to scene changes, and a panoramic all-around view image that accurately reflects the scene change can then be generated based on the changed scene information and the dynamic stereo scene model, effectively avoiding the severe stretching and deformation of objects in the image that result from generating the panoramic all-around view image from a fixed three-dimensional scene model. In addition, when the first stereo scene model is updated based on the residual scene information, only the changed part of the scene needs to be updated and the unchanged part does not, which improves model updating efficiency and thus the generation efficiency of the panoramic all-around view image.
Fig. 7 is a block diagram of an apparatus for generating a panoramic all-around image of a vehicle according to an embodiment of the present application, and as shown in fig. 7, the apparatus includes: a first determining module 701, an updating module 702 and a second determining module 703.
A first determining module 701, configured to determine, if it is detected that the vehicle is switched from a first scene to a second scene, residual scene information based on the first scene information and the second scene information;
wherein the residual scene information is used for indicating scene change of the second scene relative to the first scene, the first scene information includes a first scene image around the vehicle, and the second scene information includes a second scene image around the vehicle;
an updating module 702, configured to update a first stereo scene model based on the residual scene information to obtain a second stereo scene model, where the first stereo scene model is used to represent the first scene in a three-dimensional space, and the second stereo scene model is used to represent the second scene in the three-dimensional space;
a second determining module 703, configured to determine a panoramic all-around view image of the vehicle in the second scene based on the second scene image and the second stereoscopic scene model.
Optionally, the first determining module 701 is configured to:
determining second point cloud information based on the second scene information, wherein the second point cloud information is used for indicating a coordinate set of the second scene in a three-dimensional space;
determining a residual error between the second point cloud information and first point cloud information to obtain residual error point cloud information, wherein the first point cloud information is used for indicating a coordinate set of the first scene in a three-dimensional space;
and determining the residual point cloud information as the residual scene information.
Optionally, the update module 702 includes:
the quantization unit is used for quantizing the residual scene information to obtain quantized residual scene information;
and the updating unit is used for updating the first stereo scene model based on the quantized residual scene information to obtain the second stereo scene model.
Optionally, the residual scene information is residual point cloud information, where the residual point cloud information is a residual between second point cloud information and first point cloud information, the first point cloud information is used to indicate a coordinate set of the first scene in a three-dimensional space, and the second point cloud information is used to indicate a coordinate set of the second scene in the three-dimensional space;
the quantization unit is used for quantizing the residual point cloud information to obtain quantized residual point cloud information;
the updating unit is used for summing the quantized residual point cloud information and the first stereo scene model to obtain the second stereo scene model.
Optionally, the second determining module 703 includes:
the acquiring unit is used for acquiring a first spatial mapping relation between the first scene image and an actual scene;
a determining unit, configured to determine, based on the residual scene information and the imaging parameter, a second spatial mapping relationship between a first sub-image and an actual scene, where the first sub-image is a partial image corresponding to a changed scene in the second scene image, and the changed scene is a partial scene in which the second scene changes with respect to the first scene;
and the mapping unit is used for mapping the second scene image to the second three-dimensional scene model based on the first mapping relation and the second mapping relation to obtain the panoramic all-around image of the vehicle in the second scene.
Optionally, the mapping unit is configured to:
mapping a second sub-image in the second scene image into the second stereoscopic scene model based on the first mapping relation, wherein the second sub-image is a partial image corresponding to an unchanged scene in the second scene image, and the unchanged scene is a partial scene in which the second scene is unchanged relative to the first scene;
and mapping the first sub-image in the second scene image into the second stereo scene model based on the second mapping relation.
Optionally, the apparatus further comprises:
the acquisition module is used for acquiring first scene information corresponding to a first scene where the vehicle is located;
a construction module for constructing the first stereoscopic scene model based on the first scene information;
and the third determining module is used for determining a first spatial mapping relation between the first scene image and the actual scene based on the first scene information and the shooting parameters.
Optionally, the apparatus further comprises:
and the mapping module is used for mapping the first scene image to the first three-dimensional scene model based on the first spatial mapping relation to obtain a panoramic all-around view image of the vehicle in the first scene.
In the embodiments of this application, if it is detected that the vehicle has switched from the first scene to the second scene, residual scene information indicating the scene change of the second scene relative to the first scene may be determined based on the first scene information and the second scene information; the first stereo scene model corresponding to the first scene is then updated based on the residual scene information to obtain a second stereo scene model adapted to the second scene; and the panoramic all-around view image of the vehicle in the second scene is determined based on the second scene image and the second stereo scene model. That is, the stereo scene model can be adaptively updated according to dynamically changing scene information to obtain a dynamic stereo scene model that adapts to scene changes, and a panoramic all-around view image that accurately reflects the scene change can then be generated based on the changed scene information and the dynamic stereo scene model, effectively avoiding the severe stretching and deformation of objects in the image that result from generating the panoramic all-around view image from a fixed three-dimensional scene model. In addition, when the first stereo scene model is updated based on the residual scene information, only the changed part of the scene needs to be updated and the unchanged part does not, which improves model updating efficiency and thus the generation efficiency of the panoramic all-around view image.
It should be noted that the apparatus for generating a vehicle panoramic all-around view image provided in the above embodiment is illustrated by the division of the above functional modules; in practical applications, the above functions may be distributed among different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus for generating a vehicle panoramic all-around view image and the method for generating a vehicle panoramic all-around view image provided by the embodiments belong to the same concept; their specific implementation process is detailed in the method embodiments and is not repeated here.
Fig. 8 is a schematic structural diagram of a device 800 for generating a vehicle panoramic all-around view image. The device may vary considerably in configuration or performance, and may include one or more processors (CPUs) 801 and one or more memories 802, where the memory 802 stores at least one instruction that is loaded and executed by the processor 801. Of course, the device 800 may further include a wired or wireless network interface, a keyboard, an input/output interface, and other components to facilitate input and output, as well as other components for implementing the functions of the device, which are not described here again.
In an exemplary embodiment, there is also provided a computer readable storage medium, such as a memory, comprising instructions executable by a processor in a vehicle panoramic all-around image generation apparatus to perform the vehicle panoramic all-around image generation method in the above embodiments. For example, the computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting the present application, and any modifications, equivalents, improvements and the like that are made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (12)

1. A method for generating a panoramic all-round view image of a vehicle, the method comprising:
determining, if it is detected that the vehicle is switched from a first scene to a second scene, residual scene information based on first scene information and second scene information;
wherein the residual scene information is used for indicating scene change of the second scene relative to the first scene, the first scene information comprises a first scene image around the vehicle, and the second scene information comprises a second scene image around the vehicle;
updating a first stereo scene model based on the residual scene information to obtain a second stereo scene model, wherein the first stereo scene model is used for representing the first scene in a three-dimensional space, and the second stereo scene model is used for representing the second scene in the three-dimensional space;
determining a panoramic all-around view image of the vehicle in the second scene based on the second scene image and the second stereoscopic scene model.
2. The method of claim 1, wherein determining residual scene information based on the first scene information and the second scene information comprises:
determining second point cloud information based on the second scene information, wherein the second point cloud information is used for indicating a coordinate set of the second scene in a three-dimensional space;
determining a residual error between the second point cloud information and first point cloud information to obtain residual error point cloud information, wherein the first point cloud information is used for indicating a coordinate set of the first scene in a three-dimensional space;
and determining the residual point cloud information as the residual scene information.
3. The method of claim 1, wherein said updating the first stereoscopic scene model based on the residual scene information to obtain the second stereoscopic scene model comprises:
quantizing the residual scene information to obtain quantized residual scene information;
and updating the first stereo scene model based on the quantized residual scene information to obtain the second stereo scene model.
4. The method of claim 3, wherein the residual scene information is residual point cloud information, the residual point cloud information being a residual between second point cloud information and first point cloud information, the first point cloud information indicating a set of coordinates of the first scene in three-dimensional space, the second point cloud information indicating a set of coordinates of the second scene in three-dimensional space;
the quantizing the residual scene information to obtain quantized residual scene information includes:
quantizing the residual point cloud information to obtain quantized residual point cloud information;
the updating the first stereo scene model based on the quantized residual scene information to obtain the second stereo scene model includes:
and summing the quantized residual point cloud information and the first stereo scene model to obtain the second stereo scene model.
5. The method of claim 1, wherein said determining a panoramic all-around image of the vehicle in the second scene based on the second scene image and the second stereoscopic scene model comprises:
acquiring a first spatial mapping relation between the first scene image and an actual scene;
determining a second spatial mapping relation between a first sub-image and an actual scene based on the residual scene information and the shooting parameters, wherein the first sub-image is a partial image corresponding to a changed scene in the second scene image, and the changed scene is a partial scene of the second scene changed relative to the first scene;
and mapping the second scene image into the second stereo scene model based on the first mapping relation and the second mapping relation to obtain a panoramic all-around view image of the vehicle in the second scene.
6. The method of claim 5, wherein said mapping the second scene image into the second stereoscopic scene model based on the first mapping relationship and the second mapping relationship comprises:
mapping a second sub-image in the second scene image into the second stereoscopic scene model based on the first mapping relation, wherein the second sub-image is a partial image corresponding to an unchanged scene in the second scene image, and the unchanged scene is a partial scene in which the second scene is unchanged relative to the first scene;
mapping the first sub-image in the second scene image into the second stereoscopic scene model based on the second mapping relationship.
7. The method of claim 1, wherein prior to determining the residual scene information based on the first scene information and the second scene information, further comprising:
acquiring first scene information corresponding to a first scene where the vehicle is located;
constructing the first stereoscopic scene model based on the first scene information;
and determining a first spatial mapping relation between the first scene image and the actual scene based on the first scene information and the shooting parameters.
8. The method of claim 7, wherein after determining the first spatial mapping relationship between the first scene image and the actual scene based on the first scene information and the imaging parameters, further comprising:
and mapping the first scene image to the first stereo scene model based on the first spatial mapping relation to obtain a panoramic all-around view image of the vehicle in the first scene.
9. An apparatus for generating a panoramic all-round image for a vehicle, the apparatus comprising:
a first determining module, configured to determine, if it is detected that the vehicle is switched from a first scene to a second scene, residual scene information based on first scene information and second scene information;
wherein the residual scene information is used for indicating scene change of the second scene relative to the first scene, the first scene information comprises a first scene image around the vehicle, and the second scene information comprises a second scene image around the vehicle;
an updating module, configured to update a first stereo scene model based on the residual scene information to obtain a second stereo scene model, wherein the first stereo scene model is used for representing the first scene in a three-dimensional space, and the second stereo scene model is used for representing the second scene in the three-dimensional space; and
a second determining module, configured to determine a panoramic all-around view image of the vehicle in the second scene based on the second scene image and the second stereo scene model.
10. An apparatus for generating a panoramic all-round image for a vehicle, the apparatus comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of any of the methods of claims 1-8.
11. An on-board look-around system comprising a sensing element, a processor and a display, the sensing element comprising a plurality of cameras disposed about a vehicle;
the perception element is used for acquiring scene information of a scene where the vehicle is located, and the scene information comprises a scene image around the vehicle;
the processor is used for realizing the generation method of the vehicle panoramic all-round view image as claimed in any one of claims 1 to 8;
the display is used for displaying the panoramic all-around view image of the vehicle.
12. A non-transitory computer readable storage medium having stored thereon instructions, wherein the instructions when executed by a processor implement the steps of the method for generating a panoramic all-around image for a vehicle of any one of claims 1-8.
CN201910616903.5A 2019-07-09 2019-07-09 Method, device and system for generating panoramic looking-around image of vehicle and storage medium Active CN112215033B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910616903.5A CN112215033B (en) 2019-07-09 2019-07-09 Method, device and system for generating panoramic looking-around image of vehicle and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910616903.5A CN112215033B (en) 2019-07-09 2019-07-09 Method, device and system for generating panoramic looking-around image of vehicle and storage medium

Publications (2)

Publication Number Publication Date
CN112215033A true CN112215033A (en) 2021-01-12
CN112215033B CN112215033B (en) 2023-09-01

Family

ID=74047372

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910616903.5A Active CN112215033B (en) 2019-07-09 2019-07-09 Method, device and system for generating panoramic looking-around image of vehicle and storage medium

Country Status (1)

Country Link
CN (1) CN112215033B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009144994A1 (en) * 2008-05-29 2009-12-03 富士通株式会社 Vehicle image processor, and vehicle image processing system
US20100085358A1 (en) * 2008-10-08 2010-04-08 Strider Labs, Inc. System and method for constructing a 3D scene model from an image
CN106355546A (en) * 2015-07-13 2017-01-25 比亚迪股份有限公司 Vehicle panorama generating method and apparatus
CN106875467A (en) * 2015-12-11 2017-06-20 中国科学院深圳先进技术研究院 D Urban model Rapid Updating
WO2018121333A1 (en) * 2016-12-30 2018-07-05 艾迪普(北京)文化科技股份有限公司 Real-time generation method for 360-degree vr panoramic graphic image and video
CN109685891A (en) * 2018-12-28 2019-04-26 鸿视线科技(北京)有限公司 3 d modeling of building and virtual scene based on depth image generate system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009144994A1 (en) * 2008-05-29 2009-12-03 富士通株式会社 Vehicle image processor, and vehicle image processing system
US20100085358A1 (en) * 2008-10-08 2010-04-08 Strider Labs, Inc. System and method for constructing a 3D scene model from an image
CN106355546A (en) * 2015-07-13 2017-01-25 比亚迪股份有限公司 Vehicle panorama generating method and apparatus
CN106875467A (en) * 2015-12-11 2017-06-20 中国科学院深圳先进技术研究院 D Urban model Rapid Updating
WO2018121333A1 (en) * 2016-12-30 2018-07-05 艾迪普(北京)文化科技股份有限公司 Real-time generation method for 360-degree vr panoramic graphic image and video
CN109685891A (en) * 2018-12-28 2019-04-26 鸿视线科技(北京)有限公司 3 d modeling of building and virtual scene based on depth image generate system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
TOMOYUKI MUKASA等: "3D Scene Mesh from CNN Depth Predictions and Sparse Monocular SLAM", 2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW), pages 912 - 919 *
LIU Dong; QIN Rui; CHEN Xi; LI Qing: "3D vehicle-mounted surround-view panorama generation method", Computer Science, no. 04, pages 302-305 *
XIAO Fu et al.: "Panorama stitching based on illumination adjustment and feature curves", Journal of Engineering Graphics, no. 1, pages 35-38 *

Also Published As

Publication number Publication date
CN112215033B (en) 2023-09-01

Similar Documents

Publication Publication Date Title
CN106462996B (en) Method and device for displaying vehicle surrounding environment without distortion
CN107389088B (en) Error correction method, device, medium and equipment for vehicle-mounted inertial navigation
JP2019096072A (en) Object detection device, object detection method and program
CN111046762A (en) Object positioning method, device electronic equipment and storage medium
JP6891954B2 (en) Object detection device, object detection method, and program
CN114111568B (en) Method and device for determining appearance size of dynamic target, medium and electronic equipment
JP7107931B2 (en) Method and apparatus for estimating range of moving objects
JP2011155393A (en) Device and method for displaying image of vehicle surroundings
CN115867940A (en) Monocular depth surveillance from 3D bounding boxes
CN112215747A (en) Method and device for generating vehicle-mounted panoramic picture without vehicle bottom blind area and storage medium
JP2020126432A (en) Image processing system and image processing method
CN111160070A (en) Vehicle panoramic image blind area eliminating method and device, storage medium and terminal equipment
CN112215033B (en) Method, device and system for generating panoramic looking-around image of vehicle and storage medium
CN114821544B (en) Perception information generation method and device, vehicle, electronic equipment and storage medium
JP2003009141A (en) Processing device for image around vehicle and recording medium
CN114312577B (en) Vehicle chassis perspective method and device and electronic equipment
CN114648639B (en) Target vehicle detection method, system and device
CN115222815A (en) Obstacle distance detection method, obstacle distance detection device, computer device, and storage medium
CN114742726A (en) Blind area detection method and device, electronic equipment and storage medium
JP2024515761A (en) Data-driven dynamically reconstructed disparity maps
JP7169689B2 (en) Measurement system, measurement method, and measurement program
CN114219895A (en) Three-dimensional visual image construction method and device
CN115861316B (en) Training method and device for pedestrian detection model and pedestrian detection method
WO2020246202A1 (en) Measurement system, measurement method, and measurement program
CN116563817B (en) Obstacle information generation method, obstacle information generation device, electronic device, and computer-readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant