WO2022222121A1 - Panoramic image generation method, vehicle-mounted image processing apparatus, and vehicle - Google Patents

Panoramic image generation method, vehicle-mounted image processing apparatus, and vehicle

Info

Publication number: WO2022222121A1
Application number: PCT/CN2021/089132
Authority: WIPO (PCT)
Prior art keywords: image, vehicle, point, coordinate, point cloud
Other languages: French (fr), Chinese (zh)
Inventors: 陈晓丽, 张峻豪, 黄为, 王笑悦
Original Assignee: Huawei Technologies Co., Ltd. (华为技术有限公司)
Application filed by Huawei Technologies Co., Ltd.
Priority to PCT/CN2021/089132 (WO2022222121A1)
Priority to CN202180001139.6A (CN113302648B)
Publication of WO2022222121A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038: Image mosaicing, e.g. composing plane images from plane sub-images

Definitions

  • the present application relates to the technical field of vehicle-mounted surround view, and in particular, to a method for generating panoramic images, a vehicle-mounted image processing device, and a vehicle.
  • the vehicle surround view system uses cameras installed around the vehicle to reconstruct the vehicle and surrounding scenes, and performs perspective transformation and image stitching on the captured images to generate 3D panoramic images.
  • the vehicle-mounted surround view system first calibrates the installed 4-channel fisheye cameras; after the internal and external parameters of the cameras are obtained, a three-dimensional bowl-shaped fixed model (as shown in FIG. 1A) is constructed. The internal and external parameters are used to map the coordinates on the model to pixel coordinates, and the corresponding pixels in the fisheye images are then mapped onto the bowl-shaped fixed model to obtain a 3D panoramic image.
  • the current 3D panoramic image is generated based on a 3D fixed model, which is a simulated embodiment of real objects around the vehicle.
  • Embodiments of the present application provide a method for generating a panoramic image, a vehicle-mounted image processing device, and a vehicle, which are used to eliminate stitching ghosts and dislocations in the panoramic image, and obtain a panoramic image consistent with the actual environment around the vehicle.
  • an embodiment of the present application provides a method for generating a panoramic image, which is applied to a vehicle-mounted image processing device. The method includes: the vehicle-mounted image processing device obtains image information and depth information of objects around the vehicle, where the depth information is used to indicate the coordinate point information of each point on the objects around the vehicle; the vehicle-mounted image processing device obtains an initial model, for example, a bowl-shaped 3D model or a cylindrical 3D model; the vehicle-mounted image processing device adjusts the first coordinate point on the initial model to the second coordinate point according to the depth information and generates the first model; the vehicle-mounted image processing device adjusts the position of the first coordinate point in real time according to the depth information of the objects around the vehicle.
  • the first model is a model obtained according to the actual distance between the objects around the vehicle and the vehicle.
  • the shape of the first model may be an irregular shape, and the shape of the first model varies with the distance between the vehicle and objects around the vehicle.
  • the in-vehicle image processing apparatus acquires a panoramic image based on the image information and the first model.
  • the vehicle-mounted image processing device introduces depth information when creating a 3D model, and the vehicle-mounted image processing device adjusts the coordinate points on the initial model according to the depth information to obtain the first model.
  • the in-vehicle image processing device generates a virtual first model according to the objects around the vehicle in the real world, and generates a virtual vehicle model corresponding to the real-world vehicle; that is, when the distance between the objects around the vehicle and the vehicle changes, the distance between the vehicle model and the coordinate points corresponding to the objects on the first model will also change, eliminating stitching ghosts and dislocations in the panoramic image, so as to obtain a 3D panoramic image that is consistent with the actual environment around the vehicle.
  • the adjusting of the first coordinate point on the initial model to the second coordinate point according to the depth information to generate the first model may include: first, the vehicle-mounted image processing device converts the pixel points in the image information into the first point cloud in the camera coordinate system according to the pixel points and the depth information corresponding to the pixel points; then, the vehicle-mounted image processing device converts the first point cloud in the camera coordinate system into the second point cloud in the world coordinate system; finally, the vehicle-mounted image processing device adjusts the first coordinate point to the second coordinate point through the coordinate points in the second point cloud to generate the first model, where the second coordinate point is obtained according to the coordinate points in the second point cloud.
  • the image information includes images collected by multiple image sensors, and after the first point clouds in the camera coordinate system are converted into the second point clouds in the world coordinate system, the method further includes: the on-board image processing device splices the multiple second point clouds, in the world coordinate system, of the multiple images collected by the multiple image sensors to obtain the target point cloud; the coordinate points on the target point cloud correspond to the actual distance between the objects around the vehicle and the image sensors.
  • the vehicle-mounted image processing apparatus adjusting the first coordinate point to the second coordinate point through the coordinate points in the second point cloud may include: first, the vehicle-mounted image processing apparatus determines a plurality of third coordinate points within the neighborhood range of the first coordinate point, where a third coordinate point is a coordinate point on the target point cloud; then, the vehicle-mounted image processing device determines the second coordinate point according to the plurality of third coordinate points, that is, the second coordinate point is obtained from the plurality of third coordinate points within the neighborhood range of the first coordinate point; finally, the vehicle-mounted image processing device adjusts the first coordinate point to the second coordinate point.
  • the target point cloud is a point cloud obtained based on the actual distance between the objects around the vehicle and the vehicle; the vehicle-mounted image processing device determines, among the large number of scattered points in the target point cloud, multiple third coordinate points within the neighborhood range of the first coordinate point on the initial model, then determines the second coordinate point according to the multiple third coordinate points, and adjusts the point on the initial model to the second coordinate point, so that the first model is accurately reconstructed according to the actual distance.
  • the splicing of the multiple second point clouds, in the world coordinate system, of the multiple images collected by the multiple image sensors to obtain the target point cloud may include: first, the on-board image processing device matches the overlapping areas of the first image and the second image collected by two adjacent image sensors to obtain a rotation matrix and a translation matrix for the transformation between the point cloud of the first image and the point cloud of the second image; then, the vehicle-mounted image processing device transforms the point cloud of the second image using the rotation matrix and the translation matrix, and splices the transformed point cloud of the second image with the point cloud of the first image.
  • because the poses of two adjacent cameras differ, the images collected by the two image sensors will have slight differences in angle and orientation; by matching the overlapping area (the same scene) of the images collected by the two image sensors, the difference in angle and orientation can be found and balanced by rotation and translation. The point clouds of the images collected by the multiple image sensors can then be spliced to obtain a whole target point cloud, and the first coordinate point on the initial model is adjusted through the points on the target point cloud, so that the 3D model can be accurately reconstructed.
  • the method further includes: the vehicle-mounted image processing device performs interpolation and smoothing processing on the first model to obtain the second model; further, the vehicle-mounted image processing device performs texture mapping on the second model based on the image information to generate a panoramic image.
  • the second model after interpolation processing is a 3D model with a smooth surface
  • the vehicle-mounted image processing device performs texture mapping on the second model with a smooth surface, thereby improving the rendering effect of the first model.
  • the objects around the vehicle include a first object and a second object; the distance between the vehicle and the first object is the first distance, and the distance between the vehicle and the second object is the second distance.
  • the position of the vehicle is mapped to the first position in the panoramic image
  • the position of the first object is mapped to the second position of the panoramic image
  • the position of the second object is mapped to the third position in the panoramic image.
  • the method further includes: the vehicle-mounted image processing device displays a panoramic image, and in the panoramic image, the distance between the first position and the second position is greater than the distance between the first position and the third position.
  • when the distance between the objects around the vehicle and the vehicle changes, the distance between the vehicle model and the coordinate points corresponding to the objects on the first model also changes, so that a panoramic image consistent with the actual environment around the vehicle is obtained, stitching ghosts and dislocations are eliminated, and the detection accuracy in the stitched area and the driver experience are improved.
  • an embodiment of the present application provides a vehicle-mounted surround view device, including: an acquisition module for acquiring image information and depth information of objects around the vehicle, where the depth information is used to indicate coordinate point information of each point on the objects around the vehicle;
  • the processing module is used for acquiring the initial model; adjusting the first coordinate point on the initial model to the second coordinate point according to the depth information to generate the first model; and acquiring the panoramic image based on the image information and the first model.
  • the processing module is further specifically configured to: convert the pixels in the image information into the first point cloud in the camera coordinate system according to the pixels in the image information and the depth information corresponding to the pixels; convert the first point cloud in the camera coordinate system into the second point cloud in the world coordinate system; and adjust the first coordinate point to the second coordinate point through the coordinate points in the second point cloud to generate the first model, where the second coordinate point is obtained from the coordinate points in the second point cloud.
  • the image information includes images collected by multiple image sensors
  • the processing module is further specifically configured to: splice the multiple second point clouds, in the world coordinate system, of the multiple images collected by the multiple image sensors to obtain the target point cloud; determine multiple third coordinate points within the neighborhood of the first coordinate point, where a third coordinate point is a coordinate point on the target point cloud; determine the second coordinate point according to the multiple third coordinate points; and adjust the first coordinate point to the second coordinate point.
  • the processing module is further specifically configured to: match the overlapping area of the first image and the second image collected by two adjacent image sensors to obtain a rotation matrix and a translation matrix for the transformation between the point cloud of the first image and the point cloud of the second image; transform the point cloud of the second image using the rotation matrix and the translation matrix; and splice the transformed point cloud of the second image with the point cloud of the first image.
  • the processing module is further specifically configured to: perform interpolation and smoothing processing on the first model to obtain a second model; and perform texture mapping on the second model according to image information to generate a panoramic image.
  • the objects around the vehicle include a first object and a second object.
  • when the distance between the vehicle and the first object is the first distance and the distance between the vehicle and the second object is the second distance, the position of the vehicle is mapped to the first position in the panoramic image, the position of the first object is mapped to the second position in the panoramic image, and the position of the second object is mapped to the third position in the panoramic image.
  • when the first distance is greater than the second distance, the device further includes a display module; the display module is further used to display the panoramic image, in which the distance between the first position and the second position is greater than the distance between the first position and the third position.
  • an embodiment of the present application provides an in-vehicle image processing device, including a processor, where the processor is coupled with a memory, and the memory is used to store programs or instructions; when the programs or instructions are executed by the processor, the in-vehicle image processing device is caused to perform the method described in the first aspect above.
  • an embodiment of the present application provides a vehicle-mounted surround view system, including a sensor, a vehicle-mounted display, and the vehicle-mounted image processing device described in the third aspect above, where the sensor and the vehicle-mounted display are both connected to the vehicle-mounted image processing device; the sensor is used to collect image information and depth information, and the vehicle-mounted display is used to display panoramic images.
  • an embodiment of the present application provides a vehicle, including the vehicle-mounted surround view system as described in the fourth aspect.
  • an embodiment of the present application provides a computer program product, where the computer program product includes computer program code, and when the computer program code is executed by a computer, the computer is enabled to implement the method described in any one of the above first aspects.
  • an embodiment of the present application provides a computer-readable storage medium for storing a computer program or instruction, and when the computer program or instruction is executed, the computer executes the method described in any one of the above-mentioned first aspect.
  • an embodiment of the present application provides a chip, including a processor and a communication interface, where the processor is configured to read an instruction to execute the method described in any one of the foregoing first aspects.
  • FIG. 1A is a three-dimensional schematic diagram of a 3D model;
  • FIG. 1B is a schematic side view of a 3D model;
  • FIG. 2A is a schematic diagram of ghosting in a panoramic image in a conventional method;
  • FIG. 2B is a schematic diagram of stitching dislocation in a panoramic image in a conventional method;
  • FIG. 3 is a schematic structural diagram of a vehicle-mounted surround view system in an embodiment of the application.
  • FIG. 4 is a schematic diagram of a world coordinate system, a camera coordinate system, an image coordinate system and a pixel coordinate system in an embodiment of the application;
  • FIG. 5 is a schematic flowchart of steps of a method for generating a panoramic image according to an embodiment of the present application
  • FIG. 6 is a schematic diagram of the visualization effect of converting an image with depth information into a point cloud in an embodiment of the present application
  • FIG. 7 is a schematic diagram of matching an overlapping area in a first image and an overlapping area in a second image in an embodiment of the present application;
  • FIG. 8 is a schematic diagram of splicing images collected by adjacent camera sensors in an embodiment of the present application.
  • FIG. 9A is a three-dimensional schematic diagram of a third coordinate point within the neighborhood range of a first coordinate point on an initial model in an embodiment of the present application;
  • FIG. 9B is a schematic top view of a third coordinate point within the neighborhood range of the first coordinate point on the initial model in an embodiment of the present application;
  • FIG. 9C and FIG. 9D are schematic top views of the first model obtained by adjusting the first coordinate point on the initial model in an embodiment of the present application;
  • FIG. 10 is a schematic diagram of a scene of a vehicle and surrounding objects in the real world and a vehicle and surrounding objects in a panoramic image according to an embodiment of the application;
  • FIG. 11 is a schematic diagram of performing interpolation processing on a first model in an embodiment of the present application.
  • FIG. 12 is a schematic structural diagram of an embodiment of an in-vehicle image processing apparatus in an embodiment of the application.
  • FIG. 13 is a schematic structural diagram of another embodiment of an in-vehicle image processing apparatus in an embodiment of the present application.
  • the vehicle-mounted surround view system can acquire images captured by multiple cameras installed on the vehicle body, and perform perspective transformation and image stitching on the acquired images to generate panoramic images.
  • the panoramic image is a 360-degree panoramic image around the vehicle, or the panoramic image is a 720-degree panoramic image.
  • cameras are installed around the car body; the image information of objects around the car body is collected by the cameras installed in the front, rear, left, and right directions of the car body, and the images collected by every two adjacent cameras are spliced and mapped onto a 3D model.
  • the 3D model is a simulated embodiment of the real objects around the vehicle; please refer to FIG. 1A and FIG. 1B.
  • in FIG. 1B, XOZ represents the plane coordinate system, and R represents the radius from the Z axis to the wall of the 3D model. Since the distance between the vehicle and surrounding objects changes while the vehicle is driving, but the size of the 3D model is fixed, when the actual distance between the vehicle and surrounding objects is less than R, the images collected by two adjacent cameras overlap, resulting in ghosting in the image splicing area (as shown in FIG. 2A); the same mismatch between the actual distance and R also leads to dislocation of the images in the stitched area (as shown in FIG. 2B).
  • a camera may also be installed on the vehicle body for capturing images of the vehicle bottom, for example, a camera installed on the chassis, or a camera installed around the vehicle whose angle of view can capture the area that the vehicle bottom will pass over.
  • an embodiment of the present application provides a method for generating a panoramic image, and the method is applied to a vehicle-mounted surround view system.
  • the vehicle surround view system includes a sensor 301 , a vehicle image processing device 302 and a vehicle display 303 , wherein the sensor 301 is connected to the vehicle image processing device 302 , and the vehicle image processing device 302 is connected to the vehicle display 303 .
  • the sensor 301 is used to collect image information and depth information of objects around the vehicle.
  • the in-vehicle image processing device 302 first obtains the depth information of objects around the vehicle and creates an initial model, then adjusts the position of the first coordinate point on the initial model according to the depth information to generate the first model; finally, the in-vehicle image processing device 302 performs texture mapping on the first model according to the image information to generate a panoramic image.
  • the vehicle-mounted image processing device 302 outputs the panoramic image to the vehicle-mounted display 303, and the vehicle-mounted display 303 is used for displaying the panoramic image.
  • the first coordinate point on the initial model is adjusted according to the depth information; the first coordinate point is adjusted to the second coordinate point to obtain the first model, so that the distance between the objects around the vehicle (corresponding to the first model) and the vehicle (corresponding to the vehicle model) is basically equal to the distance between the vehicle model and the first model. Since the first model is accurately reconstructed according to the depth information, the stitching ghosts and image dislocations in the panoramic image are eliminated, a 3D panoramic image consistent with the actual environment around the vehicle is obtained, and the detection accuracy in the stitched area and the driver experience are improved.
  • the depth information can be used to indicate the three-dimensional coordinate information of each point on the detected object.
  • Depth information is usually also called depth.
  • depth in the field of machine vision refers to the distance of each point in space relative to the camera sensor.
  • the depth information of objects around the vehicle refers to the distance between the three-dimensional coordinate point on the object around the vehicle and the camera sensor, that is, the distance between the three-dimensional coordinate point on the object around the vehicle and the position of the sensor on the vehicle body.
  • Figure 4 shows the world coordinate system (O_w-X_wY_wZ_w), the camera coordinate system (O_c-X_cY_cZ_c), the image coordinate system (o-xy), and the pixel coordinate system (u-v).
  • point P is a point in the world coordinate system, that is, a point in the real environment.
  • Point p is the imaging point of point P, the coordinates of point p in the image coordinate system are (x, y), and the coordinates in the pixel coordinate system are (u, v).
  • the origin (o) of the image coordinate system is located on the Z-axis of the camera coordinate system, and the distance between the origin (o) of the image coordinate system and the origin (O c ) of the camera coordinate system is f, where f is the camera focal length.
  • the world coordinate system also known as the measurement coordinate system, is a three-dimensional coordinate system.
  • the three-dimensional coordinate system can be a three-dimensional orthogonal coordinate system, a cylindrical coordinate system, a spherical coordinate system, and the like.
  • the world coordinate system may use a three-dimensional orthogonal coordinate system (X w , Y w , Z w ).
  • the spatial position of the camera, the vehicle, and objects around the vehicle can be described in the world coordinate system.
  • the position of the world coordinate system can be determined according to the actual situation.
  • in this example, the world coordinate system is selected to be centered on the vehicle (the vehicle position is located at the origin O_w of the world coordinate system), the Z_w axis is perpendicular to the ground, the X_w axis represents the direction of the vehicle, and the coordinate system changes with the movement of the vehicle.
  • the unit of the world coordinate system can be meters (m).
  • the camera coordinate system is a three-dimensional rectangular coordinate system (X c , Y c , Z c ).
  • the origin of the camera coordinate system is the optical center of the lens, the X c and Y c axes are respectively parallel to both sides of the image plane, and the Z c axis is the optical axis of the lens, which is perpendicular to the image plane.
  • the unit of the camera coordinate system can be meters (m).
  • the pixel coordinate system is in pixels, and its coordinate origin is at the upper left corner of the image.
  • the relationship between the image coordinate system and the pixel coordinate system may be: the origin of the image coordinate system is the midpoint of the pixel coordinate system.
  • the units of the image coordinate system may be millimeters (mm).
  • the transformation between the world coordinate system and the camera coordinate system is a rigid transformation, that is, only the spatial position (translation) and orientation (rotation) of the object are changed, but the shape of the object is not changed.
  • this transformation can be represented by rotation and translation. There is no rotation between the image coordinate system and the pixel coordinate system, but their coordinate origins are different.
  • homogeneous coordinates, which represent an n-dimensional vector with an (n+1)-dimensional vector, refer to a coordinate system used in projective geometry, just as Cartesian coordinates are used in Euclidean geometry.
  • the homogeneous coordinates of a two-dimensional point (x, y) are represented as (hx, hy, h).
  • the homogeneous representation of a vector is not unique, and different values of h in homogeneous coordinates represent the same point.
  • the homogeneous coordinates (8, 4, 2) and (4, 2, 1) represent the two-dimensional point (4, 2).
  • the purpose of introducing homogeneous coordinates is mainly to combine multiplication and addition in matrix operations.
  • Homogeneous coordinates provide a method for transforming a set of points in two-, three-, and even higher-dimensional spaces from one coordinate system to another using matrix operations.
  • the purpose of introducing homogeneous coordinates is to facilitate computer graphics to perform affine geometric transformations. It can be understood that the introduction of homogeneous coordinates can use a matrix to describe rotation and translation at the same time, and use matrix multiplication to express the rotation and translation of an object.
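  • as a brief illustration of this point (the helper function below is an assumption for demonstration, not part of the original disclosure), the following Python sketch combines a rotation and a translation into a single homogeneous matrix multiplication:

```python
import numpy as np

# Minimal sketch: rotation and translation combined into one 4x4
# homogeneous transform, as described in the passage above.

def make_homogeneous_transform(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Build T such that T @ [x, y, z, 1] == R @ [x, y, z] + t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# 90-degree rotation about the Z axis plus a translation of (1, 2, 0).
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([1.0, 2.0, 0.0])
T = make_homogeneous_transform(R, t)

p = np.array([4.0, 2.0, 0.0, 1.0])  # point (4, 2, 0) in homogeneous form
print((T @ p)[:3])                  # rotation and translation at once: [-1. 6. 0.]
```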
  • the internal parameters of the camera may be parameters related to the characteristics of the camera itself, such as the focal length of the camera, the pixel size, and the like.
  • the external parameters of the camera can be parameters in the world coordinate system, such as the position of the camera, the direction of rotation, etc.
  • a point cloud can be a collection of massive points that express the spatial distribution of the target and the characteristics of the target surface under the same spatial reference system; after the spatial coordinates of each sampling point on the surface of the object are obtained, the resulting collection of points is called a "point cloud".
  • Texture can be pixel feature information on a two-dimensional image.
  • when a texture is mapped to the surface of an object in a specific way, it is also called a texture map.
  • an embodiment of the present application provides a method for generating a panoramic image.
  • the execution body of the method is the vehicle-mounted surround view system in FIG. 3, or the vehicle-mounted image processing device in FIG. 3, or a processor or chip in the vehicle-mounted image processing device.
  • the execution body of the method is described by taking the vehicle surround view system as an example.
  • Step 501 The vehicle-mounted surround view system acquires image information and depth information of objects around the vehicle.
  • the camera is a wide-angle camera (such as a fisheye camera), and at least one fisheye camera is arranged in each of the four directions of the front, rear, left, and right directions of the vehicle.
  • the fisheye camera is used to collect the image information around the vehicle in real time. Because the fisheye camera uses a special ultra-wide-angle lens whose structure imitates a fish's eye, it can independently achieve large-angle shooting; the angle of view of a fisheye camera can even reach 180°, enabling it to monitor objects over a wide range of scenes.
  • a fisheye camera is set in each direction around the vehicle, and four fisheye cameras can collect panoramic images around the vehicle, thereby saving the number of cameras and reducing costs.
  • the camera is not limited to a wide-angle camera, and the camera can also be an ordinary camera.
  • the viewing angle of the camera is increased by increasing the number of ordinary cameras. For example, two or three cameras are set in each direction of the vehicle circumference.
  • when ordinary cameras are used to collect panoramic images around the vehicle, although the number of cameras increases, the image information collected by ordinary cameras is not deformed, and the collected images have a better effect.
  • in the following, the camera is taken to be a fisheye camera as an example, and one fisheye camera is provided in each of the front, rear, left, and right directions of the vehicle, that is, four fisheye cameras in total.
  • the in-vehicle surround view system obtains the depth information of objects around the vehicle, including the following two implementations.
  • the sensor 301 is an image sensor in a camera.
  • the image sensor can collect the image information of the objects around the vehicle, and transmit the image information to the vehicle-mounted image processing device, and the vehicle-mounted image processing device can obtain the depth information of the objects around the vehicle according to the image information.
  • the vehicle-mounted image processing device performs depth estimation on the image information collected by the fisheye camera to obtain depth information
  • the methods for obtaining the depth information by the vehicle-mounted image processing device include but are not limited to methods such as monocular depth estimation and binocular depth estimation.
  • the monocular depth estimation method may use the image data of a single view as the input of the trained depth model, and use the depth model to output the depth corresponding to each pixel in the image. That is, the monocular estimation method based on deep learning reflects the depth relationship according to the pixel value relationship, and maps the image information into a depth map.
  • the method of binocular depth estimation is to use a binocular camera to shoot left and right viewpoint images of the same scene, and use a stereo matching algorithm to obtain a disparity map, and then obtain a depth map.
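  • for reference, binocular depth estimation relies on the standard stereo-geometry relation (general background, not a formula from the original text):

$$Z = \frac{f \cdot B}{d}$$

  where Z is the recovered depth, f is the focal length, B is the baseline between the two cameras, and d is the disparity of a pixel obtained by stereo matching; a larger disparity corresponds to a smaller depth.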
  • a depth map, also known as a range image, refers to an image in which the distance (depth) from the image sensor (or camera) to each point in the scene is used as the pixel value; in a typical visualization, colder colors indicate larger depth values.
  • in this way, the image sensor alone can be used to obtain both the scene image around the vehicle and the depth information of the objects around the vehicle, without adding other components for obtaining depth information, thus saving costs.
  • the sensor 301 further includes a depth sensor, and the depth sensor is used to collect depth information of objects around the vehicle.
  • Depth sensors include, but are not limited to, millimeter-wave radar and lidar.
  • the method of obtaining the depth information of objects around the vehicle through lidar is as follows: the lidar emits laser light into space at certain time intervals and records, for each scanning point, the interval between the moment the signal leaves the radar and the moment it returns to the lidar after being reflected by an object in the scene under test; the distance between the surface of the object and the radar is then calculated from this interval, that is, the depth information of the objects around the vehicle is obtained.
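  • the distance computation follows the standard time-of-flight relation (general background, not a formula from the original text):

$$d = \frac{c \cdot \Delta t}{2}$$

  where d is the distance from the radar to the object surface, c is the speed of light, and Δt is the measured round-trip interval; the factor of 2 accounts for the signal traveling to the object and back.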
  • the method of obtaining depth information through millimeter-wave radar is that the millimeter-wave radar sends out a high-frequency continuous signal, the signal is reflected after encountering objects around the vehicle, and the receiver receives the reflected signal reflected by the object.
  • the above two methods for acquiring depth information are only exemplary descriptions, and do not limit specific methods for acquiring depth information.
  • the depth sensor collects the depth information of objects around the vehicle and transmits the depth information to the vehicle-mounted image processing device, which saves the step of depth estimation from the image information and thus saves the computing power of the vehicle-mounted image processing device.
  • the method for acquiring depth information in the above-mentioned first implementation manner is used as an example for description.
  • Step 502 the vehicle-mounted surround view system acquires an initial model.
  • the vehicle-mounted image processing device generates an initial model
  • the initial model is a bowl-shaped 3D model, or the initial model is a cylindrical 3D model, and the like.
  • the initial model can be a smooth solid model, or the initial model can be a scatter model.
  • step 502 may be executed before step 501, or step 502 may be executed after step 501, and steps 502 and 501 may also be executed synchronously.
  • Step 503 The vehicle-mounted surround view system adjusts the first coordinate point on the initial model to the second coordinate point according to the depth information to obtain the first model.
  • FIG. 6 is a schematic diagram of a point cloud.
  • the vehicle-mounted image processing device converts the pixel coordinates in the image information into the first point cloud coordinates in the camera coordinate system according to the pixel coordinates in the image information and the depth information corresponding to the pixel coordinates.
  • the image information of the scene surrounding the vehicle obtained in the above step 501 includes a plurality of pixel points; for example, the plurality of pixel points includes a first pixel point (u_i, v_i).
  • the depth information of the scene around the vehicle obtained in the above step 501 includes the depth corresponding to each of the plurality of pixel points.
  • the vehicle-mounted image processing device converts the point in the pixel coordinate system into point cloud coordinates in the camera coordinate system according to the first pixel point and the depth information corresponding to the first pixel point; the point cloud coordinates (x_i, y_i, z_i) in the camera coordinate system are represented by the following formula (1):

$$x_i = \frac{(u_i - u_0) \cdot z_i}{f_x}, \qquad y_i = \frac{(v_i - v_0) \cdot z_i}{f_y} \quad (1)$$

  • z_i is the depth corresponding to the first pixel point (u_i, v_i);
  • u_0, v_0 represent the coordinates of the optical center in the image coordinate system;
  • f_x represents the focal length in the horizontal direction;
  • f_y represents the focal length in the vertical direction.
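  • a minimal sketch of formula (1) in code (illustrative only; the function name and the dense depth-image input are assumptions):

```python
import numpy as np

def depth_to_camera_points(depth: np.ndarray, fx: float, fy: float,
                           u0: float, v0: float) -> np.ndarray:
    """depth: HxW array of per-pixel depth z_i; returns Nx3 camera-frame points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates (u_i, v_i)
    z = depth
    x = (u - u0) * z / fx          # formula (1): x_i = (u_i - u_0) * z_i / f_x
    y = (v - v0) * z / fy          # formula (1): y_i = (v_i - v_0) * z_i / f_y
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # keep pixels with valid (positive) depth
```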
  • the vehicle-mounted image processing device converts the first point cloud coordinates (x_i, y_i, z_i) in the camera coordinate system into the second point cloud coordinates (X_i, Y_i, Z_i) in the world coordinate system; the first point cloud coordinates are the point cloud coordinates in the camera coordinate system, and the second point cloud coordinates are the point cloud coordinates in the world coordinate system.
  • the conversion of the first point cloud coordinates into the second point cloud coordinates is given by the following formulas (2) and (3):

$$\begin{bmatrix} X_i \\ Y_i \\ Z_i \end{bmatrix} = r_j^{-1} \left( \begin{bmatrix} x_i \\ y_i \\ z_i \end{bmatrix} - t_j \right) \quad (2)$$

  • formula (2) can be expressed as the following formula (3) by homogeneous coordinates:

$$\begin{bmatrix} X_i \\ Y_i \\ Z_i \\ 1 \end{bmatrix} = \begin{bmatrix} r_j & t_j \\ 0 & 1 \end{bmatrix}^{-1} \begin{bmatrix} x_i \\ y_i \\ z_i \\ 1 \end{bmatrix} \quad (3)$$
  • r j represents the rotation matrix of the coordinate system transformation corresponding to the jth camera, for example, the value of j is 1, 2, 3, 4, such as the first camera is the front image sensor, the second camera is the left image sensor, the third camera is the rear image sensor, and the fourth camera is the right image sensor;
  • t j represents the translation matrix of the coordinate system transformation corresponding to the jth camera;
  • (*)^{-1} denotes matrix inversion;
  • (X i , Y i , Z i ) are the coordinates of the three-dimensional coordinate point corresponding to the pixel point (u i , v i ) in the world coordinate system.
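  • a minimal sketch of formulas (2) and (3) in code (illustrative; assumes r_j and t_j map world coordinates to camera-j coordinates, matching the inversion in the formulas):

```python
import numpy as np

def camera_to_world(points_cam: np.ndarray, r_j: np.ndarray,
                    t_j: np.ndarray) -> np.ndarray:
    """points_cam: Nx3 camera-frame points; r_j, t_j: extrinsics of camera j.
    Returns Nx3 world-frame points per formula (2): P_w = r_j^{-1} (P_c - t_j)."""
    return (points_cam - t_j) @ np.linalg.inv(r_j).T
```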
  • the vehicle-mounted image processing device splices (or connects) the four point clouds, in the world coordinate system, of the image information collected by the four image sensors to obtain a complete whole panoramic point cloud (that is, the target point cloud).
  • the vehicle-mounted image processing device matches the point clouds of the overlapping regions of the images (the first image and the second image) collected by two adjacent image sensors, so as to obtain a rotation matrix (denoted by "R") and a translation matrix (denoted by "T") for the transformation between the first image and the second image.
  • when the first image sensor and the second image sensor are adjacent image sensors (such as the front camera and the left camera, the left camera and the rear camera, the rear camera and the right camera, or the right camera and the front camera), they will capture images with overlapping areas.
  • the first image sensor is used for capturing the first image
  • the second image sensor is used for capturing the second image.
  • the point cloud data of the overlapping area in the first image is marked as "P";
  • the point cloud data of the overlapping area in the second image is marked as "Q".
  • a set of rotation matrices R and translation matrices T is found using the objective function of the following formula (4):

$$(R_h, T_h) = \arg\min_{R_h, T_h} \sum_i \left\| f(q_i) - \left( R_h \cdot f(p_i) + T_h \right) \right\|^2 \quad (4)$$

  • it should be understood that, due to the different poses of the two cameras, the images collected by the two image sensors will have slight differences in angle and orientation; by matching the overlapping area (the same scene) of the images collected by the two image sensors, this difference can be found, thereby eliminating calculation errors.
  • f(q_i) represents the three-dimensional coordinates of the i-th point in the point cloud data Q;
  • f(p_i) represents the three-dimensional coordinates of the i-th point in the point cloud data P;
  • R_h represents the rotation matrix used to transform the point cloud data P;
  • T_h represents the translation matrix used to transform the point cloud data P;
  • h ranges from 1 to j-1, where j is the number of image sensors.
  • the number of image sensors is 4, and the value of h is 1, 2, 3.
  • Four image sensors are adjacent to each other, and a total of 3 sets of R and T need to be found.
  • the 3 groups of R and T are "R_1 and T_1", "R_2 and T_2", and "R_3 and T_3", respectively.
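  • as a non-authoritative sketch of how one set of R and T minimizing formula (4) can be computed once the overlapping-area correspondences are known, the following code uses the standard SVD-based (Kabsch) closed-form solution; when correspondences are unknown, iterative closest point (ICP) alternates a correspondence search with this step:

```python
import numpy as np

def solve_rigid_transform(P: np.ndarray, Q: np.ndarray):
    """Find (R, T) minimizing sum_i ||Q[i] - (R @ P[i] + T)||^2, as in
    formula (4), for corresponding Nx3 point sets P and Q (Kabsch algorithm)."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)        # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T                       # optimal rotation
    T = q_mean - R @ p_mean                  # optimal translation
    return R, T
```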
  • for ease of description, the point cloud of an original image is called "point cloud A", and the point cloud obtained after rotation and translation transformation is called "point cloud B".
  • the image collected by the front image sensor is called "image A";
  • the image collected by the left image sensor is called "image B";
  • the image collected by the rear image sensor is called "image C";
  • the image collected by the right image sensor is called "image D".
  • the front image sensor collects image A
  • the left image sensor collects image B
  • the overlapping area of image A and image B is the image of the object in front of the left of the vehicle.
  • f(q_i) represents the point cloud data of the overlapping area in point cloud A of image A;
  • f(p_i) represents the point cloud data of the overlapping area in point cloud A of image B.
  • the vehicle-mounted image processing device needs to match the overlapping area in image A with the overlapping area in image B. The purpose of this matching is to find a set of R_1 and T_1, so that the point cloud of image A or the point cloud of image B can be transformed through R_1 and T_1, and the point clouds of the images collected by the two adjacent image sensors can then be stitched.
  • after obtaining R_h and T_h by the above formula (4), the vehicle-mounted image processing device splices the point clouds of the images collected by every two adjacent image sensors to obtain a complete whole panoramic point cloud.
  • the vehicle-mounted image processing apparatus uses the above formula (4) to match the overlapping area of point cloud A of image A with the overlapping area of point cloud A of image B, and finds a set of R_1 and T_1.
  • the vehicle-mounted image processing device may first fix point cloud A of image A collected by the front image sensor; the whole point cloud data of point cloud A of image B collected by the left image sensor is rotated and translated in the world coordinate system to obtain point cloud B of image B, and point cloud B of image B is spliced with point cloud A of image A.
  • the rear image sensor collects image C; the vehicle-mounted image processing device finds a set of R_2 and T_2 through the above formula (4) according to the overlapping area in point cloud B of image B and the overlapping area in point cloud A of image C.
  • in this case, f(q_i) represents the point cloud data of the overlapping area in point cloud B of image B;
  • f(p_i) represents the point cloud data of the overlapping area in point cloud A of image C.
  • the vehicle-mounted image processing device uses the above formula (4) to match the overlapping area of point cloud B of image B with the overlapping area of point cloud A of image C, and finds a set of R_2 and T_2.
  • the vehicle-mounted image processing device fixes point cloud B of image B, and uses R_2 and T_2 to perform rotation and translation transformation on point cloud A of image C, so that the whole point cloud data of image C collected by the rear image sensor is transformed in the world coordinate system to obtain point cloud B of image C.
  • the vehicle-mounted image processing device splices the point cloud B of the image C and the point cloud B of the image B.
  • the right image sensor collects image D, and the vehicle-mounted image processing device finds a set of R_3 and T_3 through the above formula (4) according to the overlapping area in point cloud B of image C and the overlapping area in point cloud A of image D.
  • in this case, f(q_i) represents the point cloud data of the overlapping area in point cloud B of image C;
  • f(p_i) represents the point cloud data of the overlapping area in point cloud A of image D.
  • the in-vehicle image processing apparatus obtains R_3 and T_3 using the above formula (4); then, the vehicle-mounted image processing device fixes point cloud B of image C, and uses R_3 and T_3 to rotate and translate point cloud A of image D, so that the entire point cloud data of image D collected by the right camera is rotated and translated in the world coordinate system to obtain point cloud B of image D.
  • the vehicle-mounted image processing device splices the point cloud B of the image D and the point cloud B of the image C.
  • the vehicle-mounted image processing device then splices the point cloud B of the image D and the point cloud A of the image A to obtain the target point cloud.
  • the target point cloud is the entire point cloud obtained after the vehicle-mounted image processing device splices point cloud A of image A, point cloud B of image B, point cloud B of image C, and point cloud B of image D.
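  • the sequential fixing-and-transforming procedure above can be summarized in code (a sketch with assumed inputs, reusing solve_rigid_transform from the earlier sketch):

```python
import numpy as np

# Minimal sketch (assumed inputs): chaining the pairwise (R_h, T_h) transforms
# to splice the four world-frame camera point clouds A/B/C/D, mirroring the
# fix-then-transform procedure described above. overlaps[h] holds the
# corresponding overlap points (P, Q) between cloud h+1 and the previously
# transformed cloud, as in formula (4).

def splice_clouds(clouds, overlaps, solve_rigid_transform):
    """clouds: list of Nx3 point clouds [A, B, C, D] in the world frame."""
    spliced = [clouds[0]]                     # fix point cloud A of image A
    for h, cloud in enumerate(clouds[1:]):
        P, Q = overlaps[h]                    # overlap of this cloud vs. previous
        R, T = solve_rigid_transform(P, Q)    # R_h, T_h minimizing formula (4)
        spliced.append(cloud @ R.T + T)       # rotate and translate whole cloud
    return np.vstack(spliced)                 # the target point cloud
```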
  • by matching the overlapping area (the same scene) of the images collected by two adjacent image sensors, the difference in angle and orientation of the images collected by the two image sensors can be found and balanced by rotation and translation; therefore, a complete point cloud can be obtained by stitching the point clouds of the images collected by the multiple image sensors, and the 3D model can be reconstructed by adjusting the first coordinate point on the initial model through the points on the target point cloud.
  • the vehicle-mounted surround view system adjusts the first coordinate point on the initial model, adjusts the first coordinate point to the second coordinate point, and generates a first model.
  • the second coordinate point is obtained from a plurality of third coordinate points within the neighborhood of the first coordinate point
  • the third coordinate point is a coordinate point on the target point cloud.
  • for example, the Z axis is perpendicular to the ground; then, for a first coordinate point (X_a, Y_a, Z_a) on the initial model in the world coordinate system, Z_a represents the height from the ground. The vehicle-mounted image processing device determines, among the large number of scattered points in the target point cloud, a plurality of third coordinate points within the neighborhood of the first coordinate point; for example, a third coordinate point is a three-dimensional coordinate point with height Z_a, and the set of three-dimensional coordinate points with height Z_a is M.
  • for example, M includes three-dimensional coordinate points (X_1, Y_1, Z_1), (X_2, Y_2, Z_2), and (X_3, Y_3, Z_3), where the values of Z_1, Z_2, and Z_3 are all equal to Z_a.
  • the vehicle-mounted image processing device adjusts the values of X_a and Y_a based on the X values (such as X_1, X_2, and X_3) and Y values (such as Y_1, Y_2, and Y_3) of each three-dimensional point in the set M.
  • each coordinate point in M can represent the actual distance between the vehicle and the scene around the vehicle.
  • the three-dimensional coordinate points in the set M are coordinate points within the neighborhood range of the first coordinate point (X_a, Y_a, Z_a); as can be understood from FIG. 9B, the "neighborhood range" of the first coordinate point refers to the intersection of the "first range" and the "second range".
  • the "first range" and the "second range" are exemplified as follows.
  • the "first range" can be understood as the lateral range, that is, the range between two rays that pass through the center point of the initial model (taking the center point as the center of a circle) and are separated by an angle θ, where θ is an angle less than or equal to 10° whose size can be set according to actual needs.
  • the "second range" can be understood as the longitudinal range: with the center point of the initial model as the center of the circle and the distance between the first coordinate point and the center being R1, the second range is the annular region between the circle of radius R2 and the circle of radius R3.
  • the vehicle-mounted image processing device determines the second coordinate point according to the plurality of third coordinate points.
  • the vehicle-mounted image processing apparatus adjusts X a according to the X value of each three-dimensional coordinate point in the set M, and adjusts Y a according to the Y value of each three-dimensional coordinate point in the set M.
  • the point to be adjusted is the first coordinate point (X_a, Y_a, Z_a), and the adjusted second coordinate point is (X_a′, Y_a′, Z_a); the adjusted X_a′ and Y_a′ are represented by the following formulas (5) and (6):

$$X_a' = \frac{1}{n} \sum_{X_b \in \delta(X_a)} X_b \quad (5)$$

$$Y_a' = \frac{1}{n} \sum_{Y_b \in \delta(Y_a)} Y_b \quad (6)$$
  • X_a′ is the adjusted X value;
  • X_b belongs to the points in the neighborhood of X_a;
  • n is the number of points in the neighborhood;
  • δ(*) represents the neighborhood;
  • δ(X_a) represents the neighborhood of X_a, that is, the set of X values of the three-dimensional coordinate points in the set M; for example, δ(X_a) includes X_1, X_2, and X_3.
  • Y_a′ is the adjusted Y value;
  • Y_b belongs to the points in the neighborhood of Y_a;
  • n is the number of points in the neighborhood;
  • δ(Y_a) represents the neighborhood of Y_a, that is, the set of Y values of the three-dimensional coordinate points in the set M; for example, δ(Y_a) includes Y_1, Y_2, and Y_3.
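  • a minimal Python sketch of the adjustment in formulas (5) and (6), assuming the neighborhood points have already been selected as described above:

```python
import numpy as np

def adjust_model_point(first_point: np.ndarray,
                       neighborhood: np.ndarray) -> np.ndarray:
    """first_point: (X_a, Y_a, Z_a); neighborhood: nx3 target-point-cloud
    points within the neighborhood range (all with height Z_a).
    Returns the second coordinate point per formulas (5) and (6)."""
    x_adj = neighborhood[:, 0].mean()   # formula (5): mean of neighborhood X
    y_adj = neighborhood[:, 1].mean()   # formula (6): mean of neighborhood Y
    return np.array([x_adj, y_adj, first_point[2]])  # Z_a unchanged
```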
  • for example, the three-dimensional coordinate point on the initial model is (X_a, Y_a, Z_a), and the position of this three-dimensional coordinate point is adjusted; that is, (X_a, Y_a, Z_a) is adjusted to (X_a′, Y_a′, Z_a) to obtain the first model.
  • only one three-dimensional point (X a , Y a , Z a ) on the initial model is adjusted as an example for description, and the adjustment of other first coordinate points can be obtained by the same method as above.
  • the number of the first coordinate points is not limited, the position of the first coordinate point is not limited, and the position of the adjusted second coordinate point is not limited.
  • some of the adjusted second coordinate points may be located "outside" the initial model, and some of the adjusted second coordinate points (X_a′, Y_a′, Z_a) may be located "inside" the initial model.
  • the vehicle-mounted image processing device adjusts the position of the first coordinate point in real time according to the depth information of the objects around the vehicle.
  • because the distance between the vehicle and the objects around it changes, the shape of the initial model changes accordingly; that is, the first model is a model derived from the actual distance between the objects around the vehicle and the vehicle.
  • the shape of the first model may be an irregular shape, and the shape of the first model varies with the distance of the vehicle from objects around the vehicle.
  • the target point cloud is a point cloud obtained based on the actual distance between the objects around the vehicle and the vehicle; the vehicle-mounted image processing device determines, among the large number of scattered points in the target point cloud, multiple third coordinate points within the neighborhood range of the first coordinate point on the initial model, then determines the second coordinate point according to the multiple third coordinate points, and adjusts the points on the initial model to the second coordinate points, so that the first model is accurately reconstructed according to the actual distance.
  • Step 504 The vehicle-mounted surround view system acquires a panoramic image based on the image information and the first model.
  • the vehicle-mounted surround view system performs texture mapping on the first model according to the image information to generate a 3D panoramic image (or called a "panoramic image").
  • in step 501, the four image sensors collect images in the four directions around the vehicle and transmit the images in the four directions to the vehicle-mounted image processing device, which may map them onto the model by texture mapping.
  • the in-vehicle image processing device obtains the internal and external parameters of the cameras through pre-calibration, performs external- and internal-parameter mapping on the three-dimensional coordinates of the optimized first model to obtain two-dimensional pixel coordinates, and then obtains the corresponding texture pixels from the image collected by the fisheye camera (also called the texture image); the coordinate points in the image are made to correspond to the surface of the first model, that is, each pixel on the texture image corresponds to a point on the first model for rendering and coloring, so that the entire texture image covers the first model, thereby obtaining a 3D panoramic image.
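  • an illustrative sketch of the mapping from a model vertex to a texture pixel (a plain pinhole projection; the real mapping would additionally apply the fisheye distortion model obtained from calibration):

```python
import numpy as np

def project_vertex_to_pixel(vertex_w: np.ndarray, r_j: np.ndarray,
                            t_j: np.ndarray, fx: float, fy: float,
                            u0: float, v0: float) -> tuple:
    """vertex_w: 3D model vertex in world coordinates; r_j, t_j: extrinsics of
    camera j (world -> camera). Returns the (u, v) texture coordinate."""
    x, y, z = r_j @ vertex_w + t_j          # external-parameter mapping
    u = fx * x / z + u0                     # internal-parameter mapping
    v = fy * y / z + v0
    return u, v                             # sample the texture image here
```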
  • Step 505 the vehicle-mounted surround view system outputs a panoramic image.
  • the vehicle-mounted image processing device outputs the 3D panoramic image to the vehicle-mounted display, and the vehicle-mounted display displays the 3D panoramic image.
  • the vehicle-mounted image processing device introduces depth information when creating a 3D model, and the vehicle-mounted image processing device adjusts the coordinate points on the initial model according to the depth information to obtain the first model.
  • the in-vehicle image processing device generates a virtual first model corresponding to the objects around the vehicle in the real world, and generates a virtual vehicle model based on the corresponding vehicle in the real world.
  • the objects around the vehicle include a first object and a second object, the distance between the vehicle and the first object is the first distance, and the distance between the vehicle and the second object is the second distance.
  • the position of the vehicle is mapped to the first position in the panoramic image
  • the position of the first object is mapped to the second position of the panoramic image
  • the position of the second object is mapped to the third position in the panoramic image.
  • the first distance is greater than the second distance
  • in this case, the distance between the first position and the second position is greater than the distance between the first position and the third position. That is, when the distance between the objects around the vehicle and the vehicle changes, the distance between the vehicle model and the coordinate points corresponding to the objects on the first model also changes, so that a 3D panoramic image consistent with the actual environment around the vehicle can be obtained, stitching ghosts and dislocations are eliminated, and the detection accuracy of the stitched area and the driver experience are improved.
  • optionally, after step 503 and before step 504, the method further includes the following step: the vehicle-mounted image processing device performs interpolation and smoothing processing on the scattered points on the first model to obtain a second model, where the second model is the smoothed model; correspondingly, in step 504, the vehicle-mounted image processing device performs texture mapping on the second model according to the image information to generate a 3D panoramic image.
  • the first model is composed of a large number of scatter points.
  • if the on-board image processing device directly connects the scattered points, the obtained model may be bumpy, and texture mapping such a bumpy 3D model results in a poor visual effect for the generated panoramic image. Therefore, the vehicle-mounted image processing device performs interpolation processing on the first model; please refer to FIG. 11, which is a schematic diagram of the interpolation processing. A continuous function is interpolated on the basis of the 3D scatter model by the interpolation method, so that the overall surface of the 3D model passes through the scatter points on the 3D model.
  • the second model after interpolation processing is a 3D model with a smooth surface
  • the vehicle-mounted image processing device performs texture mapping on the second model with a smooth surface, thereby improving the rendering effect of the 3D model.
  • the interpolation in this embodiment of the present application may adopt methods such as spline interpolation, bicubic interpolation, or discrete smooth interpolation.
  • any interpolation method that makes the surface of the second model smooth may be used; the specific interpolation method is not limited.
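As a rough sketch of such smoothing (the embodiment leaves the interpolation method open, so the thin-plate-spline kernel, the smoothing factor, and the local height-field assumption below are all illustrative choices):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def smooth_scatter_model(points, grid_res=64):
    """Fit a smooth surface through scattered first-model points.

    points: (N, 3) array; this sketch assumes the model can locally be
    written as a height field z = f(x, y), a simplification of the
    bowl-shaped geometry used in the embodiments.
    """
    xy, z = points[:, :2], points[:, 2]
    rbf = RBFInterpolator(xy, z, kernel="thin_plate_spline", smoothing=1e-3)
    # Evaluate on a regular grid to obtain a dense, smooth second model.
    gx, gy = np.meshgrid(
        np.linspace(xy[:, 0].min(), xy[:, 0].max(), grid_res),
        np.linspace(xy[:, 1].min(), xy[:, 1].max(), grid_res),
    )
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    return np.column_stack([grid, rbf(grid)])
```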
  • an embodiment of the present application provides an in-vehicle image processing apparatus, and the in-vehicle image processing apparatus is configured to execute the method performed by the in-vehicle image processing apparatus in the above method embodiments.
  • the in-vehicle image processing apparatus 1200 includes an acquisition module 1201 and a processing module 1202, and optionally, the in-vehicle image processing apparatus further includes a display module 1203.
  • an acquisition module 1201 configured to acquire image information and depth information of objects around the vehicle, where the depth information is used to indicate coordinate point information of each point on the objects around the vehicle;
  • the processing module 1202 is used to obtain an initial model; adjust the first coordinate point on the initial model to a second coordinate point according to the depth information to generate a first model; and acquire a panoramic image based on the image information and the first model.
  • the processing module 1202 is a processor, and the processor is a general-purpose processor or a special-purpose processor or the like.
  • the processor includes a transceiver unit for implementing receiving and transmitting functions.
  • the transceiver unit is a transceiver circuit, or an interface, or an interface circuit.
  • the transceiver circuits, interfaces, or interface circuits for implementing the receiving and transmitting functions may be deployed separately or, optionally, integrated together.
  • the above-mentioned transceiver circuit, interface, or interface circuit may be used for reading and writing code or data, or for signal transmission or transfer.
  • the acquisition module 1201 can be replaced by a transceiver module, optionally, the transceiver module is a communication interface.
  • the communication interface is an input-output interface or a transceiver circuit.
  • the input and output interface includes an input interface and an output interface.
  • the transceiver circuit includes an input interface circuit and an output interface circuit.
  • the transceiver module is configured to receive image information and depth information of objects around the vehicle from the sensor.
  • the acquisition module 1201 can also be replaced by the processing module 1202 .
  • the obtaining module 1201 is configured to perform step 501 in the embodiment corresponding to FIG. 5 .
  • the processing module 1202 is configured to execute step 502 , step 503 , step 504 and step 505 in the embodiment corresponding to FIG. 5 .
  • the display module 1203 is configured to perform step 505 in the embodiment corresponding to FIG. 5
  • the display module 1203 is configured to display a panoramic image.
  • the processing module 1202 is further specifically configured to: convert the pixels in the image information into a first point cloud in the camera coordinate system according to the pixels in the image information and the depth information corresponding to the pixels; convert the first point cloud in the camera coordinate system into a second point cloud in the world coordinate system; and adjust the first coordinate point to a second coordinate point by using the coordinate points in the second point cloud to generate the first model, where the second coordinate point is obtained from the coordinate points in the second point cloud.
  • the image information includes images collected by multiple image sensors, and the processing module 1202 is further specifically configured to: splice the multiple second point clouds, in the world coordinate system, of the multiple images collected by the multiple image sensors to obtain a target point cloud; determine multiple third coordinate points within the neighborhood of the first coordinate point, where each third coordinate point is a coordinate point on the target point cloud; determine the second coordinate point according to the multiple third coordinate points; and adjust the first coordinate point to the second coordinate point (one possible reading of this adjustment is sketched below).
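One plausible reading of this neighborhood adjustment uses a fixed-radius query to gather the third coordinate points and takes their centroid as the second coordinate point; the radius value and the use of the centroid are assumptions, since the aggregation rule is not fixed here.

```python
import numpy as np
from scipy.spatial import cKDTree

def adjust_model_points(model_points, target_cloud, radius=0.5):
    """Pull each first coordinate point toward the target point cloud.

    For every model vertex, collect the target-cloud points within
    `radius` (the "third coordinate points"); if any are found, replace
    the vertex by their centroid (one way to form the "second
    coordinate point").
    """
    tree = cKDTree(target_cloud)
    adjusted = model_points.copy()
    for i, p in enumerate(model_points):
        idx = tree.query_ball_point(p, r=radius)
        if idx:  # leave the vertex unchanged if the neighborhood is empty
            adjusted[i] = target_cloud[idx].mean(axis=0)
    return adjusted
```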
  • the processing module 1202 is further specifically configured to: match the overlapping area of the first image and the second image collected by two adjacent image sensors to obtain a rotation matrix and a translation matrix for transforming between the point cloud of the first image and the point cloud of the second image; transform the point cloud of the second image by using the rotation matrix and the translation matrix; and splice the transformed point cloud of the second image with the point cloud of the first image (a standard estimation option is sketched below).
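How the rotation and translation matrices are estimated from the matched overlap is not specified here; given matched point pairs, one standard option is the SVD-based (Kabsch) solution sketched below, offered purely as an illustration.

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate R, t so that R @ src[i] + t best aligns src to dst.

    src, dst: (N, 3) arrays of matched points from the overlapping area.
    Classic Kabsch/Procrustes solution via SVD.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Splicing: transform the second image's cloud, then concatenate:
#   cloud2_aligned = cloud2 @ R.T + t
#   target_cloud = np.vstack([cloud1, cloud2_aligned])
```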
  • the processing module 1202 is further specifically configured to: perform interpolation and smoothing processing on the first model to obtain a second model; and perform texture mapping on the second model according to the image information to generate a panoramic image.
  • the objects around the vehicle include a first object and a second object; when the distance between the vehicle and the first object is a first distance and the distance between the vehicle and the second object is a second distance, the position of the vehicle is mapped to a first position in the panoramic image, the position of the first object is mapped to a second position in the panoramic image, and the position of the second object is mapped to a third position in the panoramic image; when the first distance is greater than the second distance, the distance between the first position and the second position is greater than the distance between the first position and the third position.
  • an in-vehicle image processing apparatus in an embodiment of the present application is used to perform steps 501 to 505 in the method embodiment corresponding to FIG. 5 .
  • the in-vehicle image processing apparatus may include one or more processors 1301, and the processors 1301 may also be referred to as processing units, which may implement certain control functions.
  • the processor 1301 may be a general-purpose processor or a special-purpose processor, for example, the processor 1301 is a graphics processor (graphics processing unit, GPU).
  • a central processing unit can be used to control the vehicle-mounted image processing device, execute software programs, and process the data of the software programs.
  • the processor 1301 may also store instructions 1303, and the instructions 1303 may be executed by the processor, so that the in-vehicle image processing apparatus 1300 executes the methods described in the above method embodiments.
  • the processor 1301 may include a transceiver unit for implementing receiving and transmitting functions.
  • the transceiver unit may be a transceiver circuit, or an interface, or an interface circuit.
  • Transceiver circuits, interfaces or interface circuits used to implement receiving and transmitting functions may be separate or integrated.
  • the above-mentioned transceiver circuit, interface, or interface circuit can be used for reading and writing code/data, or for signal transmission or transfer.
  • the in-vehicle image processing apparatus 1300 may include one or more memories 1302, on which instructions 1304 may be stored; the instructions may be executed on the processor, so that the in-vehicle image processing apparatus 1300 performs the methods described in the above method embodiments.
  • data may also be stored in the memory.
  • instructions and/or data may also be stored in the processor.
  • the processor and the memory can be provided separately or integrated together.
  • the in-vehicle image processing apparatus 1300 may further include a transceiver 1305 .
  • the processor 1301 may be referred to as a processing unit, and controls the in-vehicle image processing apparatus 1300 .
  • the transceiver 1305 may be referred to as a transceiver unit, a transceiver, a transceiver circuit, a transceiver device, or a transceiver module, etc., and is used to implement a transceiver function.
  • the vehicle includes the vehicle-mounted surround view system shown in FIG. 3 .
  • the in-vehicle surround view system includes the in-vehicle image processing device shown in FIG. 12 or the in-vehicle image processing device shown in FIG. 13, and the in-vehicle image processing device is configured to perform steps 501 to 505 in the above embodiment corresponding to FIG. 5.
  • the embodiments of the present application further provide a computer program product. The computer program product includes computer program code, and when the computer program code is executed by a computer, the computer is caused to implement the method performed by the vehicle-mounted image processing device (or vehicle-mounted surround view system) in the above method embodiments.
  • embodiments of the present application further provide a computer-readable storage medium for storing computer programs or instructions; when executed, the computer programs or instructions cause a computer to perform the method performed by the vehicle-mounted image processing device (or vehicle-mounted surround view system) in the above method embodiments.
  • An embodiment of the present application provides a chip including a processor and a communication interface, where the processor is configured to read an instruction to execute the method performed by the vehicle-mounted image processing device (or vehicle-mounted surround view system) in the above method embodiments.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer-readable storage medium.
  • the technical solutions of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media capable of storing program code.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Processing (AREA)

Abstract

A panoramic image generation method, a vehicle-mounted image processing apparatus, and a vehicle. The method in the embodiments of the present application comprises: acquiring image information and depth information of objects around a vehicle, wherein the depth information is used to indicate coordinate point information of each point on the objects around the vehicle; acquiring an initial model; adjusting a first coordinate point on the initial model to a second coordinate point according to the depth information, so as to generate a first model; and generating a panoramic image according to the image information and the first model. In this embodiment, the position of the first coordinate point is adjusted in real time according to the depth information of the objects around the vehicle; since the first model is therefore obtained according to the actual distance between the objects and the vehicle, stitching ghosting and misalignment in the panoramic image are eliminated, so that a panoramic image consistent with the actual environment around the vehicle is obtained.

Description

A Method for Generating a Panoramic Image, a Vehicle-Mounted Image Processing Device, and a Vehicle

Technical Field
The present application relates to the technical field of vehicle-mounted surround view, and in particular, to a method for generating panoramic images, a vehicle-mounted image processing device, and a vehicle.
Background Art
Generally speaking, a traditional image-based reversing camera system installs a camera only at the rear of the vehicle; the camera can cover only a limited area around the rear, and the blind spots around the vehicle and at the front increase the risks to safe driving. To expand the driver's field of vision and improve driving safety, the vehicle-mounted surround view system came into being.
The vehicle-mounted surround view system uses cameras installed around the vehicle to reconstruct the vehicle and the surrounding scene, and performs perspective transformation and image stitching on the captured images to generate a 3D panoramic image. In the current technology, the vehicle-mounted surround view system first calibrates the installed four fisheye cameras and, after obtaining the internal and external parameters of the cameras, constructs a three-dimensional bowl-shaped fixed model (as shown in FIG. 1A); the points on the 3D fixed model are mapped through the internal and external parameters to obtain pixel coordinates, and the pixels at the corresponding positions in the fisheye images are then mapped onto the bowl-shaped fixed model to obtain the 3D panoramic image.
The current 3D panoramic image is generated based on a 3D fixed model, which is a simulated embodiment of the real objects around the vehicle. When the distance between the 3D fixed model and the virtual vehicle is not equal to the distance between the vehicle and the actual objects, stitching ghosting and misalignment occur in the stitching area of the two images, which reduces the detection rate of obstacles around the vehicle or produces detection blind spots, thereby reducing driving safety.
Summary of the Invention
Embodiments of the present application provide a method for generating a panoramic image, a vehicle-mounted image processing device, and a vehicle, which are used to eliminate stitching ghosting and misalignment in the panoramic image and obtain a panoramic image consistent with the actual environment around the vehicle.
In a first aspect, an embodiment of the present application provides a method for generating a panoramic image, applied to a vehicle-mounted image processing device. The method includes: the vehicle-mounted image processing device acquires image information and depth information of objects around the vehicle, where the depth information is used to indicate coordinate point information of each point on the objects around the vehicle; acquires an initial model, for example, a bowl-shaped 3D model or a cylindrical 3D model; adjusts a first coordinate point on the initial model to a second coordinate point according to the depth information to generate a first model, where the position of the first coordinate point is adjusted in real time according to the depth information of the objects around the vehicle, so the first model is a model obtained according to the actual distance between the objects around the vehicle and the vehicle; its shape may be irregular and changes as the distance between the vehicle and the surrounding objects changes; and acquires a panoramic image based on the image information and the first model. In this embodiment of the present application, the vehicle-mounted image processing device introduces depth information when creating the 3D model and adjusts the coordinate points on the initial model according to the depth information to obtain the first model. The vehicle-mounted image processing device generates a virtual first model corresponding to the real-world objects around the vehicle and a virtual vehicle model corresponding to the real-world vehicle; that is, when the distance between an object and the vehicle changes, the distance between the vehicle model and the coordinate points corresponding to that object on the first model also changes, which eliminates stitching ghosting and misalignment in the panoramic image and yields a 3D panoramic image consistent with the actual environment around the vehicle.
In an optional implementation, adjusting the first coordinate point on the initial model to the second coordinate point according to the depth information to generate the first model may include: the vehicle-mounted image processing device first converts the pixels in the image information into a first point cloud in the camera coordinate system according to the pixels and the depth information corresponding to the pixels; then converts the first point cloud in the camera coordinate system into a second point cloud in the world coordinate system; and finally adjusts the first coordinate point to the second coordinate point by using the coordinate points in the second point cloud to generate the first model, where the second coordinate point is obtained from the coordinate points in the second point cloud.
In an optional implementation, the image information includes images collected by multiple image sensors, and after the first point cloud in the camera coordinate system is converted into the second point cloud in the world coordinate system, the method further includes: the vehicle-mounted image processing device splices the multiple second point clouds, in the world coordinate system, of the multiple images collected by the multiple image sensors to obtain a target point cloud, where the coordinate points on the target point cloud correspond to the actual distances from the objects around the vehicle to the image sensors. Adjusting the first coordinate point to the second coordinate point by using the coordinate points in the second point cloud may include: first, the vehicle-mounted image processing device determines multiple third coordinate points within the neighborhood of the first coordinate point, where each third coordinate point is a coordinate point on the target point cloud; then, the vehicle-mounted image processing device determines the second coordinate point according to the multiple third coordinate points, that is, the second coordinate point is obtained from the multiple third coordinate points within the neighborhood of the first coordinate point; finally, the vehicle-mounted image processing device adjusts the first coordinate point to the second coordinate point. In this embodiment, the target point cloud is obtained based on the actual distances between the objects around the vehicle and the vehicle; among the large number of scattered points in the target point cloud, the vehicle-mounted image processing device determines the third coordinate points within the neighborhood of the first coordinate point on the initial model, determines the second coordinate point from them, and then adjusts the point on the initial model to the second coordinate point, so that the first model is accurately reconstructed according to the actual distances between the surrounding objects and the vehicle.
In an optional implementation, splicing the multiple second point clouds, in the world coordinate system, of the multiple images collected by the multiple image sensors to obtain the target point cloud may include: first, the vehicle-mounted image processing device matches the overlapping area of a first image and a second image collected by two adjacent image sensors to obtain a rotation matrix and a translation matrix for transforming between the point cloud of the first image and the point cloud of the second image; then, the vehicle-mounted image processing device transforms the point cloud of the second image by using the rotation matrix and the translation matrix, and splices the transformed point cloud of the second image with the point cloud of the first image. In this embodiment, because the poses of two adjacent cameras differ, the images collected by the two image sensors have slight differences in angle and orientation. By matching the overlapping areas (the same scene) of the two images, these differences can be found and balanced out through rotation and translation, and the point clouds of the images collected by the multiple image sensors can then be spliced into one whole target point cloud. Adjusting the first coordinate points on the initial model by using the points on the target point cloud allows the 3D model to be reconstructed accurately.
In an optional implementation, after the first model is generated, the method further includes: the vehicle-mounted image processing device performs interpolation and smoothing processing on the first model to obtain a second model, and further performs texture mapping on the second model according to the image information to generate the panoramic image. In this embodiment, the second model obtained after interpolation is a 3D model with a smooth surface, and performing texture mapping on this smooth second model improves the rendering effect of the first model.
In an optional implementation, in the real world the objects around the vehicle include a first object and a second object; the distance between the vehicle and the first object is a first distance, and the distance between the vehicle and the second object is a second distance. The position of the vehicle is mapped to a first position in the panoramic image, the position of the first object is mapped to a second position in the panoramic image, and the position of the second object is mapped to a third position in the panoramic image. When the first distance is greater than the second distance, the method further includes: the vehicle-mounted image processing device displays the panoramic image, in which the distance between the first position and the second position is greater than the distance between the first position and the third position. In this embodiment, when the distance between an object and the vehicle changes, the distance between the vehicle model and the coordinate points corresponding to that object on the first model also changes, so that a panoramic image consistent with the actual environment around the vehicle is obtained, stitching ghosting and misalignment are eliminated, and the detection accuracy in the seam area and the driver's experience are improved.
In a second aspect, an embodiment of the present application provides a vehicle-mounted surround view apparatus, including: an acquisition module, configured to acquire image information and depth information of objects around the vehicle, where the depth information is used to indicate coordinate point information of each point on the objects around the vehicle; and a processing module, configured to acquire an initial model, adjust a first coordinate point on the initial model to a second coordinate point according to the depth information to generate a first model, and acquire a panoramic image based on the image information and the first model.
In an optional implementation, the processing module is further specifically configured to: convert the pixels in the image information into a first point cloud in the camera coordinate system according to the pixels and the depth information corresponding to the pixels; convert the first point cloud in the camera coordinate system into a second point cloud in the world coordinate system; and adjust the first coordinate point to the second coordinate point by using the coordinate points in the second point cloud to generate the first model, where the second coordinate point is obtained from the coordinate points in the second point cloud.
In an optional implementation, the image information includes images collected by multiple image sensors, and the processing module is further specifically configured to: splice the multiple second point clouds, in the world coordinate system, of the multiple images collected by the multiple image sensors to obtain a target point cloud; determine multiple third coordinate points within the neighborhood of the first coordinate point, where each third coordinate point is a coordinate point on the target point cloud; determine the second coordinate point according to the multiple third coordinate points; and adjust the first coordinate point to the second coordinate point.
In an optional implementation, the processing module is further specifically configured to: match the overlapping area of the first image and the second image collected by two adjacent image sensors to obtain a rotation matrix and a translation matrix for transforming between the point cloud of the first image and the point cloud of the second image; and transform the point cloud of the second image by using the rotation matrix and the translation matrix, and splice the transformed point cloud of the second image with the point cloud of the first image.
In an optional implementation, the processing module is further specifically configured to: perform interpolation and smoothing processing on the first model to obtain a second model; and perform texture mapping on the second model according to the image information to generate the panoramic image.
In an optional implementation, the objects around the vehicle include a first object and a second object; when the distance between the vehicle and the first object is a first distance and the distance between the vehicle and the second object is a second distance, the position of the vehicle is mapped to a first position in the panoramic image, the position of the first object is mapped to a second position in the panoramic image, and the position of the second object is mapped to a third position in the panoramic image. When the first distance is greater than the second distance, the apparatus further includes a display module, configured to display the panoramic image, in which the distance between the first position and the second position is greater than the distance between the first position and the third position.
In a third aspect, an embodiment of the present application provides a vehicle-mounted image processing apparatus, including a processor coupled to a memory, where the memory is configured to store programs or instructions, and when the programs or instructions are executed by the processor, the vehicle-mounted image processing apparatus is caused to perform the method described in the first aspect.
In a fourth aspect, an embodiment of the present application provides a vehicle-mounted surround view system, including a sensor, a vehicle-mounted display, and the vehicle-mounted image processing apparatus described in the third aspect, where the sensor and the vehicle-mounted display are both connected to the vehicle-mounted image processing apparatus, the sensor is configured to collect image information and depth information, and the vehicle-mounted display is configured to display the panoramic image.
In a fifth aspect, an embodiment of the present application provides a vehicle, including the vehicle-mounted surround view system described in the fourth aspect.
In a sixth aspect, an embodiment of the present application provides a computer program product. The computer program product includes computer program code, and when the computer program code is executed by a computer, the computer is caused to implement the method described in any one of the first aspect.
In a seventh aspect, an embodiment of the present application provides a computer-readable storage medium for storing computer programs or instructions, which, when executed, cause a computer to perform the method described in any one of the first aspect.
In an eighth aspect, an embodiment of the present application provides a chip, including a processor and a communication interface, where the processor is configured to read instructions to perform the method described in any one of the first aspect.
Description of the Drawings

FIG. 1A is a three-dimensional schematic diagram of a 3D model;
FIG. 1B is a schematic side view of a 3D model;
FIG. 2A is a schematic diagram of ghosting in a panoramic image generated by a conventional method;
FIG. 2B is a schematic diagram of stitching misalignment in a panoramic image generated by a conventional method;
FIG. 3 is a schematic structural diagram of a vehicle-mounted surround view system in an embodiment of the present application;
FIG. 4 is a schematic diagram of the world coordinate system, the camera coordinate system, the image coordinate system, and the pixel coordinate system in an embodiment of the present application;
FIG. 5 is a schematic flowchart of the steps of a method for generating a panoramic image in an embodiment of the present application;
FIG. 6 is a schematic diagram of the visualization effect of converting an image with depth information into a point cloud in an embodiment of the present application;
FIG. 7 is a schematic diagram of matching the overlapping area in a first image with the overlapping area in a second image in an embodiment of the present application;
FIG. 8 is a schematic diagram of stitching images collected by adjacent camera sensors in an embodiment of the present application;
FIG. 9A is a three-dimensional schematic diagram of third coordinate points within the neighborhood of a first coordinate point on the initial model in an embodiment of the present application;
FIG. 9B is a schematic top view of third coordinate points within the neighborhood of a first coordinate point on the initial model in an embodiment of the present application;
FIG. 9C and FIG. 9D are schematic top views of the first model obtained by adjusting the first coordinate points on the initial model in an embodiment of the present application;
FIG. 10 is a schematic diagram of a vehicle and surrounding objects in the real world and in the panoramic image in an embodiment of the present application;
FIG. 11 is a schematic diagram of the interpolation processing performed on the first model in an embodiment of the present application;
FIG. 12 is a schematic structural diagram of an embodiment of a vehicle-mounted image processing apparatus in an embodiment of the present application;
FIG. 13 is a schematic structural diagram of another embodiment of a vehicle-mounted image processing apparatus in an embodiment of the present application.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings. The terms "first", "second", and the like in the specification, claims, and drawings of the present application are used to distinguish objects, and are not necessarily used to describe a specific order or sequence. It should be understood that terms used in this way are interchangeable under appropriate circumstances.
The vehicle-mounted surround view system can acquire images captured by multiple cameras installed on the vehicle body, and perform perspective transformation and image stitching on the acquired images to generate a panoramic image. Exemplarily, the panoramic image is a 360-degree panoramic image around the vehicle, or a 720-degree panoramic image. Usually, cameras are installed around the vehicle body in the front, rear, left, and right directions to collect image information of the objects around the vehicle; the images collected by every two adjacent cameras are stitched and then mapped onto a pre-constructed 3D model of fixed size, where the 3D model is a simulated embodiment of the real objects around the vehicle. Please refer to FIG. 1B: XOZ denotes the plane coordinate system, Z = 0 denotes the ground plane, and R denotes the radius from the Z axis to the wall of the 3D model. Because the distance between the vehicle and the surrounding objects changes while the vehicle is moving but the size of the 3D model is fixed, when the actual distance between the vehicle and a surrounding object is less than R, the images collected by two adjacent cameras overlap, producing ghosting in the image stitching area (as shown in FIG. 2A); when the distance between the vehicle and a surrounding object is greater than R, the images collected by two adjacent cameras leave a blind area, which causes stitching misalignment in the stitched area (as shown in FIG. 2B). It should be understood that a camera may also be installed on the vehicle body to capture images of the area under the vehicle, for example, a camera installed on the chassis, or a camera installed around the vehicle whose viewing angle can capture the positions the underbody will pass over.
To address the above problem, an embodiment of the present application provides a method for generating a panoramic image, which is applied to a vehicle-mounted surround view system. Please refer to FIG. 3: the vehicle-mounted surround view system includes a sensor 301, a vehicle-mounted image processing device 302, and a vehicle-mounted display 303, where the sensor 301 is connected to the vehicle-mounted image processing device 302, and the vehicle-mounted image processing device 302 is connected to the vehicle-mounted display 303. The sensor 301 is configured to collect image information and depth information of the objects around the vehicle. The vehicle-mounted image processing device 302 first acquires the depth information and creates an initial model, then adjusts the positions of the first coordinate points on the initial model according to the depth information to generate a first model, and finally performs texture mapping on the first model according to the image information to generate a panoramic image. The vehicle-mounted image processing device 302 outputs the panoramic image to the vehicle-mounted display 303, which displays it. In this embodiment of the present application, depth information is introduced when creating the 3D model: the first coordinate point on the initial model is adjusted to the second coordinate point according to the depth information to obtain the first model, so that the distance between a surrounding object (corresponding to the first model) and the vehicle (corresponding to the vehicle model) is essentially equal to the distance between the vehicle model and the first model. Because the first model is reconstructed accurately according to the depth information, stitching ghosting and misalignment in the panoramic image are eliminated, a 3D panoramic image consistent with the actual environment around the vehicle is obtained, and the detection accuracy in the seam area and the driver's experience are improved.
For a better understanding of the present application, the terms involved in the present application are explained below by way of example.
Depth information can be used to indicate the three-dimensional coordinate information of each point on a detected object. Depth information is usually also called depth; for ease of computation, in the field of machine vision depth refers to the distance of each point in space relative to the camera sensor. In this embodiment of the present application, the depth information of an object around the vehicle refers to the distance from a three-dimensional coordinate point on the object to the camera sensor, that is, to the position of the sensor on the vehicle body.
Please refer to FIG. 4, which shows the world coordinate system (O_w-X_wY_wZ_w), the camera coordinate system (O_c-X_cY_cZ_c), the image coordinate system (o-xy), and the pixel coordinate system (uv). Point P is a point in the world coordinate system, that is, a point in the real environment. Point p is the imaging point of point P; its coordinates in the image coordinate system are (x, y), and its coordinates in the pixel coordinate system are (u, v). The origin o of the image coordinate system lies on the Z-axis of the camera coordinate system, and the distance between the origin o of the image coordinate system and the origin O_c of the camera coordinate system is f, where f is the focal length of the camera. The four coordinate systems shown in FIG. 4 are described below.
The world coordinate system, also called the measurement coordinate system, is a three-dimensional coordinate system; for example, it may be a three-dimensional orthogonal coordinate system, a cylindrical coordinate system, or a spherical coordinate system. In this embodiment of the present application, the world coordinate system uses a three-dimensional orthogonal coordinate system (X_w, Y_w, Z_w). The spatial positions of the camera, the vehicle, and the objects around the vehicle can be described in the world coordinate system. The position of the world coordinate system is determined according to the actual situation. In this application, for convenience of processing, the world coordinate system is centered on the vehicle (the vehicle is located at the origin O_w), the Z_w axis is perpendicular to the ground, and the X_w axis represents the forward direction of the vehicle; the coordinate system moves with the vehicle. The unit of the world coordinate system may be meters (m).
The camera coordinate system is a three-dimensional rectangular coordinate system (X_c, Y_c, Z_c). Its origin is the optical center of the lens; the X_c and Y_c axes are parallel to the two sides of the image plane, and the Z_c axis is the optical axis of the lens, perpendicular to the image plane. The unit of the camera coordinate system may be meters (m).
The pixel coordinate system is obtained by transforming the coordinates of the camera coordinate system once, yielding the commonly used planar pixel coordinates (with z = 1 at this point). The pixel coordinate system is measured in pixels, and its coordinate origin is at the upper-left corner of the image.
The relationship between the image coordinate system and the pixel coordinate system may be: the origin of the image coordinate system is the midpoint of the pixel coordinate system. The unit of the image coordinate system may be millimeters (mm).
The transformation between the world coordinate system and the camera coordinate system is a rigid transformation, that is, it changes only the spatial position (translation) and orientation (rotation) of an object, not its shape; such a transformation can be expressed by a rotation and a translation. There is no rotation between the image coordinate system and the pixel coordinate system; only their coordinate origins differ.
Homogeneous coordinates represent an n-dimensional vector by an (n+1)-dimensional vector and refer to a coordinate system used in projective geometry, just as Cartesian coordinates are used in Euclidean geometry. For example, the homogeneous coordinates of a two-dimensional point (x, y) are written as (hx, hy, h). The homogeneous representation of a vector is not unique: different values of h represent the same point; for example, the homogeneous coordinates (8, 4, 2) and (4, 2, 1) both represent the two-dimensional point (4, 2). The main purpose of introducing homogeneous coordinates is to combine the multiplication and addition in matrix operations: they provide a way to transform a point set in two-, three-, or even higher-dimensional space from one coordinate system to another by matrix operations, which is convenient for affine geometric transformations in computer graphics. It can be understood that with homogeneous coordinates, rotation and translation can be described by a single matrix, and the rotation and translation of an object can be expressed by matrix multiplication.
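As a standard illustration (common usage rather than anything specific to this application), a rotation R and a translation t acting on a 3D point can be written as a single matrix product in homogeneous coordinates:

$$\begin{bmatrix} X' \\ Y' \\ Z' \\ 1 \end{bmatrix} = \begin{bmatrix} R_{3\times 3} & t_{3\times 1} \\ 0_{1\times 3} & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}$$

so that one matrix multiplication applies both the rotation and the translation.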
The internal parameters of the camera are parameters related to the characteristics of the camera itself, such as the focal length and pixel size of the camera.
The external parameters of the camera are parameters in the world coordinate system, such as the position and rotation direction of the camera.
A point cloud is a massive set of points expressing the spatial distribution of a target and the characteristics of the target surface under the same spatial reference system; after the spatial coordinates of each sampling point on the object surface are obtained, the resulting set of points is called a "point cloud".
A texture is pixel feature information on a two-dimensional image; mapping a texture onto an object surface in a specific way is also called texture mapping.
Please refer to FIG. 5. An embodiment of the present application provides a method for generating a panoramic image. The method may be performed by the vehicle-mounted surround view system in FIG. 3, by the vehicle-mounted image processing device in FIG. 3, or by a processor or chip in the vehicle-mounted image processing device. In this embodiment of the present application, the method is described with the vehicle-mounted surround view system as the executing entity.
Step 501: the vehicle-mounted surround view system acquires image information and depth information of objects around the vehicle.
Multiple cameras are arranged around the vehicle; for example, the cameras are wide-angle cameras (such as fisheye cameras), and at least one fisheye camera is arranged in each of the front, rear, left, and right directions of the vehicle. The fisheye cameras collect image information around the vehicle in real time. Because a fisheye camera is a special ultra-wide-angle lens whose construction imitates the imaging of a fish's eye, it can independently achieve large-angle shooting, with a viewing angle of up to 180 degrees, and can monitor objects over a wide scene. With one fisheye camera in each direction, four fisheye cameras are enough to collect panoramic images around the vehicle, which saves on the number of cameras and reduces cost. Of course, in this embodiment of the present application the cameras are not limited to wide-angle cameras; ordinary cameras may also be used, with the viewing angle enlarged by increasing their number, for example, two or three cameras in each direction around the vehicle. Although this increases the number of cameras, the image information collected by ordinary cameras is not distorted, so the collected images are of better quality. In this embodiment of the present application, the cameras are described by taking fisheye cameras as an example, with four fisheye cameras, that is, one fisheye camera in each of the front, rear, left, and right directions of the vehicle.
The vehicle-mounted surround view system may acquire the depth information of objects around the vehicle in the following two implementations. In the first implementation, the sensor 301 is an image sensor in a camera. The image sensor collects image information of the objects around the vehicle and transmits it to the vehicle-mounted image processing device, which obtains the depth information from the image information. Exemplarily, the vehicle-mounted image processing device performs depth estimation on the image information collected by the fisheye cameras to obtain the depth information; methods for obtaining the depth information include but are not limited to monocular depth estimation and binocular depth estimation. For example, monocular depth estimation may use image data from a single viewpoint as the input of a trained depth model, and the depth model outputs the depth corresponding to each pixel in the image; that is, a deep-learning-based monocular estimation method maps the image information into a depth map according to the relationships among pixel values. Binocular depth estimation uses a binocular camera to capture left and right viewpoint images of the same scene, applies a stereo matching algorithm to obtain a disparity map, and then obtains the depth map. A depth map, also called a range image, is an image in which the distance (depth) from the image sensor (or camera) to each point in the scene serves as the pixel value; for example, warmer tones in a depth map indicate smaller depth values and colder tones indicate larger depth values. In this first implementation, the image sensor alone can acquire both the scene images around the vehicle and the depth information of the surrounding objects, without adding other components for acquiring depth information, which saves cost.
In the second implementation, the sensor 301 further includes a depth sensor configured to collect the depth information of the objects around the vehicle. Depth sensors include but are not limited to millimeter-wave radar and lidar. For example, lidar obtains the depth information of objects around the vehicle by emitting laser pulses into space at certain time intervals and recording, for each scanning point, the time elapsed from the radar to the object in the measured scene and back after reflection; the distance between the object surface and the radar, that is, the depth information of the objects around the vehicle, is calculated from this elapsed time. As another example, millimeter-wave radar obtains depth information by emitting a high-frequency continuous signal that is reflected by objects around the vehicle; the receiver receives the reflected signal, and there is a time interval t between transmitting the signal and receiving the reflection, with t = 2d/c, where d is the distance from the radar to the object and c is the speed of light. The two methods of acquiring depth information described above are merely exemplary and do not limit the specific method. In this second implementation, the depth sensor collects the depth information of the objects around the vehicle and transmits it to the vehicle-mounted image processing device, which saves the step of depth estimation from image information and thus the computing power of the device. In this embodiment of the present application, the method of acquiring depth information in the first implementation is used as an example for description.
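A minimal numeric illustration of the time-of-flight relation t = 2d/c above (the constant and the function name are ours, not part of this application):

```python
C = 299_792_458.0  # speed of light, m/s

def tof_depth(round_trip_time_s: float) -> float:
    """Depth from a radar/lidar round-trip time, d = c * t / 2."""
    return C * round_trip_time_s / 2.0

# A round trip of about 66.7 ns corresponds to an object roughly 10 m away.
print(tof_depth(66.7e-9))  # ~10.0
```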
Step 502: the vehicle-mounted surround view system acquires an initial model.
Exemplarily, the vehicle-mounted image processing device generates an initial model, which is, for example, a bowl-shaped 3D model or a cylindrical 3D model. Optionally, the initial model may be a smooth solid model, or it may be a scatter-point model.
It should be noted that there is no ordering constraint between step 501 and step 502: step 502 may be executed before step 501, after step 501, or in parallel with step 501.
Step 503: the vehicle-mounted surround view system adjusts the first coordinate point on the initial model to the second coordinate point according to the depth information to obtain the first model.
First, refer to FIG. 6, a schematic diagram of a point cloud. Based on the acquired depth information, the vehicle-mounted image processing device converts the pixel coordinates in the image information into first point cloud coordinates in the camera coordinate system, using each pixel coordinate together with the depth corresponding to it. The image information of the vehicle-surrounding scene obtained in step 501 includes a plurality of pixels; for example, the plurality of pixels includes a first pixel (u_i, v_i). Step 501 also provides the depth information of the vehicle-surrounding scene. The vehicle-mounted image processing device converts a point in the pixel coordinate system into point cloud coordinates in the camera coordinate system according to the first pixel and its corresponding depth; the point cloud coordinates (x_i, y_i, z_i) in the camera coordinate system are given by formula (1) below.
$$x_i = \frac{(u_i - u_0)\,z_i}{f_x}, \qquad y_i = \frac{(v_i - v_0)\,z_i}{f_y} \tag{1}$$
where u_0 and v_0 are the coordinates of the optical center in the image coordinate system, f_x is the focal length in the horizontal direction, f_y is the focal length in the vertical direction, and z_i is the depth corresponding to the pixel (u_i, v_i).
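Formula (1) is simple enough to apply directly per pixel; a minimal NumPy sketch (the function name and argument layout are illustrative, not taken from this application):

```python
import numpy as np

def backproject(u, v, z, fx, fy, u0, v0):
    """Apply formula (1): map pixel (u, v) with depth z to camera-frame (x, y, z).

    u, v, z may be scalars or NumPy arrays of the same shape, so a whole
    depth map can be converted in one call.
    """
    x = (u - u0) * z / fx
    y = (v - v0) * z / fy
    return np.stack([x, y, z], axis=-1)
```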
Then, the vehicle-mounted image processing device converts the first point cloud coordinates (x_i, y_i, z_i) in the camera coordinate system into second point cloud coordinates (X_i, Y_i, Z_i) in the world coordinate system. It should be noted that, in this embodiment, to distinguish the point cloud coordinates in the camera coordinate system from those in the world coordinate system, the former are called "first point cloud coordinates" and the latter "second point cloud coordinates". Specifically, the conversion from the first point cloud coordinates to the second point cloud coordinates is given by formulas (2) and (3) below.
$$\begin{pmatrix} X_i \\ Y_i \\ Z_i \end{pmatrix} = r_j^{-1}\left(\begin{pmatrix} x_i \\ y_i \\ z_i \end{pmatrix} - t_j\right) \tag{2}$$
In homogeneous coordinates, formula (2) can be expressed as formula (3) below.
$$\begin{pmatrix} X_i \\ Y_i \\ Z_i \\ 1 \end{pmatrix} = \begin{pmatrix} r_j & t_j \\ 0^{\mathrm{T}} & 1 \end{pmatrix}^{-1} \begin{pmatrix} x_i \\ y_i \\ z_i \\ 1 \end{pmatrix} \tag{3}$$
In formulas (2) and (3), r_j is the rotation matrix of the coordinate transformation corresponding to the j-th camera; for example, j takes the values 1, 2, 3, 4, where the 1st camera is the front image sensor, the 2nd the left image sensor, the 3rd the rear image sensor, and the 4th the right image sensor. t_j is the translation of the coordinate transformation corresponding to the j-th camera, and (*)^{-1} denotes matrix inversion. The matrix

$$\begin{pmatrix} r_j & t_j \\ 0^{\mathrm{T}} & 1 \end{pmatrix}$$

is the extrinsic parameter matrix of the j-th camera, and (X_i, Y_i, Z_i) are the coordinates, in the world coordinate system, of the three-dimensional point corresponding to the pixel (u_i, v_i).
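A minimal sketch of formulas (2)/(3), assuming the extrinsics r_j and t_j are already available from calibration as a 3x3 NumPy array and a length-3 vector:

```python
import numpy as np

def camera_to_world(points_cam: np.ndarray, r_j: np.ndarray, t_j: np.ndarray) -> np.ndarray:
    """Apply formula (2): X_world = r_j^{-1} (x_cam - t_j) for every point.

    points_cam: (N, 3) first point cloud; r_j: (3, 3) rotation; t_j: (3,) translation.
    """
    r_inv = np.linalg.inv(r_j)           # the (*)^{-1} in formulas (2)/(3)
    return (points_cam - t_j) @ r_inv.T  # row-wise r_inv @ (p - t)
```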
Next, the vehicle-mounted image processing device splices (connects) the four point clouds, in the world coordinate system, of the image information collected by the four image sensors, obtaining one complete panoramic point cloud (the target point cloud). Exemplarily, see steps (a) and (b) below.
(a) The vehicle-mounted image processing device matches the point clouds of the overlapping region of the images (a first image and a second image) collected by two adjacent image sensors, thereby obtaining the rotation matrix (denoted "R") and the translation matrix (denoted "T") that transform between the point cloud of the first image and the point cloud of the second image. It should be understood, referring to FIG. 7, that adjacent image sensors (the front camera and the left camera, the left camera and the rear camera, the rear camera and the right camera, the right camera and the front camera) capture images with an overlapping region. The first image sensor collects the first image, and the second image sensor collects the second image. The point cloud data of the overlapping region in the first image is denoted "P", and that of the overlapping region in the second image is denoted "Q". A pair of rotation matrix R and translation matrix T is found with the objective function of formula (4) below. It should be understood that, because the two cameras have different poses, the images they collect differ slightly in angle and orientation; this difference can be recovered from the overlapping region (the same scene) of the two images, thereby eliminating the resulting error.
$$\min_{R_h,\,T_h}\; \sum_i \left\| f(q_i) - \bigl(R_h\, f(p_i) + T_h\bigr) \right\|^2 \tag{4}$$
where f(q_i) is the three-dimensional coordinate of the i-th point in point cloud data Q, and f(p_i) is the three-dimensional coordinate of the i-th point in point cloud data P; R_h is the rotation matrix and T_h the translation matrix applied to point cloud data P; h ranges from 1 to j - 1, where j is the number of image sensors. For example, with 4 image sensors, h takes the values 1, 2, 3. The four image sensors are pairwise adjacent, and in total 3 pairs of R and T need to be found: "R_1 and T_1", "R_2 and T_2", and "R_3 and T_3". As can be seen from formula (4), R x f(p_i) + T is the three-dimensional coordinate of a point of cloud P after rotation and translation; formula (4) finds the pair R_h and T_h that minimizes the objective, i.e., the residual between the rotated and translated point cloud data P and the point cloud data Q, so that the two overlapping-region point clouds P and Q match.
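The application does not name a solver for formula (4). When the overlap correspondences are known, the standard SVD-based (Kabsch) closed form below minimizes the same objective; with unknown correspondences this step is typically iterated inside an ICP-style loop:

```python
import numpy as np

def fit_rotation_translation(p: np.ndarray, q: np.ndarray):
    """Closed-form minimizer of formula (4) for known correspondences.

    p, q: (N, 3) arrays where p[i] and q[i] sample the same scene point in
    the two overlapping regions. Returns (R, T) with q ~= (R @ p.T).T + T.
    """
    p_mean, q_mean = p.mean(axis=0), q.mean(axis=0)
    h = (p - p_mean).T @ (q - q_mean)          # 3x3 cross-covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))     # guard against a reflection
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = q_mean - r @ p_mean
    return r, t
```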
It should be noted that, to distinguish the point cloud of an original image from the point cloud obtained after rotation and translation, this embodiment calls the original image's point cloud "point cloud A" and the transformed point cloud "point cloud B". To distinguish the images collected by different image sensors, the image collected by the front image sensor is called "image A", the image collected by the left image sensor "image B", the image collected by the rear image sensor "image C", and the image collected by the right image sensor "image D".
Exemplarily, referring to FIG. 8, the front image sensor collects image A and the left image sensor collects image B; the overlapping region of image A and image B is the image of the objects to the left front of the vehicle. In formula (4), f(q_i) is the point cloud data of the overlapping region in point cloud A of image A, and f(p_i) is the point cloud data of the overlapping region in point cloud A of image B; formula (4) then yields R_1 and T_1.
The vehicle-mounted image processing device needs to match the overlapping region in image A with the overlapping region in image B; when formula (4) attains its minimum, the two overlapping regions match. The purpose of matching them is to find a pair R_1 and T_1 with which the point cloud of image A or of image B can be transformed, so that the point clouds of the images collected by these two adjacent image sensors can be spliced.
(b) After obtaining R_h and T_h with formula (4), the vehicle-mounted image processing device splices the point clouds of the images collected by every two adjacent image sensors to obtain one complete panoramic point cloud. Exemplarily, the vehicle-mounted image processing device uses formula (4) to match the overlapping region of point cloud A of image A with the overlapping region of point cloud A of image B, finding the pair R_1 and T_1. After obtaining R_1 and T_1, the device may first fix point cloud A of image A collected by the front image sensor. Then, applying the rotation and translation R_1 and T_1 to point cloud A of image B transforms the entire point cloud, in the world coordinate system, of image B collected by the left image sensor, yielding point cloud B of image B, which is then spliced with point cloud A of image A.
Similarly, the rear image sensor collects image C, and the vehicle-mounted image processing device finds a pair R_2 and T_2 with formula (4) from the overlapping region in point cloud B of image B and the overlapping region in point cloud A of image C, where f(q_i) is the point cloud data of the overlapping region in point cloud B of image B and f(p_i) is the point cloud data of the overlapping region in point cloud A of image C. The device matches these two overlapping regions with formula (4), obtains R_2 and T_2, fixes point cloud B of image B, and applies the rotation and translation R_2 and T_2 to point cloud A of image C, transforming the entire point cloud, in the world coordinate system, of image C collected by the rear image sensor into point cloud B of image C. Further, the device splices point cloud B of image C with point cloud B of image B. Similarly, the right image sensor collects image D, and the device finds a pair R_3 and T_3 with formula (4) from the overlapping region in point cloud B of image C and the overlapping region in point cloud A of image D, where f(q_i) is the point cloud data of the overlapping region in point cloud B of image C and f(p_i) is the point cloud data of the overlapping region in point cloud A of image D. With R_3 and T_3 obtained from formula (4), the device fixes point cloud B of image C and applies the rotation and translation R_3 and T_3 to point cloud A of image D, transforming the entire point cloud, in the world coordinate system, of image D collected by the right camera into point cloud B of image D. Further, the device splices point cloud B of image D with point cloud B of image C, and then splices point cloud B of image D with point cloud A of image A, obtaining the target point cloud. The target point cloud is the entire point cloud obtained after the vehicle-mounted image processing device splices point cloud A of image A, point cloud B of image B, point cloud B of image C, and point cloud B of image D. In this embodiment, the overlapping region (the same scene) of the images collected by two image sensors is matched, so the difference in angle and orientation between the two images can be found; rotation and translation balance out this difference, so the point clouds of the images collected by the multiple image sensors can be spliced into one complete whole, and the 3D model can be rebuilt by adjusting the first coordinate points on the initial model with the points of the target point cloud.
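A sketch of the splicing chain under the convention just described, where each pair (R_h, T_h) was estimated against the previous, already-aligned cloud, so one application per cloud lands it in the common frame of image A:

```python
import numpy as np

def stitch_clouds(clouds, transforms):
    """Assemble the target point cloud from the four per-camera clouds.

    clouds: [cloud_A, cloud_B, cloud_C, cloud_D], each (N_k, 3) in world
    coordinates; transforms: [(R1, T1), (R2, T2), (R3, T3)] from formula (4).
    """
    aligned = [clouds[0]]                  # point cloud A of image A stays fixed
    for cloud, (r, t) in zip(clouds[1:], transforms):
        aligned.append(cloud @ r.T + t)    # rotate and translate the whole cloud
    return np.vstack(aligned)              # the spliced target point cloud
```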
Finally, the vehicle-mounted surround view system adjusts the first coordinate point on the initial model to the second coordinate point and generates the first model. The second coordinate point is obtained from a plurality of third coordinate points within the neighborhood of the first coordinate point, where a third coordinate point is a coordinate point on the target point cloud. Referring to FIG. 9A, in the world coordinate system the Z axis is perpendicular to the ground; then for a first coordinate point (X_a, Y_a, Z_a) on the initial model in the world coordinate system, Z_a represents the height above the ground. Among the large number of scatter points of the target point cloud, the vehicle-mounted image processing device determines a plurality of third coordinate points within the neighborhood of the first coordinate point; for example, a third coordinate point is a three-dimensional point at height Z_a, and the set of three-dimensional points at height Z_a is M. For example, M includes the three-dimensional points (X_1, Y_1, Z_1), (X_2, Y_2, Z_2), (X_3, Y_3, Z_3), and so on, where Z_1, Z_2, and Z_3 are all equal to Z_a. The vehicle-mounted image processing device adjusts the values of X_a and Y_a according to the X values (such as X_1, X_2, X_3) and Y values (such as Y_1, Y_2, Y_3) of the three-dimensional points in the set M. It should be understood that the X and Y values of the points in M represent the actual distance between the vehicle and the scenery around it. The three-dimensional points in M are coordinate points within the neighborhood of the first coordinate point (X_a, Y_a, Z_a); referring to FIG. 9B, the "neighborhood" of the first coordinate point is the intersection of a "first range" and a "second range", which are described by example as follows. The "first range" can be understood as an angular range, that is, the range between two rays through the center point of the initial model: let ray a run from the center point to the first coordinate point; rotating ray a counterclockwise about the center by alpha degrees gives ray b, and rotating ray a clockwise about the center by alpha degrees gives ray c; the range between ray b and ray c is the first range. For example, alpha is an angle of at most 10 degrees, and its size can be set according to actual needs. The "second range" can be understood as a radial range: exemplarily, with the center point of the initial model as the circle center and R1 the distance from the first coordinate point to the center, the second range is the annulus covered between two concentric circles of radii R2 and R3, where R2 is smaller than R1 and R3 is larger than R1, with R1 - R2 = g1 and R3 - R1 = g2; g1 and g2 may be equal or unequal and can be set from empirical or experimental values. Further, the vehicle-mounted image processing device determines the second coordinate point from the plurality of third coordinate points. Exemplarily, it adjusts X_a according to the X values of the three-dimensional points in M and adjusts Y_a according to their Y values. The point to be adjusted is the first coordinate point (X_a, Y_a, Z_a), and the adjusted second coordinate point is (X_a', Y_a', Z_a); the adjusted X_a' and Y_a' are given by formulas (5) and (6) below.
$$X_a' = \frac{1}{n} \sum_{X_b \in \delta(X_a)} X_b \tag{5}$$
where X_a' is the adjusted X value, X_b ranges over the points in the neighborhood of X_a, n is the number of points in the neighborhood, delta(*) denotes a neighborhood, and delta(X_a) is the neighborhood of X_a, i.e., the set of X values of the three-dimensional points in M; for example, delta(X_a) includes X_1, X_2, and X_3.
$$Y_a' = \frac{1}{n} \sum_{Y_b \in \delta(Y_a)} Y_b \tag{6}$$
where Y_a' is the adjusted Y value, Y_b ranges over the points in the neighborhood of Y_a, n is the number of points in the neighborhood, delta(*) denotes a neighborhood, and delta(Y_a) is the neighborhood of Y_a, i.e., the set of Y values of the three-dimensional points in M; for example, delta(Y_a) includes Y_1, Y_2, and Y_3.
Referring to FIGS. 9C and 9D, the three-dimensional point on the initial model is (X_a, Y_a, Z_a); its position is adjusted, and according to formulas (5) and (6) it is moved to (X_a', Y_a', Z_a'), which gives the first model. It should be understood that this embodiment only takes adjusting one three-dimensional point (X_a, Y_a, Z_a) on the initial model as an example; the other first coordinate points can be adjusted with the same method and are not described one by one. In practical applications, neither the number nor the positions of the first coordinate points are limited, and the positions of the adjusted second coordinate points are not limited either: for example, referring to FIG. 9C, the adjusted second coordinate point (X_a', Y_a', Z_a') may lie "outside" the initial model, and referring to FIG. 9D, it may lie "inside" the initial model. The vehicle-mounted image processing device adjusts the positions of the first coordinate points in real time according to the depth information of the objects around the vehicle; because these positions are adjusted in real time, the shape of the initial model changes, i.e., the first model is a model obtained from the actual distances between the objects around the vehicle and the vehicle. The shape of the first model may be irregular, and it changes as the distances between the vehicle and the surrounding objects change. In this embodiment, the target point cloud is a point cloud obtained from the actual distances of the surrounding objects from the vehicle; among its large number of scatter points, the vehicle-mounted image processing device determines the plurality of third coordinate points within the neighborhood of a first coordinate point on the initial model, determines the second coordinate point from them, and then moves the point on the initial model to the second coordinate point, so the device accurately rebuilds the first model from the actual distances between the surrounding objects and the vehicle.
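A sketch of the neighborhood test and the averaging of formulas (5)/(6) for a single vertex; the height tolerance, the angle alpha, and the annulus bounds g1/g2 are illustrative stand-ins for values that the text above leaves to tuning:

```python
import numpy as np

def adjust_vertex(vertex, target_cloud, z_tol=0.05, alpha_deg=10.0, g1=0.3, g2=0.3):
    """Move one model vertex per formulas (5)/(6) using its neighborhood.

    vertex: (3,) array (X_a, Y_a, Z_a); target_cloud: (N, 3) target point cloud.
    z_tol approximates "same height Z_a"; alpha_deg and (R1 - g1, R1 + g2)
    realize the first (angular) and second (annulus) ranges described above.
    """
    xa, ya, za = vertex
    x, y, z = target_cloud.T
    r1 = np.hypot(xa, ya)                      # vertex distance from the center
    radius = np.hypot(x, y)
    bearing = np.arctan2(y, x) - np.arctan2(ya, xa)
    bearing = np.abs((bearing + np.pi) % (2 * np.pi) - np.pi)
    mask = (
        (np.abs(z - za) < z_tol)               # points at (about) height Z_a
        & (bearing < np.radians(alpha_deg))    # first (angular) range
        & (radius > r1 - g1) & (radius < r1 + g2)  # second (annulus) range
    )
    if not mask.any():
        return np.asarray(vertex, float)       # no neighbors: keep the vertex
    # Formulas (5) and (6): average the neighbors' X and Y values; Z_a is kept.
    return np.array([x[mask].mean(), y[mask].mean(), za])
```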
Step 504: the vehicle-mounted surround view system acquires a panoramic image based on the image information and the first model.
The vehicle-mounted surround view system texture-maps the first model according to the image information to generate a 3D panoramic image (also called a "panoramic image").
In step 501, the four image sensors collect images in four directions around the vehicle and transmit them to the vehicle-mounted image processing device, which can apply them in the form of texture mapping. Exemplarily, using the camera intrinsic and extrinsic parameters obtained by prior calibration, the vehicle-mounted image processing device maps the three-dimensional coordinates of the optimized first model through the extrinsic and intrinsic parameters to obtain two-dimensional pixel coordinates, and then fetches the corresponding texture pixels from the images collected by the fisheye cameras (also called "texture images"). The coordinate points in the image are thereby made to correspond to points on the surface of the first model, i.e., each pixel of the texture image knows which point on the first model it renders and colors; in this way the entire texture image is draped over the first model, yielding the 3D panoramic image.
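A sketch of the per-vertex texture lookup, simplified to a pinhole projection; the application itself projects with calibrated fisheye intrinsics, so this is a structural illustration rather than the actual fisheye mapping:

```python
import numpy as np

def texture_lookup(vertex_world, image, r_j, t_j, fx, fy, u0, v0):
    """Fetch the texture pixel for one model vertex (pinhole simplification).

    r_j, t_j are camera j's extrinsics (world -> camera frame); fx, fy, u0, v0
    are its intrinsics; image is that camera's texture image.
    """
    x, y, z = r_j @ np.asarray(vertex_world, float) + t_j
    if z <= 0:
        return None                     # the vertex is behind this camera
    u = int(round(fx * x / z + u0))     # camera frame -> pixel coordinates
    v = int(round(fy * y / z + v0))
    h, w = image.shape[:2]
    if 0 <= v < h and 0 <= u < w:
        return image[v, u]              # color painted onto the model
    return None
```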
Step 505: the vehicle-mounted surround view system outputs the panoramic image.
The vehicle-mounted image processing device outputs the 3D panoramic image to the vehicle-mounted display, and the vehicle-mounted display displays the 3D panoramic image.
In the embodiments of the present application, the vehicle-mounted image processing device introduces depth information when creating the 3D model: it adjusts the coordinate points on the initial model according to the depth information to obtain the first model. The device generates the virtual first model in correspondence with the real-world objects around the vehicle and generates a virtual vehicle model in correspondence with the real-world vehicle. For example, referring to FIG. 10, in the real world the objects around the vehicle include a first object and a second object; the distance between the vehicle and the first object is a first distance, and the distance between the vehicle and the second object is a second distance. The position of the vehicle maps to a first position in the panoramic image, the position of the first object maps to a second position in the panoramic image, and the position of the second object maps to a third position in the panoramic image. When the first distance is greater than the second distance, then in the panoramic image the distance between the first position and the second position is greater than the distance between the first position and the third position. That is, when the distance between a surrounding object and the vehicle changes, the distance between the vehicle model and the coordinate points corresponding to that object on the first model changes accordingly, so a 3D panoramic image consistent with the actual environment around the vehicle is obtained, the ghosting and misalignment at stitching seams are eliminated, and both the detection accuracy in the seam regions and the driver experience are improved.
Optionally, after step 503 and before step 504, the method further includes the following step: the vehicle-mounted image processing device performs interpolation smoothing on the scatter points of the first model to obtain a second model, the smoothed model; in step 504, the vehicle-mounted image processing device then texture-maps the second model according to the image information to generate the 3D panoramic image.
The first model consists of a large number of scatter points; if the vehicle-mounted image processing device simply connected these points, the resulting model could be bumpy, and texture-mapping a bumpy 3D model would give the generated panoramic image a poor visual effect. The device therefore interpolates the first model. Referring to FIG. 11, a schematic diagram of the interpolation, a continuous function is fitted over the 3D scatter model by interpolation, so that the overall surface of the 3D model passes through the scatter points on the model. The second model obtained after interpolation is a 3D model with a smooth surface, and texture-mapping this smooth second model improves the rendering of the 3D model. The interpolation in the embodiments of the present application may use spline interpolation, bicubic interpolation, discrete smooth interpolation, or similar methods; any interpolation that smooths the surface of the second model suffices, and the specific method is not limited.
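As one concrete option among the interpolation methods listed above, a sketch using SciPy's bicubic griddata over a local patch of scatter points; it treats the patch as a height field z = f(x, y), whereas a full bowl-shaped model would first need a suitable parameterization (for example angle/height):

```python
import numpy as np
from scipy.interpolate import griddata

def smooth_patch(scatter_xyz: np.ndarray, grid_res: int = 64):
    """Densify a local patch of the first model with bicubic interpolation.

    scatter_xyz: (N, 3) scatter points of the patch. Returns a dense grid
    whose surface passes smoothly through the input points.
    """
    x, y, z = scatter_xyz.T
    xi = np.linspace(x.min(), x.max(), grid_res)
    yi = np.linspace(y.min(), y.max(), grid_res)
    gx, gy = np.meshgrid(xi, yi)
    gz = griddata((x, y), z, (gx, gy), method="cubic")  # smooth surface fit
    return gx, gy, gz
```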
Referring to FIG. 12, an embodiment of the present application provides a vehicle-mounted image processing apparatus configured to perform the method performed by the vehicle-mounted image processing device in the above method embodiments. The vehicle-mounted image processing apparatus 1200 includes an acquisition module 1201 and a processing module 1202; optionally, the apparatus further includes a display module 1203.

The acquisition module 1201 is configured to acquire image information and depth information of objects around the vehicle, where the depth information is used to indicate the coordinate point information of each point on the surrounding objects.

The processing module 1202 is configured to acquire an initial model; adjust the first coordinate point on the initial model to the second coordinate point according to the depth information to generate a first model; and acquire a panoramic image based on the image information and the first model.

Optionally, the processing module 1202 is a processor, which may be a general-purpose processor, a special-purpose processor, or the like. Optionally, the processor includes a transceiver unit for implementing receiving and sending functions; for example, the transceiver unit is a transceiver circuit, an interface, or an interface circuit. The transceiver circuits, interfaces, or interface circuits used for receiving and sending may be deployed separately or integrated together. The above transceiver circuit, interface, or interface circuit is used for reading and writing code or data, or for transmitting or conveying signals.

Optionally, the acquisition module 1201 may be replaced by a transceiver module; optionally, the transceiver module is a communication interface. Optionally, the communication interface is an input/output interface or a transceiver circuit. The input/output interface includes an input interface and an output interface. The transceiver circuit includes an input interface circuit and an output interface circuit.

Optionally, the transceiver module is configured to receive the image information and depth information of the objects around the vehicle from a sensor.

Optionally, the acquisition module 1201 may also be replaced by the processing module 1202.

Further, the acquisition module 1201 is configured to perform step 501 in the embodiment corresponding to FIG. 5. The processing module 1202 is configured to perform steps 502, 503, 504, and 505 in the embodiment corresponding to FIG. 5. Optionally, the display module 1203 is configured to perform step 505 in the embodiment corresponding to FIG. 5, namely to display the panoramic image.

Specifically, in an optional implementation, the processing module 1202 is further specifically configured to: convert the pixels in the image information into a first point cloud in the camera coordinate system according to the pixels and the depth information corresponding to the pixels; convert the first point cloud in the camera coordinate system into a second point cloud in world coordinates; and adjust the first coordinate point to the second coordinate point by means of the coordinate points in the second point cloud to generate the first model, the second coordinate point being obtained according to the coordinate points in the second point cloud.

In an optional implementation, the image information includes images collected by a plurality of image sensors, and the processing module 1202 is further specifically configured to: splice the multiple second point clouds, in the world coordinate system, of the multiple images collected by the plurality of image sensors to obtain a target point cloud; determine a plurality of third coordinate points within the neighborhood of the first coordinate point, the third coordinate points being coordinate points on the target point cloud; determine the second coordinate point according to the plurality of third coordinate points; and adjust the first coordinate point to the second coordinate point.

In an optional implementation, the processing module 1202 is further specifically configured to: match the overlapping region of a first image and a second image collected by two adjacent image sensors to obtain the rotation matrix and translation matrix that transform between the point cloud of the first image and the point cloud of the second image; transform the point cloud of the second image using the rotation matrix and translation matrix; and splice the transformed point cloud of the second image with the point cloud of the first image.

In an optional implementation, the processing module 1202 is further specifically configured to: perform interpolation smoothing on the first model to obtain a second model; and texture-map the second model according to the image information to generate a panoramic image.

In an optional implementation, the objects around the vehicle include a first object and a second object; when the distance between the vehicle and the first object is a first distance and the distance between the vehicle and the second object is a second distance, the position of the vehicle maps to a first position in the panoramic image, the position of the first object maps to a second position in the panoramic image, and the position of the second object maps to a third position in the panoramic image. When the first distance is greater than the second distance, the display module 1203 is further configured to display the panoramic image, in which the distance between the first position and the second position is greater than the distance between the first position and the third position.
Referring to FIG. 13, an embodiment of the present application provides a vehicle-mounted image processing apparatus configured to perform steps 501 to 505 in the method embodiment corresponding to FIG. 5; for details, see the descriptions in the above method embodiments. The vehicle-mounted image processing apparatus may include one or more processors 1301, which may also be called processing units and can implement certain control functions. The processor 1301 may be a general-purpose or special-purpose processor; for example, the processor 1301 is a graphics processing unit (GPU). A central processing unit can be used to control the vehicle-mounted image processing apparatus, execute software programs, and process the data of the software programs.

In an optional design, the processor 1301 may also store instructions 1303, which can be run by the processor so that the vehicle-mounted image processing apparatus 1300 performs the methods described in the above method embodiments.

In another optional design, the processor 1301 may include a transceiver unit for implementing receiving and sending functions. For example, the transceiver unit may be a transceiver circuit, an interface, or an interface circuit. The transceiver circuits, interfaces, or interface circuits used for receiving and sending may be separate or integrated together. The above transceiver circuit, interface, or interface circuit may be used for reading and writing code/data, or for transmitting or conveying signals.

The vehicle-mounted image processing apparatus 1300 may include one or more memories 1302, on which instructions 1304 may be stored; the instructions can be run on the processor so that the vehicle-mounted image processing apparatus 1300 performs the methods described in the above method embodiments. Optionally, data may also be stored in the memory. Optionally, instructions and/or data may also be stored in the processor. The processor and the memory may be provided separately or integrated together.

Optionally, the vehicle-mounted image processing apparatus 1300 may further include a transceiver 1305. The processor 1301 may be called a processing unit and controls the vehicle-mounted image processing apparatus 1300. The transceiver 1305 may be called a transceiver unit, a transceiver, a transceiver circuit, a transceiver device, a transceiver module, or the like, and is used to implement the transceiver function.
An embodiment of the present application further provides a vehicle, which includes the vehicle-mounted surround view system shown in FIG. 3. The vehicle-mounted surround view system includes the vehicle-mounted image processing apparatus shown in FIG. 12, or the vehicle-mounted image processing apparatus shown in FIG. 13, and the vehicle-mounted image processing apparatus is configured to perform steps 501 to 505 in the embodiment corresponding to FIG. 5.

An embodiment of the present application further provides a computer program product including computer program code; when the computer program code is executed by a computer, the computer implements the method performed by the vehicle-mounted image processing apparatus (or the vehicle-mounted surround view system) in the above method embodiments.

An embodiment of the present application further provides a computer-readable storage medium for storing a computer program or instructions which, when executed, cause a computer to perform the method performed by the vehicle-mounted image processing apparatus (or the vehicle-mounted surround view system) in the above method embodiments.

An embodiment of the present application provides a chip including a processor and a communication interface, where the processor is configured to read instructions to perform the method performed by the vehicle-mounted image processing apparatus (or the vehicle-mounted surround view system) in the above method embodiments.

If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions recorded in the foregoing embodiments or equivalently replace some of the technical features therein, and such modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (18)

1. A method for generating a panoramic image, comprising:
acquiring image information and depth information of objects around a vehicle, wherein the depth information is used to indicate coordinate point information of each point on the objects around the vehicle;
acquiring an initial model;
adjusting a first coordinate point on the initial model to a second coordinate point according to the depth information to generate a first model; and
acquiring a panoramic image based on the image information and the first model.

2. The method according to claim 1, wherein the adjusting the first coordinate point on the initial model to the second coordinate point according to the depth information to generate the first model comprises:
converting pixels in the image information into a first point cloud in a camera coordinate system according to the pixels in the image information and the depth information corresponding to the pixels;
converting the first point cloud in the camera coordinate system into a second point cloud in world coordinates; and
adjusting the first coordinate point to the second coordinate point by means of coordinate points in the second point cloud to generate the first model, wherein the second coordinate point is obtained according to the coordinate points in the second point cloud.

3. The method according to claim 2, wherein the image information comprises images collected by a plurality of image sensors, and after the converting the first point cloud in the camera coordinate system into the second point cloud in world coordinates, the method further comprises:
splicing multiple second point clouds, in the world coordinate system, of the multiple images collected by the plurality of image sensors to obtain a target point cloud;
wherein the adjusting the first coordinate point to the second coordinate point by means of the coordinate points in the second point cloud comprises:
determining a plurality of third coordinate points within a neighborhood of the first coordinate point, wherein the third coordinate points are coordinate points on the target point cloud;
determining the second coordinate point according to the plurality of third coordinate points; and
adjusting the first coordinate point to the second coordinate point.

4. The method according to claim 3, wherein the splicing the multiple second point clouds, in the world coordinate system, of the multiple images collected by the plurality of image sensors to obtain the target point cloud comprises:
matching an overlapping region of a first image and a second image collected by two adjacent image sensors to obtain a rotation matrix and a translation matrix for transforming between a point cloud of the first image and a point cloud of the second image; and
transforming the point cloud of the second image by using the rotation matrix and the translation matrix, and splicing the transformed point cloud of the second image with the point cloud of the first image.

5. The method according to any one of claims 1 to 4, wherein after the generating the first model, the method further comprises:
performing interpolation smoothing on the first model to obtain a second model;
wherein the acquiring the panoramic image based on the image information and the first model comprises:
performing texture mapping on the second model according to the image information to generate the panoramic image.

6. The method according to claim 1, wherein the objects around the vehicle comprise a first object and a second object; when a distance between the vehicle and the first object is a first distance and a distance between the vehicle and the second object is a second distance, a position of the vehicle is mapped to a first position in the panoramic image, a position of the first object is mapped to a second position in the panoramic image, and a position of the second object is mapped to a third position in the panoramic image; and when the first distance is greater than the second distance, the method further comprises:
displaying the panoramic image, wherein in the panoramic image, a distance between the first position and the second position is greater than a distance between the first position and the third position.
7. A vehicle-mounted surround view apparatus, comprising:
an acquisition module configured to acquire image information and depth information of objects around a vehicle, wherein the depth information is used to indicate coordinate point information of each point on the objects around the vehicle; and
a processing module configured to acquire an initial model; adjust a first coordinate point on the initial model to a second coordinate point according to the depth information to generate a first model; and acquire a panoramic image based on the image information and the first model.

8. The apparatus according to claim 7, wherein the processing module is further specifically configured to:
convert pixels in the image information into a first point cloud in a camera coordinate system according to the pixels in the image information and the depth information corresponding to the pixels;
convert the first point cloud in the camera coordinate system into a second point cloud in world coordinates; and
adjust the first coordinate point to the second coordinate point by means of coordinate points in the second point cloud to generate the first model, wherein the second coordinate point is obtained according to the coordinate points in the second point cloud.

9. The apparatus according to claim 8, wherein the image information comprises images collected by a plurality of image sensors, and the processing module is further specifically configured to:
splice multiple second point clouds, in the world coordinate system, of the multiple images collected by the plurality of image sensors to obtain a target point cloud;
determine a plurality of third coordinate points within a neighborhood of the first coordinate point, wherein the third coordinate points are coordinate points on the target point cloud;
determine the second coordinate point according to the plurality of third coordinate points; and
adjust the first coordinate point to the second coordinate point.

10. The apparatus according to claim 9, wherein the processing module is further specifically configured to:
match an overlapping region of a first image and a second image collected by two adjacent image sensors to obtain a rotation matrix and a translation matrix for transforming between a point cloud of the first image and a point cloud of the second image; and
transform the point cloud of the second image by using the rotation matrix and the translation matrix, and splice the transformed point cloud of the second image with the point cloud of the first image.

11. The apparatus according to any one of claims 7 to 10, wherein the processing module is further specifically configured to:
perform interpolation smoothing on the first model to obtain a second model; and
perform texture mapping on the second model according to the image information to generate a panoramic image.

12. The apparatus according to claim 7, wherein the objects around the vehicle comprise a first object and a second object; when a distance between the vehicle and the first object is a first distance and a distance between the vehicle and the second object is a second distance, a position of the vehicle is mapped to a first position in the panoramic image, a position of the first object is mapped to a second position in the panoramic image, and a position of the second object is mapped to a third position in the panoramic image; when the first distance is greater than the second distance, the apparatus further comprises a display module, and the display module is further configured to display the panoramic image, wherein in the panoramic image, a distance between the first position and the second position is greater than a distance between the first position and the third position.
  13. A vehicle-mounted image processing apparatus, comprising a processor coupled to a memory, wherein the memory is configured to store a program or instructions which, when executed by the processor, cause the vehicle-mounted image processing apparatus to perform the method according to any one of claims 1 to 6.
  14. A vehicle-mounted surround view system, comprising a sensor, the vehicle-mounted image processing apparatus according to claim 13, and a vehicle-mounted display, wherein the sensor and the vehicle-mounted display are both connected to the vehicle-mounted image processing apparatus, the sensor is configured to collect image information and depth information, and the vehicle-mounted display is configured to display a panoramic image.
  15. A vehicle, comprising the vehicle-mounted surround view system according to claim 14.
  16. A computer program product comprising computer program code which, when executed by a computer, causes the computer to implement the method according to any one of claims 1 to 6.
  17. A computer-readable storage medium for storing a computer program or instructions which, when executed, cause a computer to perform the method according to any one of claims 1 to 6.
  18. A chip, comprising a processor and a communication interface, wherein the processor is configured to read instructions to perform the method according to any one of claims 1 to 6.
PCT/CN2021/089132 2021-04-23 2021-04-23 Panoramic image generation method, vehicle-mounted image processing apparatus, and vehicle WO2022222121A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2021/089132 WO2022222121A1 (en) 2021-04-23 2021-04-23 Panoramic image generation method, vehicle-mounted image processing apparatus, and vehicle
CN202180001139.6A CN113302648B (en) 2021-04-23 2021-04-23 Panoramic image generation method, vehicle-mounted image processing device and vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/089132 WO2022222121A1 (en) 2021-04-23 2021-04-23 Panoramic image generation method, vehicle-mounted image processing apparatus, and vehicle

Publications (1)

Publication Number Publication Date
WO2022222121A1 (en)

Family

ID=77331312

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/089132 WO2022222121A1 (en) 2021-04-23 2021-04-23 Panoramic image generation method, vehicle-mounted image processing apparatus, and vehicle

Country Status (2)

Country Link
CN (1) CN113302648B (en)
WO (1) WO2022222121A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113793255A (en) * 2021-09-09 2021-12-14 百度在线网络技术(北京)有限公司 Method, apparatus, device, storage medium and program product for image processing
CN114549321A (en) * 2022-02-25 2022-05-27 小米汽车科技有限公司 Image processing method and apparatus, vehicle, and readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103863192B (en) * 2014-04-03 2017-04-12 深圳市德赛微电子技术有限公司 Method and system for vehicle-mounted panoramic imaging assistance

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170061689A1 (en) * 2015-08-24 2017-03-02 Caterpillar Inc. System for improving operator visibility of machine surroundings
CN105243637A (en) * 2015-09-21 2016-01-13 武汉海达数云技术有限公司 Panorama image stitching method based on three-dimensional laser point cloud
US20190050959A1 (en) * 2017-08-11 2019-02-14 Caterpillar Inc. Machine surround view system and method for generating 3-dimensional composite surround view using same
CN112347825A (en) * 2019-08-09 2021-02-09 杭州海康威视数字技术股份有限公司 Method and system for adjusting vehicle body all-round model
CN111559314A (en) * 2020-04-27 2020-08-21 长沙立中汽车设计开发股份有限公司 Depth and image information fused 3D enhanced panoramic looking-around system and implementation method
CN112101092A (en) * 2020-07-31 2020-12-18 北京智行者科技有限公司 Automatic driving environment sensing method and system
CN111968184A (en) * 2020-08-24 2020-11-20 北京茵沃汽车科技有限公司 Method, device and medium for realizing view follow-up in panoramic looking-around system

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115578502A (en) * 2022-11-18 2023-01-06 杭州枕石智能科技有限公司 Image generation method and device, electronic equipment and storage medium
CN115830161A (en) * 2022-11-21 2023-03-21 北京城市网邻信息技术有限公司 Method, device and equipment for generating house type graph and storage medium
CN115830161B (en) * 2022-11-21 2023-10-31 北京城市网邻信息技术有限公司 House type diagram generation method, device, equipment and storage medium
CN115904294A (en) * 2023-01-09 2023-04-04 山东矩阵软件工程股份有限公司 Environment visualization method, system, storage medium and electronic device
CN115994952A (en) * 2023-02-01 2023-04-21 镁佳(北京)科技有限公司 Calibration method and device for panoramic image system, computer equipment and storage medium
CN115994952B (en) * 2023-02-01 2024-01-30 镁佳(北京)科技有限公司 Calibration method and device for panoramic image system, computer equipment and storage medium
CN116596741A (en) * 2023-04-10 2023-08-15 北京城市网邻信息技术有限公司 Point cloud display diagram generation method and device, electronic equipment and storage medium
CN116596741B (en) * 2023-04-10 2024-05-07 北京城市网邻信息技术有限公司 Point cloud display diagram generation method and device, electronic equipment and storage medium
CN116704129B (en) * 2023-06-14 2024-01-30 维坤智能科技(上海)有限公司 Panoramic view-based three-dimensional image generation method, device, equipment and storage medium
CN116704129A (en) * 2023-06-14 2023-09-05 维坤智能科技(上海)有限公司 Panoramic view-based three-dimensional image generation method, device, equipment and storage medium
CN116962649A (en) * 2023-09-19 2023-10-27 安徽送变电工程有限公司 Image monitoring and adjusting system and line construction model
CN116962649B (en) * 2023-09-19 2024-01-09 安徽送变电工程有限公司 Image monitoring and adjusting system and line construction model
CN117406185A (en) * 2023-12-14 2024-01-16 深圳市其域创新科技有限公司 External parameter calibration method, device and equipment between radar and camera and storage medium
CN117406185B (en) * 2023-12-14 2024-02-23 深圳市其域创新科技有限公司 External parameter calibration method, device and equipment between radar and camera and storage medium

Also Published As

Publication number Publication date
CN113302648A (en) 2021-08-24
CN113302648B (en) 2022-09-16

Similar Documents

Publication Publication Date Title
WO2022222121A1 (en) Panoramic image generation method, vehicle-mounted image processing apparatus, and vehicle
CN111062873B (en) Parallax image splicing and visualization method based on multiple pairs of binocular cameras
CN111223038B (en) Automatic splicing method of vehicle-mounted looking-around images and display device
KR101265667B1 (en) Device for 3d image composition for visualizing image of vehicle around and method therefor
CN111862179B (en) Three-dimensional object modeling method and apparatus, image processing device, and medium
US8817079B2 (en) Image processing apparatus and computer-readable recording medium
TWI555379B (en) An image calibrating, composing and depth rebuilding method of a panoramic fish-eye camera and a system thereof
CN107113376B (en) A kind of image processing method, device and video camera
US10176595B2 (en) Image processing apparatus having automatic compensation function for image obtained from camera, and method thereof
CN111028155B (en) Parallax image splicing method based on multiple pairs of binocular cameras
JP6522630B2 (en) Method and apparatus for displaying the periphery of a vehicle, and driver assistant system
US11303807B2 (en) Using real time ray tracing for lens remapping
SG189284A1 (en) Rapid 3d modeling
JP2011215063A (en) Camera attitude parameter estimation device
CN109769110B (en) Method and device for generating 3D asteroid dynamic graph and portable terminal
JP2023505891A (en) Methods for measuring environmental topography
WO2018032841A1 (en) Method, device and system for drawing three-dimensional image
CN112837207A (en) Panoramic depth measuring method, four-eye fisheye camera and binocular fisheye camera
JP5361758B2 (en) Image generation method, image generation apparatus, and program
KR20170019793A (en) Apparatus and method for providing around view
Lin et al. A low-cost portable polycamera for stereoscopic 360 imaging
JP2002092597A (en) Method and device for processing image
JPWO2020022373A1 (en) Driving support device and driving support method, program
US20230038125A1 (en) Method And Apparatus for Image Registration
Kim Generation of stereo images from the heterogeneous cameras

Legal Events

Date Code Title Description
121  EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 21937359; Country of ref document: EP; Kind code of ref document: A1)
NENP  Non-entry into the national phase (Ref country code: DE)
122  EP: PCT application non-entry in the European phase (Ref document number: 21937359; Country of ref document: EP; Kind code of ref document: A1)