CN115170630A - Map generation method, map generation device, electronic device, vehicle, and storage medium - Google Patents

Map generation method, map generation device, electronic device, vehicle, and storage medium Download PDF

Info

Publication number
CN115170630A
Authority
CN
China
Prior art keywords
point cloud
frame
target
obstacle
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210778725.8A
Other languages
Chinese (zh)
Other versions
CN115170630B (en)
Inventor
袁鹏飞 (Yuan Pengfei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiaomi Automobile Technology Co Ltd
Original Assignee
Xiaomi Automobile Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaomi Automobile Technology Co Ltd filed Critical Xiaomi Automobile Technology Co Ltd
Priority to CN202210778725.8A
Publication of CN115170630A
Application granted
Publication of CN115170630B
Legal status: Active (current)
Anticipated expiration: not listed

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 7/00 Image analysis
            • G06T 7/50 Depth or shape recovery
              • G06T 7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
          • G06T 3/00 Geometric image transformations in the plane of the image
            • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
              • G06T 3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
          • G06T 5/00 Image enhancement or restoration
            • G06T 5/70 Denoising; Smoothing
          • G06T 2200/00 Indexing scheme for image data processing or generation, in general
            • G06T 2200/04 involving 3D image data
            • G06T 2200/32 involving image mosaicing
          • G06T 2207/00 Indexing scheme for image analysis or image enhancement
            • G06T 2207/10 Image acquisition modality
              • G06T 2207/10028 Range image; Depth image; 3D point clouds
            • G06T 2207/20 Special algorithmic details
              • G06T 2207/20081 Training; Learning
            • G06T 2207/30 Subject of image; Context of image processing
              • G06T 2207/30248 Vehicle exterior or interior
                • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle
        • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V 10/00 Arrangements for image or video recognition or understanding
            • G06V 10/40 Extraction of image or video features
              • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
            • G06V 10/70 Arrangements using pattern recognition or machine learning
              • G06V 10/762 using clustering, e.g. of similar faces in social networks
          • G06V 20/00 Scenes; Scene-specific elements
            • G06V 20/50 Context or environment of the image
              • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Optics & Photonics (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure relates to a map generation method, a map generation device, an electronic device, a vehicle, and a storage medium in the technical field of automatic driving. The method comprises: acquiring multi-frame image information and multi-frame point cloud information of the environment surrounding a vehicle; determining the obstacle area corresponding to a moving obstacle in each environment image; for each frame of point cloud information, determining a target obstacle point cloud set corresponding to that frame according to the obstacle area and the point cloud data; determining a target point cloud set corresponding to each frame of point cloud information according to the target obstacle point cloud set and the point cloud data; and generating a point cloud map according to the target point cloud sets. Because the target obstacle point cloud set corresponding to the moving obstacle is determined from the obstacle area detected in the environment image, and the point cloud map is generated from the target point cloud sets obtained by removing the target obstacle point cloud sets from the point cloud data, the map contains no moving obstacles, noise in the point cloud map is reduced, and the accuracy of the point cloud map is ensured.

Description

Map generation method, map generation device, electronic device, vehicle and storage medium
Technical Field
The present disclosure relates to the field of automatic driving technologies, and in particular, to a map generation method and apparatus, an electronic device, a vehicle, and a storage medium.
Background
With the rapid development of computer technology, high-precision positioning is applied ever more widely; for example, it plays a significant role in automatic driving. At present, pose information with higher precision is mainly provided through SLAM (Simultaneous Localization and Mapping): a point cloud map is generated for subsequent laser positioning, and a vector map derived from the point cloud map is further used for visual positioning. However, in urban, indoor, or campus environments there are many moving obstacles, and these obstacles and their motion trails remain in the point cloud map as noise, which degrades the accuracy of subsequent positioning. How to obtain a high-precision point cloud map free of moving obstacles is therefore an important problem that needs to be solved urgently.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a map generation method, apparatus, electronic device, vehicle, and storage medium.
According to a first aspect of the embodiments of the present disclosure, there is provided a map generation method, the method including:
acquiring multi-frame image information and multi-frame point cloud information of the surrounding environment of the vehicle; the image information comprises environment images corresponding to a plurality of acquisition areas, and the point cloud information comprises point cloud data corresponding to the acquisition areas; the image information corresponds to the point cloud information one by one;
determining a corresponding obstacle area of a moving obstacle in the vehicle surroundings in each of the environmental images;
for each frame of point cloud information, determining a target obstacle point cloud set corresponding to the frame of point cloud information according to the obstacle area and the point cloud data;
determining a target point cloud set corresponding to each frame of point cloud information according to the target obstacle point cloud set and the point cloud data;
and generating a point cloud map according to the target point cloud set.
Optionally, the determining a corresponding obstacle area of a moving obstacle in the vehicle surroundings in each of the environment images includes:
for each frame of image information, stitching all the environment images included in the frame of image information to obtain a stitched environment image corresponding to the frame of image information, and determining, according to the stitched environment image and through a pre-trained obstacle detection model, the obstacle area corresponding to the moving obstacle in each environment image included in the frame of image information.
Optionally, the determining, according to the obstacle region and the point cloud data, a target obstacle point cloud set corresponding to the frame of point cloud information includes:
for each acquisition area, projecting each point to be selected in the point cloud data corresponding to the acquisition area, which is included in the frame of point cloud information, into a target environment image corresponding to the acquisition area to obtain a projection point corresponding to the point to be selected, and taking the point to be selected corresponding to a projection point matched with the obstacle area as an obstacle point to be selected, so as to obtain an obstacle point cloud set to be selected corresponding to the acquisition area; the target environment image is the environment image, corresponding to the acquisition area, that is included in the image information corresponding to the frame of point cloud information;
and for each acquisition area, clustering the obstacle points to be selected in the obstacle point cloud set to be selected corresponding to the acquisition area to obtain at least one cluster point cloud set, and taking the largest cluster point cloud set among the at least one cluster point cloud set as the target obstacle point cloud set.
Optionally, the determining a target point cloud set corresponding to each frame of point cloud information according to the target obstacle point cloud set and the point cloud data includes:
and removing the target obstacle point cloud set corresponding to the frame of point cloud information from the point cloud data included in the frame of point cloud information to obtain the target point cloud set corresponding to the frame of point cloud information.
Optionally, the generating a point cloud map according to the target point cloud set includes:
determining the target odometer pose of the vehicle corresponding to each frame of point cloud information according to the target point cloud set corresponding to each frame of point cloud information;
and aiming at each frame of point cloud information, splicing the target odometer pose corresponding to the frame of point cloud information and the target point cloud set corresponding to the frame of point cloud information to obtain the point cloud map.
Optionally, the determining, according to the target point cloud set corresponding to each frame of point cloud information, the target odometer pose of the vehicle corresponding to the frame of point cloud information includes:
extracting feature points from the target points in each target point cloud set to obtain target feature points corresponding to the target points;
determining the to-be-selected odometer pose of the vehicle corresponding to each frame of point cloud information according to the target feature points;
and optimizing the pose of the to-be-selected odometer by using a preset optimization algorithm to obtain the pose of the target odometer.
According to a second aspect of the embodiments of the present disclosure, there is provided a map generating apparatus, the apparatus including:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is configured to acquire multi-frame image information and multi-frame point cloud information of the surrounding environment of the vehicle; the image information comprises environment images corresponding to a plurality of acquisition areas, and the point cloud information comprises point cloud data corresponding to the acquisition areas; the image information corresponds to the point cloud information one by one;
a determination module configured to determine a corresponding obstacle area in each of the environmental images for a moving obstacle in the vehicle surroundings;
the determining module is further configured to determine, for each frame of the point cloud information, a target obstacle point cloud set corresponding to the frame of the point cloud information according to the obstacle area and the point cloud data;
the determining module is further configured to determine a target point cloud set corresponding to each frame of point cloud information according to the target obstacle point cloud set and the point cloud data;
and the generating module is configured to generate a point cloud map according to the target point cloud set.
Optionally, the determining module is configured to:
for each frame of image information, stitching all the environment images included in the frame of image information to obtain a stitched environment image corresponding to the frame of image information, and determining, according to the stitched environment image and through a pre-trained obstacle detection model, the obstacle area corresponding to the moving obstacle in each environment image included in the frame of image information.
Optionally, the determining module includes:
the first determining submodule is configured to, for each acquisition area, project each point to be selected in the point cloud data corresponding to the acquisition area, which is included in the frame of point cloud information, into a target environment image corresponding to the acquisition area to obtain a projection point corresponding to the point to be selected, and take the point to be selected corresponding to a projection point matched with the obstacle area as an obstacle point to be selected, so as to obtain an obstacle point cloud set to be selected corresponding to the acquisition area; the target environment image is the environment image, corresponding to the acquisition area, that is included in the image information corresponding to the frame of point cloud information;
and the second determining submodule is configured to, for each acquisition area, cluster the obstacle points to be selected in the obstacle point cloud set to be selected corresponding to the acquisition area to obtain at least one cluster point cloud set, and take the largest cluster point cloud set among the at least one cluster point cloud set as the target obstacle point cloud set.
Optionally, the determining module is configured to:
and removing the target obstacle point cloud set corresponding to the frame of point cloud information from the point cloud data included in the frame of point cloud information to obtain the target point cloud set corresponding to the frame of point cloud information.
Optionally, the generating module includes:
the third determining sub-module is configured to determine a target odometer pose of the vehicle corresponding to each frame of point cloud information according to the target point cloud set corresponding to each frame of point cloud information;
and the splicing sub-module is configured to splice the target odometer pose corresponding to each frame of point cloud information and the target point cloud set corresponding to the frame of point cloud information to obtain the point cloud map.
Optionally, the third determining sub-module is configured to:
extracting feature points of target points in each target point cloud set to obtain target feature points corresponding to the target points;
determining the to-be-selected odometer pose of the vehicle corresponding to each frame of point cloud information according to the target feature points;
and optimizing the pose of the to-be-selected odometer by using a preset optimization algorithm to obtain the pose of the target odometer.
According to a third aspect of an embodiment of the present disclosure, there is provided an electronic apparatus including:
a first processor;
a first memory for storing first processor-executable instructions;
wherein the first processor is configured to perform the steps of the map generation method provided by the first aspect of the present disclosure.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a vehicle including:
a second processor;
a second memory for storing second processor-executable instructions;
wherein the second processor is configured to perform the steps of the map generation method provided by the first aspect of the present disclosure.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the map generation method provided by the first aspect of the present disclosure.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
the method comprises the steps of firstly obtaining multi-frame image information and multi-frame point cloud information of the surrounding environment of a vehicle, wherein the image information comprises environment images corresponding to a plurality of collecting areas, the point cloud information comprises point cloud data corresponding to the collecting areas, the image information corresponds to the point cloud information one by one, then determining an obstacle area corresponding to a moving obstacle in the surrounding environment of the vehicle in each environment image, then determining a target obstacle point cloud set corresponding to the point cloud information according to the obstacle area and the point cloud data aiming at each frame of point cloud information, determining a target point cloud set corresponding to each frame of point cloud information according to the target obstacle point cloud set and the point cloud data, and finally generating a point cloud map according to the target point cloud set. According to the method, the target obstacle point cloud set corresponding to the moving obstacle is determined through the obstacle area corresponding to the moving obstacle in the environment image, and the point cloud map without the moving obstacle is generated according to the target point cloud set obtained by removing the target obstacle point cloud set from the point cloud data, so that the noise in the point cloud map can be reduced, and the precision of the point cloud map can be ensured.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flowchart illustrating a map generation method in accordance with an exemplary embodiment.
Fig. 2 is a flow chart illustrating one step 103 according to the embodiment shown in fig. 1.
Fig. 3 is a flow chart illustrating one step 105 according to the embodiment shown in fig. 1.
FIG. 4 is a block diagram illustrating a map generation apparatus in accordance with an exemplary embodiment.
FIG. 5 is a block diagram illustrating a determination module according to the embodiment shown in FIG. 4.
FIG. 6 is a block diagram of one generation module shown in accordance with the embodiment shown in FIG. 4.
FIG. 7 is a block diagram illustrating an electronic device in accordance with an example embodiment.
FIG. 8 is a functional block diagram schematic of a vehicle shown in an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the disclosure, as detailed in the appended claims.
It should be noted that all actions of acquiring signals, information or data in the present application are performed under the premise of complying with the corresponding data protection regulation policy of the country of the location and obtaining the authorization given by the owner of the corresponding device.
Before describing the map generation method, apparatus, electronic device, vehicle, and storage medium provided by the present disclosure, an application scenario related to the various embodiments of the present disclosure is first described. The application scenario may include a vehicle provided with a plurality of image acquisition devices and a laser radar. Each image acquisition device corresponds to one acquisition area in the vehicle surroundings, namely the region of the surroundings that falls within that device's field of view (that is, the field of view of each image acquisition device corresponds to one acquisition area), and each image acquisition device is configured to acquire image information of its own acquisition area. The laser radar emits laser beams at a certain acquisition frequency to obtain point cloud information of the vehicle surroundings. The visible range of the laser radar, which covers 360° in total, is divided into a plurality of field-of-view ranges, and each field-of-view range of the laser radar corresponds to the field of view of one image acquisition device, thereby establishing a correspondence between the acquisition areas and the field-of-view ranges of the laser radar. The image acquisition device may be any device with an image acquisition function, such as a panoramic camera, an ordinary camera, or an image sensor. For example, four panoramic cameras may be used, acquiring the acquisition areas in front of the vehicle, on the left side of the vehicle, on the right side of the vehicle, and behind the vehicle, respectively. The laser radar may be a multi-line laser radar, and the vehicle may be an automobile, which is not limited to a conventional automobile, a pure electric automobile, or a hybrid automobile; the solution is also applicable to other types of motor vehicles or non-motor vehicles.
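As a rough illustration of the correspondence just described, the sketch below assigns a laser radar point to an acquisition area by its horizontal angle, assuming four panoramic cameras facing front, left, rear, and right with 90° sectors; the actual partition depends on the real camera layout and is not specified by the patent.

```python
import math

def acquisition_region(point):
    """Assign a lidar point (x, y, z) to an acquisition area by its azimuth,
    assuming four 90-degree sectors for front, left, rear, and right cameras."""
    angle = math.degrees(math.atan2(point[1], point[0])) % 360.0
    if angle < 45.0 or angle >= 315.0:
        return "front"
    if angle < 135.0:
        return "left"
    if angle < 225.0:
        return "rear"
    return "right"
```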
FIG. 1 is a flow chart illustrating a map generation method in accordance with an exemplary embodiment. As shown in fig. 1, the method may include the steps of:
in step 101, multi-frame image information and multi-frame point cloud information of the vehicle surroundings are acquired. The image information comprises environment images corresponding to a plurality of acquisition areas, the point cloud information comprises point cloud data corresponding to the plurality of acquisition areas, and the image information and the point cloud information correspond to each other one by one.
For example, a high-precision point cloud map without moving obstacles can be generated by detecting moving obstacles in image information of the surrounding environment of the vehicle, mapping the detection result to point cloud information of the surrounding environment of the vehicle, removing the moving obstacles from the point cloud information, and further utilizing the point cloud information after removing the moving obstacles. Specifically, each image acquisition device may periodically acquire an environmental image corresponding to an acquisition area corresponding to the image acquisition device according to a specified period within a preset time range, where the environmental images acquired by all the image acquisition devices in the same period are a frame of image information. Meanwhile, point cloud data corresponding to each acquisition area can be periodically acquired by the laser radar according to different visual field ranges of the laser radar within a preset time range and according to a specified period, and all the point cloud data acquired by the laser radar in each period are one frame of point cloud information. The image information and the point cloud information are in one-to-one correspondence (i.e., the image information and the point cloud information are associated), and each frame of image information is aligned with the point cloud information corresponding to the frame of image information in terms of time.
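The one-to-one correspondence between image frames and point cloud frames amounts to pairing each point cloud frame with the image frame closest to it in time. The following is a minimal sketch of such an association; the timestamp arrays and the tolerance max_dt are illustrative assumptions, not values given in the patent.

```python
def associate_frames(image_stamps, cloud_stamps, max_dt=0.05):
    """Pair every point cloud frame with the image frame nearest in time.

    image_stamps, cloud_stamps : per-frame acquisition times in seconds
    max_dt                     : maximum allowed time offset (assumed value)
    Returns a list of (image_index, cloud_index) pairs.
    """
    pairs = []
    if not image_stamps:
        return pairs
    for ci, ct in enumerate(cloud_stamps):
        ii = min(range(len(image_stamps)), key=lambda k: abs(image_stamps[k] - ct))
        if abs(image_stamps[ii] - ct) <= max_dt:
            pairs.append((ii, ci))
    return pairs
```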
Further, in order to associate the image information with the point cloud information, each image acquisition device needs to be jointly calibrated with the laser radar before the multi-frame image information and the multi-frame point cloud information are acquired. The joint calibration comprises calibrating the intrinsic parameters of each image acquisition device and calibrating the extrinsic parameters between each image acquisition device and the laser radar. Through intrinsic calibration, the projection coefficients and the distortion coefficients $K_i$ of each image acquisition device can be obtained, together with the transformation $T_{\mathrm{cam\to pix}}$ between the camera coordinate system and the pixel coordinate system. Through extrinsic calibration, the transformation $T_{\mathrm{lidar\to cam}}$ between the laser radar coordinate system and the camera coordinate system can be obtained.
In addition, when the vehicle is in a motion state, the point cloud data acquired by the laser radar has motion distortion, and in order to ensure the accuracy of the point cloud information, the acquired point cloud information can be subjected to motion compensation.
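The motion compensation mentioned above can be sketched as correcting each point of a sweep with the pose interpolated at the instant the point was measured. This is only a sketch under assumptions the patent does not spell out: per-point timestamps normalized to [0, 1] within the sweep and known laser radar poses at the sweep boundaries.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def deskew_scan(points, point_times, pose_start, pose_end):
    """Compensate the motion distortion of one lidar sweep.

    points      : (N, 3) points in the lidar frame, as measured
    point_times : (N,) per-point timestamps normalized to [0, 1] within the sweep
    pose_start  : (R0, t0) lidar pose at the start of the sweep (world frame)
    pose_end    : (R1, t1) lidar pose at the end of the sweep (world frame)
    Returns the points re-expressed in the lidar frame at the end of the sweep.
    """
    R0, t0 = pose_start
    R1, t1 = pose_end
    slerp = Slerp([0.0, 1.0], Rotation.from_matrix(np.stack([R0, R1])))

    deskewed = np.empty_like(points)
    for i, (p, s) in enumerate(zip(points, point_times)):
        R_s = slerp([s]).as_matrix()[0]          # rotation at the instant the point was measured
        t_s = (1.0 - s) * t0 + s * t1            # linearly interpolated translation
        p_world = R_s @ p + t_s                  # point in the world frame
        deskewed[i] = R1.T @ (p_world - t1)      # back into the sweep-end lidar frame
    return deskewed
```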
In step 102, the corresponding obstacle area of the moving obstacle in the vehicle surroundings in each environment image is determined.
In step 103, for each frame of point cloud information, a target obstacle point cloud set corresponding to the frame of point cloud information is determined according to the obstacle area and the point cloud data.
Specifically, after obtaining multiple frames of image information and multiple frames of point cloud information, the target detection may be performed on the environment image included in each frame of image information through a target detection algorithm, so as to obtain a corresponding obstacle region of the moving obstacle in each environment image included in the frame of image information. The moving obstacle may be, for example, a pedestrian, an animal, a motor vehicle, a non-motor vehicle, or the like. Then, for each frame of point cloud information, the obstacle area corresponding to each environmental image included in the image information corresponding to the frame of point cloud information may be mapped to the point cloud data included in the frame of point cloud information, so as to obtain a target obstacle point cloud set corresponding to the frame of point cloud information.
In step 104, a target point cloud set corresponding to each frame of point cloud information is determined according to the target obstacle point cloud set and the point cloud data.
In step 105, a point cloud map is generated according to the target point cloud set.
For example, for each frame of point cloud information, the target obstacle point cloud set corresponding to that frame is removed from the point cloud data included in the frame, so as to obtain the target point cloud set corresponding to the frame of point cloud information. Then, feature point extraction can be performed on the target points in the target point cloud set corresponding to each frame of point cloud information to obtain the target feature points corresponding to that frame, and the target odometer pose of the vehicle corresponding to each frame of point cloud information is determined according to those target feature points. Finally, a three-dimensional, high-precision point cloud map is generated from the target point cloud sets using a SLAM algorithm; it contains neither moving obstacles nor the trails they leave behind and is therefore free of such noise points.
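A minimal sketch of the removal step described above: the points belonging to the target obstacle point cloud sets are dropped from the frame, and the remainder is the target point cloud set used for mapping. Tracking the indices of the selected points is an implementation choice assumed here, not something stated in the patent.

```python
import numpy as np

def remove_obstacle_points(frame_points, obstacle_indices):
    """Remove the target obstacle point cloud sets from one frame of point
    cloud data, yielding the target point cloud set used for mapping.

    frame_points     : (N, 3) array with all points of the frame
    obstacle_indices : indices into frame_points of the target obstacle sets
    """
    keep = np.ones(len(frame_points), dtype=bool)
    if len(obstacle_indices):
        keep[np.asarray(obstacle_indices, dtype=int)] = False
    return frame_points[keep]
```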
It should be noted that, considering the real-time requirements of the target detection algorithm and the SLAM algorithm, two CPUs (Central Processing Units) may be employed to ensure that the whole pipeline runs in real time. For example, the target detection algorithm may run on CPU1, the detected obstacle areas may be sent to CPU2 in real time over a TCP/IP (Transmission Control Protocol/Internet Protocol) connection, and the removal of the target obstacle point cloud sets and the SLAM algorithm may run on CPU2.
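A minimal sketch of how the detection process (CPU1) could hand the detected obstacle areas to the mapping process (CPU2) over TCP/IP, as described above. The host, port, and the length-prefixed JSON framing are illustrative assumptions rather than details from the patent.

```python
import json
import socket

def send_obstacle_regions(regions, host="127.0.0.1", port=9000):
    """Send detected obstacle areas from the detection process (CPU1)
    to the mapping process (CPU2) as a length-prefixed JSON message."""
    payload = json.dumps(regions).encode("utf-8")
    with socket.create_connection((host, port)) as sock:
        sock.sendall(len(payload).to_bytes(4, "big") + payload)

def recv_obstacle_regions(conn):
    """Read one length-prefixed JSON message from an accepted connection."""
    length = int.from_bytes(conn.recv(4), "big")
    data = b""
    while len(data) < length:
        data += conn.recv(length - len(data))
    return json.loads(data.decode("utf-8"))
```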
In summary, the present disclosure first acquires multi-frame image information and multi-frame point cloud information of the vehicle surroundings, where the image information includes environment images corresponding to a plurality of acquisition areas, the point cloud information includes point cloud data corresponding to those acquisition areas, and the image information corresponds to the point cloud information one to one. The obstacle area corresponding to a moving obstacle in the vehicle surroundings is then determined in each environment image. For each frame of point cloud information, a target obstacle point cloud set corresponding to that frame is determined according to the obstacle area and the point cloud data, and a target point cloud set corresponding to each frame of point cloud information is determined according to the target obstacle point cloud set and the point cloud data. Finally, a point cloud map is generated from the target point cloud sets. Because the target obstacle point cloud set corresponding to the moving obstacle is determined from the obstacle area detected in the environment image, and the point cloud map is generated from the target point cloud sets obtained by removing the target obstacle point cloud sets from the point cloud data, the resulting map contains no moving obstacles, noise in the point cloud map is reduced, and the accuracy of the point cloud map is ensured.
Alternatively, step 102 may be implemented by:
and according to the spliced environment image, determining a corresponding obstacle area of the moving obstacle in each environment image included by the frame of image information through a pre-trained obstacle detection model.
For example, after the multi-frame image information and the multi-frame point cloud information are obtained, all the environment images included in each frame of image information may be stitched to obtain a stitched environment image corresponding to that frame. Furthermore, to ensure that the environment images describe the environment accurately, each environment image may first be undistorted using the calibrated distortion coefficients $K_i$ of the corresponding image acquisition device before stitching. Then, for each frame of image information, the stitched environment image corresponding to the frame may be fed to the obstacle detection model, which outputs the obstacle area corresponding to the moving obstacle in each environment image included in that frame of image information.
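A sketch of the undistortion, stitching, and detection step under simplifying assumptions: images are undistorted with the calibrated coefficients, concatenated horizontally into one stitched image, and passed to a 2D detector. The detector callable stands in for the pre-trained obstacle detection model, and the assumption that each detected box lies within a single camera's strip is a simplification.

```python
import cv2
import numpy as np

def detect_obstacle_regions(frame_images, intrinsics, dist_coeffs, detector):
    """Undistort each camera image, stitch them side by side, run a 2D obstacle
    detector, and map the boxes back to per-camera pixel coordinates.

    frame_images : list of HxWx3 images, one per acquisition area (same size assumed)
    intrinsics   : list of 3x3 camera matrices from intrinsic calibration
    dist_coeffs  : list of distortion coefficient vectors K_i
    detector     : callable returning [(x1, y1, x2, y2, class_id), ...] for an image
    """
    undistorted = [cv2.undistort(img, K, d)
                   for img, K, d in zip(frame_images, intrinsics, dist_coeffs)]
    stitched = np.hstack(undistorted)          # simple horizontal concatenation
    width = undistorted[0].shape[1]

    regions = {i: [] for i in range(len(undistorted))}
    for (x1, y1, x2, y2, cls) in detector(stitched):
        cam = int(x1) // width                 # which camera strip the box falls in
        regions[cam].append((x1 - cam * width, y1, x2 - cam * width, y2, cls))
    return regions
```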
Fig. 2 is a flow chart illustrating one step 103 according to the embodiment shown in fig. 1. As shown in fig. 2, step 103 may include the steps of:
in step 1031, for each acquisition region, projecting each point to be selected in the point cloud data corresponding to the acquisition region included in the frame of point cloud information to a target environment image corresponding to the acquisition region to obtain a projection point corresponding to the to be selected, and taking the point to be selected corresponding to the projection point matched with the obstacle region as an obstacle point to be selected to obtain an obstacle point cloud set corresponding to the acquisition region. And the target environment image is an environment image corresponding to the acquisition area and included in the image information corresponding to the frame point cloud information.
For example, for each acquisition area, the calibrated transformation $T_{\mathrm{lidar\to cam}}$ between the laser radar coordinate system and the camera coordinate system can be used to transfer each point to be selected in the point cloud data corresponding to the acquisition area, which is included in the frame of point cloud information, into the camera coordinate system, and the calibrated transformation $T_{\mathrm{cam\to pix}}$ between the camera coordinate system and the pixel coordinate system can then be used to project the points to be selected from the camera coordinate system into the pixel coordinate system, so that each point to be selected corresponding to the acquisition area is projected into the target environment image corresponding to the acquisition area and a projection point corresponding to each point to be selected is obtained. Then, the coordinates of each projection point in the target environment image can be compared with the obstacle area, and the point to be selected corresponding to a projection point located inside the obstacle area (i.e., a projection point matched with the obstacle area) is taken as an obstacle point to be selected, so that the obstacle point cloud set to be selected corresponding to the acquisition area is obtained.
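Under the calibration described above, the projection and matching of the points to be selected can be sketched as follows. The 4x4 homogeneous extrinsic matrix, the pinhole intrinsic matrix, and axis-aligned obstacle boxes are assumptions made for illustration.

```python
import numpy as np

def candidate_obstacle_points(points_lidar, T_lidar_to_cam, K, obstacle_boxes):
    """Project points to be selected into the target environment image and keep
    those whose projections fall inside a detected obstacle area.

    points_lidar   : (N, 3) points to be selected of one acquisition area (lidar frame)
    T_lidar_to_cam : 4x4 extrinsic transform from lidar to camera coordinates
    K              : 3x3 intrinsic matrix from camera to pixel coordinates
    obstacle_boxes : list of (x1, y1, x2, y2) obstacle areas in that image
    Returns the obstacle points to be selected and their indices in points_lidar.
    """
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_lidar_to_cam @ pts_h.T).T[:, :3]

    in_front = pts_cam[:, 2] > 0.0               # keep only points in front of the camera
    uvw = (K @ pts_cam[in_front].T).T
    uv = uvw[:, :2] / uvw[:, 2:3]                # pixel coordinates of the projections

    mask = np.zeros(len(uv), dtype=bool)
    for x1, y1, x2, y2 in obstacle_boxes:
        mask |= ((uv[:, 0] >= x1) & (uv[:, 0] <= x2) &
                 (uv[:, 1] >= y1) & (uv[:, 1] <= y2))
    idx = np.flatnonzero(in_front)[mask]         # indices into the original frame data
    return points_lidar[idx], idx
```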
In step 1032, for each acquisition area, clustering the obstacle points to be selected in the obstacle point cloud set to be selected corresponding to the acquisition area to obtain at least one cluster point cloud set, and using the largest cluster point cloud set in the at least one cluster point cloud set as the target obstacle point cloud set.
For example, the obstacle area determined in step 102 is a quadrilateral area, while in practice most moving obstacles are irregular in shape. That is, the determined obstacle area may contain other objects besides the moving obstacle, but the moving obstacle occupies the major portion of the obstacle area. Therefore, for each acquisition area, the obstacle points to be selected in the obstacle point cloud set to be selected corresponding to the acquisition area may be clustered (for example, by Euclidean-distance clustering or a K-means clustering algorithm) to obtain at least one cluster point cloud set, and the largest cluster point cloud set among the at least one cluster point cloud set is taken as the target obstacle point cloud set.
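A sketch of the clustering step. The patent mentions Euclidean-distance or K-means clustering; the sketch below uses DBSCAN from scikit-learn as a stand-in Euclidean clustering method, with eps and min_samples as assumed parameters.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def largest_cluster(candidate_points, eps=0.5, min_samples=5):
    """Cluster the obstacle points to be selected and keep the largest cluster
    as the target obstacle point cloud set.

    candidate_points : (N, 3) obstacle points to be selected for one acquisition area
    eps, min_samples : clustering radius and minimum cluster size (assumed values)
    """
    if len(candidate_points) == 0:
        return candidate_points
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(candidate_points)
    valid = labels >= 0                          # label -1 marks noise points
    if not valid.any():
        return candidate_points[:0]
    counts = np.bincount(labels[valid])
    return candidate_points[labels == counts.argmax()]
```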
Fig. 3 is a flow chart illustrating one step 105 according to the embodiment shown in fig. 1. As shown in fig. 3, step 105 may include the steps of:
in step 1051, the target odometer pose of the vehicle corresponding to each frame of point cloud information is determined according to the target point cloud set corresponding to each frame of point cloud information.
In step 1052, for each frame of point cloud information, the target odometer pose corresponding to the frame of point cloud information and the target point cloud set corresponding to the frame of point cloud information are spliced to obtain a point cloud map.
For example, feature point extraction may first be performed on the target points in each target point cloud set to obtain the target feature points corresponding to those target points. The target feature points may include ground points, column points, planar object points, and the like. Next, the to-be-selected odometer pose of the vehicle corresponding to each frame of point cloud information is determined according to the target feature points. The to-be-selected odometer pose may be determined as follows: different weights are assigned to the various kinds of target feature points depending on the environment (for example, when the vehicle travels on a flat surface, the weight of ground points can be set higher and the weights of column points and planar object points set lower); inter-frame matching is performed according to the target feature points of every two adjacent frames of point cloud information and their weights, yielding the pose change between every two adjacent frames; and the to-be-selected odometer pose corresponding to each frame of point cloud information is determined from these pose changes.
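Once the relative pose between every two adjacent frames has been obtained by the weighted inter-frame matching, the to-be-selected odometer pose of each frame follows by chaining those relative transforms, as in the sketch below; the matching itself is not shown, and the 4x4 homogeneous representation is an assumption.

```python
import numpy as np

def accumulate_odometry(relative_poses):
    """Chain per-pair relative transforms into a to-be-selected odometer pose per frame.

    relative_poses : list of 4x4 transforms mapping frame k into frame k-1
    Returns one 4x4 pose per frame; the first frame defines the odometry origin.
    """
    poses = [np.eye(4)]
    for T_rel in relative_poses:
        poses.append(poses[-1] @ T_rel)
    return poses
```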
The to-be-selected odometer poses inevitably accumulate errors. Therefore, after the to-be-selected odometer pose corresponding to each frame of point cloud information is determined, it can be optimized with a preset optimization algorithm to obtain the target odometer pose corresponding to each frame of point cloud information. For example, an optimization problem can be constructed from the historical frames of point cloud information, and a nonlinear optimization method can be used to optimize the to-be-selected odometer pose of each frame, yielding the target odometer pose corresponding to that frame.
And finally, splicing the target odometer pose corresponding to each frame of point cloud information and the target point cloud set corresponding to the frame of point cloud information by utilizing an SLAM algorithm to generate a point cloud map.
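In its simplest reading, the stitching transforms each frame's target point cloud set with its target odometer pose and concatenates the results into the point cloud map, as sketched below under that assumption (voxel downsampling and other map management are omitted).

```python
import numpy as np

def build_point_cloud_map(target_point_sets, target_odom_poses):
    """Stitch the per-frame target point cloud sets into one map by transforming
    each set with its target odometer pose.

    target_point_sets : list of (N_i, 3) arrays, moving obstacles already removed
    target_odom_poses : list of 4x4 poses (frame -> map) from the optimized odometry
    """
    chunks = []
    for pts, T in zip(target_point_sets, target_odom_poses):
        pts_h = np.hstack([pts, np.ones((len(pts), 1))])
        chunks.append((T @ pts_h.T).T[:, :3])
    return np.vstack(chunks)
```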
In summary, the present disclosure first acquires multi-frame image information and multi-frame point cloud information of the vehicle surroundings, where the image information includes environment images corresponding to a plurality of acquisition areas, the point cloud information includes point cloud data corresponding to those acquisition areas, and the image information corresponds to the point cloud information one to one. The obstacle area corresponding to a moving obstacle in the vehicle surroundings is then determined in each environment image. For each frame of point cloud information, a target obstacle point cloud set corresponding to that frame is determined according to the obstacle area and the point cloud data, and a target point cloud set corresponding to each frame of point cloud information is determined according to the target obstacle point cloud set and the point cloud data. Finally, a point cloud map is generated from the target point cloud sets. Because the target obstacle point cloud set corresponding to the moving obstacle is determined from the obstacle area detected in the environment image, and the point cloud map is generated from the target point cloud sets obtained by removing the target obstacle point cloud sets from the point cloud data, the resulting map contains no moving obstacles, noise in the point cloud map is reduced, and the accuracy of the point cloud map is ensured.
FIG. 4 is a block diagram illustrating a map generation apparatus in accordance with an exemplary embodiment. Referring to fig. 4, the map generating apparatus 200 includes an obtaining module 201, a determining module 202, and a generating module 203.
An acquisition module 201 configured to acquire a plurality of frames of image information and a plurality of frames of point cloud information of a vehicle surrounding environment; the image information comprises environment images corresponding to a plurality of acquisition areas, and the point cloud information comprises point cloud data corresponding to the acquisition areas; the image information corresponds to the point cloud information one by one;
a determination module 202 configured to determine a corresponding obstacle area in each environment image for a moving obstacle in the environment surrounding the vehicle.
The determining module 202 is further configured to determine, for each frame of point cloud information, a target obstacle point cloud set corresponding to the frame of point cloud information according to the obstacle region and the point cloud data.
The determining module 202 is further configured to determine a target point cloud set corresponding to each frame of point cloud information according to the target obstacle point cloud set and the point cloud data.
And the generating module 203 is configured to generate a point cloud map according to the target point cloud set.
Optionally, the determining module 202 is configured to:
and according to the spliced environment image, determining a corresponding obstacle area of the moving obstacle in each environment image included by the frame of image information through a pre-trained obstacle detection model.
FIG. 5 is a block diagram illustrating a determination module according to the embodiment shown in FIG. 4. As shown in fig. 5, the determining module 202 includes:
the first determining submodule 2021 is configured to project, for each acquisition region, each point to be selected in the point cloud data corresponding to the acquisition region included in the frame of point cloud information into a target environment image corresponding to the acquisition region, to obtain a projection point corresponding to the point to be selected, and use the point to be selected corresponding to the projection point matched with the obstacle region as an obstacle point to be selected, to obtain an obstacle point cloud set corresponding to the acquisition region. And the target environment image is an environment image corresponding to the acquisition area and included by the image information corresponding to the frame point cloud information.
The second determining sub-module 2022 is configured to cluster the obstacle points to be selected in the obstacle point cloud set to be selected corresponding to each acquisition area to obtain at least one clustered point cloud set, and use the largest clustered point cloud set in the at least one clustered point cloud set as the target obstacle point cloud set.
Optionally, the determining module 202 is configured to:
and for each frame of point cloud information, removing a target obstacle point cloud set corresponding to the frame of point cloud information from the point cloud data included in the frame of point cloud information to obtain a target point cloud set corresponding to the frame of point cloud information.
FIG. 6 is a block diagram of one generation module shown in accordance with the embodiment shown in FIG. 4. As shown in fig. 6, the generating module 203 includes:
the third determining submodule 2031 is configured to determine, according to the target point cloud set corresponding to each frame of point cloud information, a target odometer pose of the vehicle corresponding to the frame of point cloud information.
The stitching sub-module 2032 is configured to, for each frame of point cloud information, stitch the target odometer pose corresponding to the frame of point cloud information with the target point cloud set corresponding to the frame of point cloud information to obtain a point cloud map.
Optionally, the third determining submodule 2031 is configured to:
and extracting the characteristic points of the target points in each target point cloud set to obtain the target characteristic points corresponding to each target point.
And determining the position and the pose of the to-be-selected odometer of the vehicle corresponding to each frame of point cloud information according to the target feature points.
And optimizing the pose of the to-be-selected odometer by using a preset optimization algorithm to obtain the pose of the target odometer.
With regard to the apparatus in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be described in detail here.
In summary, the present disclosure first acquires multi-frame image information and multi-frame point cloud information of the vehicle surroundings, where the image information includes environment images corresponding to a plurality of acquisition areas, the point cloud information includes point cloud data corresponding to those acquisition areas, and the image information corresponds to the point cloud information one to one. The obstacle area corresponding to a moving obstacle in the vehicle surroundings is then determined in each environment image. For each frame of point cloud information, a target obstacle point cloud set corresponding to that frame is determined according to the obstacle area and the point cloud data, and a target point cloud set corresponding to each frame of point cloud information is determined according to the target obstacle point cloud set and the point cloud data. Finally, a point cloud map is generated from the target point cloud sets. Because the target obstacle point cloud set corresponding to the moving obstacle is determined from the obstacle area detected in the environment image, and the point cloud map is generated from the target point cloud sets obtained by removing the target obstacle point cloud sets from the point cloud data, the resulting map contains no moving obstacles, noise in the point cloud map is reduced, and the accuracy of the point cloud map is ensured.
The present disclosure also provides a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the map generation method provided by the present disclosure.
FIG. 7 is a block diagram of an electronic device shown in accordance with an example embodiment. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 7, electronic device 800 may include one or more of the following components: a processing component 802, a first memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more first processors 820 to execute instructions to perform all or a portion of the steps of the map generation method described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The first memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The first memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the first memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The input/output interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
Sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800, the relative positioning of components, such as a display and keypad of the electronic device 800, the sensor assembly 814 may also detect a change in position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, orientation or acceleration/deceleration of the electronic device 800, and a change in temperature of the electronic device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object in the absence of any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi,2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described map generation methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the first memory 804 comprising instructions, executable by the first processor 820 of the electronic device 800 to perform the map generation method described above is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
FIG. 8 is a functional block diagram schematic of a vehicle shown in an exemplary embodiment. The vehicle 600 may be configured in a fully or partially autonomous driving mode. For example, the vehicle 600 may acquire environmental information around the vehicle through the sensing system 620 and derive an automatic driving strategy based on an analysis of the surrounding environmental information to implement fully automatic driving, or present the analysis results to the user to implement partially automatic driving.
Vehicle 600 may include various subsystems such as infotainment system 610, perception system 620, decision control system 630, drive system 640, and computing platform 650. Alternatively, vehicle 600 may include more or fewer subsystems, and each subsystem may include multiple components. In addition, each of the sub-systems and components of the vehicle 600 may be interconnected by wire or wirelessly.
In some embodiments, the infotainment system 610 may include a communication system 611, an entertainment system 612, and a navigation system 613.
The communication system 611 may include a wireless communication system that can communicate wirelessly with one or more devices, either directly or via a communication network. For example, the wireless communication system may use 3G cellular communication, such as CDMA, EVDO, or GSM/GPRS, 4G cellular communication, such as LTE, or 5G cellular communication. The wireless communication system may communicate with a Wireless Local Area Network (WLAN) using WiFi. In some embodiments, the wireless communication system may communicate directly with a device using an infrared link, Bluetooth, or ZigBee. Other wireless protocols may also be used, such as various vehicular communication systems; for example, the wireless communication system may include one or more Dedicated Short Range Communications (DSRC) devices, which may carry public and/or private data communications between vehicles and/or roadside stations.
The entertainment system 612 may include a display device, a microphone, and a speaker. Based on the entertainment system, a user may listen to the radio or play music in the vehicle; alternatively, a mobile phone may communicate with the vehicle so that the phone's screen is projected onto the display device. The display device may be touch-sensitive, and a user may operate it by touching the screen.
In some cases, the user's voice signal may be captured by the microphone, and certain control of the vehicle 600 by the user, such as adjusting the in-vehicle temperature, may be implemented based on an analysis of that voice signal. In other cases, music may be played to the user through the speaker.
The navigation system 613 may include a map service provided by a map provider to provide route navigation for the vehicle 600, and the navigation system 613 may be used together with the global positioning system 621 and the inertial measurement unit 622 of the vehicle. The map service provided by the map provider may be a two-dimensional map or a high-precision map.
The sensing system 620 may include several sensors that sense information about the environment surrounding the vehicle 600. For example, the sensing system 620 may include a global positioning system 621 (which may be a GPS system, a BeiDou system, or another positioning system), an Inertial Measurement Unit (IMU) 622, a lidar 623, a millimeter-wave radar 624, an ultrasonic radar 625, and a camera 626. The sensing system 620 may also include sensors that monitor internal systems of the vehicle 600 (e.g., an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensor data from one or more of these sensors may be used to detect objects and their corresponding characteristics (position, shape, orientation, velocity, etc.). Such detection and identification are critical to the safe operation of the vehicle 600.
Global positioning system 621 is used to estimate the geographic location of vehicle 600.
The inertial measurement unit 622 is used to sense a pose change of the vehicle 600 based on the inertial acceleration. In some embodiments, inertial measurement unit 622 may be a combination of accelerometers and gyroscopes.
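As a purely illustrative sketch (not part of the disclosure), a pose change can be approximated from IMU samples by integrating the gyroscope rate for orientation and double-integrating the acceleration for position; the sampling period dt, the small-angle rotation update, and the assumption that gravity has already been removed from the accelerometer reading are simplifications introduced here:

import numpy as np

def integrate_imu(position, velocity, rotation, accel, gyro, dt):
    """Propagate a simple dead-reckoning state by one IMU sample.

    position, velocity: 3-vectors in the world frame.
    rotation: 3x3 world-from-body rotation matrix.
    accel: body-frame acceleration (m/s^2), gravity removed (assumption).
    gyro: body-frame angular rate (rad/s).
    """
    # Small-angle update of the orientation from the gyroscope rate.
    wx, wy, wz = gyro * dt
    skew = np.array([[0.0, -wz, wy], [wz, 0.0, -wx], [-wy, wx, 0.0]])
    rotation = rotation @ (np.eye(3) + skew)
    # Rotate the acceleration into the world frame, then integrate twice.
    world_accel = rotation @ accel
    position = position + velocity * dt + 0.5 * world_accel * dt ** 2
    velocity = velocity + world_accel * dt
    return position, velocity, rotation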
Lidar 623 utilizes laser light to sense objects in the environment in which vehicle 600 is located. In some embodiments, lidar 623 may include one or more laser sources, laser scanners, and one or more detectors, among other system components.
The millimeter-wave radar 624 utilizes radio signals to sense objects within the surrounding environment of the vehicle 600. In some embodiments, in addition to sensing objects, the millimeter-wave radar 624 may also be used to sense the speed and/or heading of objects.
The ultrasonic radar 625 may sense objects around the vehicle 600 using ultrasonic signals.
The camera 626 is used to capture image information of the surroundings of the vehicle 600. The camera 626 may include a monocular camera, a binocular camera, a structured-light camera, a panoramic camera, and the like, and the image information acquired by the camera 626 may include still images or video stream information.
The decision control system 630 includes a computing system 631 that makes analytical decisions based on the information acquired by the sensing system 620. The decision control system 630 further includes a vehicle control unit 632 that controls the powertrain of the vehicle 600, as well as a steering system 633, a throttle 634, and a braking system 635 for controlling the vehicle 600.
The computing system 631 may operate to process and analyze the various information acquired by the perception system 620 in order to identify objects and/or features in the environment surrounding the vehicle 600. The objects may include pedestrians or animals, and the features may include traffic signals, road boundaries, and obstacles. The computing system 631 may use object recognition algorithms, Structure From Motion (SFM) algorithms, video tracking, and the like. In some embodiments, the computing system 631 may be used to map the environment, track objects, estimate the speed of objects, and so forth. The computing system 631 may analyze the acquired information and derive a control strategy for the vehicle.
The vehicle control unit 632 may be used to coordinate control of the power battery and the engine 641 of the vehicle to improve the power performance of the vehicle 600.
The steering system 633 is operable to adjust the heading of the vehicle 600. For example, in one embodiment, it may be a steering wheel system.
The throttle 634 is used to control the operating speed of the engine 641 and, in turn, the speed of the vehicle 600.
The braking system 635 is used to control the deceleration of the vehicle 600. The braking system 635 may use friction to slow the wheel 644. In some embodiments, the braking system 635 may convert the kinetic energy of the wheels 644 into electrical current. The braking system 635 may also take other forms to slow the rotational speed of the wheels 644 to control the speed of the vehicle 600.
The drive system 640 may include components that provide powered motion for the vehicle 600. In one embodiment, the drive system 640 may include an engine 641, an energy source 642, a transmission 643, and wheels 644. The engine 641 may be an internal combustion engine, an electric motor, an air compression engine, or another type of engine or combination thereof, such as a hybrid engine consisting of a gasoline engine and an electric motor, or a hybrid engine consisting of an internal combustion engine and an air compression engine. The engine 641 converts the energy source 642 into mechanical energy.
Examples of energy sources 642 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electrical power. The energy source 642 may also provide energy to other systems of the vehicle 600.
The transmission 643 may transmit mechanical power from the engine 641 to the wheels 644. The transmission 643 may include a gearbox, a differential, and a drive shaft. In one embodiment, the transmission 643 may also include other components, such as clutches. The drive shaft may include one or more axles that may be coupled to one or more of the wheels 644.
Some or all of the functionality of the vehicle 600 is controlled by the computing platform 650. Computing platform 650 can include at least one second processor 651, which second processor 651 can execute instructions 653 stored in a non-transitory computer-readable medium, such as second memory 652. In some embodiments, the computing platform 650 may also be a plurality of computing devices that control individual components or subsystems of the vehicle 600 in a distributed manner.
The second processor 651 may be any conventional processor, such as a commercially available CPU. Alternatively, the second processor 651 may also include a processor such as a Graphics Processing Unit (GPU), a Field Programmable Gate Array (FPGA), a System on Chip (SOC), an Application Specific Integrated Circuit (ASIC), or a combination thereof. Although FIG. 8 functionally illustrates a processor, memory, and other elements of a computer in the same block, those skilled in the art will appreciate that the processor, computer, or memory may actually comprise multiple processors, computers, or memories that may or may not be stored within the same physical housing. For example, the memory may be a hard drive or other storage medium located in a different enclosure than the computer. Thus, reference to a processor or computer will be understood to include reference to a collection of processors, computers, or memories that may or may not operate in parallel. Rather than using a single processor to perform the steps described herein, some components, such as the steering component and the deceleration component, may each have their own processor that performs only computations related to the component-specific functions.
In the disclosed embodiment, the second processor 651 may perform the map generation method described above.
In various aspects described herein, the second processor 651 can be located remotely from the vehicle and in wireless communication with the vehicle. In other aspects, some of the processes described herein are executed on a processor disposed within the vehicle and others are executed by a remote processor, including taking the steps necessary to execute a single maneuver.
In some embodiments, the second memory 652 may contain instructions 653 (e.g., program logic), which instructions 653 may be executed by the second processor 651 to perform various functions of the vehicle 600. The second memory 652 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the infotainment system 610, the perception system 620, the decision control system 630, and the drive system 640.
In addition to instructions 653, second memory 652 may also store data such as road maps, route information, the location, direction, speed, and other such vehicle data of the vehicle, as well as other information. Such information may be used by the vehicle 600 and the computing platform 650 during operation of the vehicle 600 in autonomous, semi-autonomous, and/or manual modes.
Computing platform 650 may control functions of vehicle 600 based on inputs received from various subsystems (e.g., drive system 640, perception system 620, and decision control system 630). For example, computing platform 650 may utilize input from decision control system 630 in order to control steering system 633 to avoid obstacles detected by perception system 620. In some embodiments, the computing platform 650 is operable to provide control over many aspects of the vehicle 600 and its subsystems.
Optionally, one or more of these components described above may be mounted or associated separately from the vehicle 600. For example, the second memory 652 may exist partially or completely separate from the vehicle 600. The aforementioned components may be communicatively coupled together in a wired and/or wireless manner.
Optionally, the above components are only an example; in practical applications, components in the above modules may be added or removed according to actual needs, and FIG. 8 should not be construed as limiting the embodiments of the present disclosure.
An autonomous automobile traveling on a road, such as the vehicle 600 above, may identify objects within its surrounding environment to determine an adjustment to its current speed. The object may be another vehicle, a traffic control device, or another type of object. In some examples, each identified object may be considered independently, and the speed to which the autonomous vehicle is to be adjusted may be determined based on the respective characteristics of the object, such as its current speed, acceleration, distance from the vehicle, and the like.
Optionally, the vehicle 600 or a sensing and computing device associated with the vehicle 600 (e.g., the computing system 631, the computing platform 650) may predict the behavior of an identified object based on the characteristics of the identified object and the state of the surrounding environment (e.g., traffic, rain, ice on the road, etc.). Optionally, since the identified objects depend on each other's behavior, all of the identified objects may also be considered together to predict the behavior of a single identified object. The vehicle 600 is able to adjust its speed based on the predicted behavior of the identified objects. In other words, the autonomous vehicle is able to determine what stable state the vehicle will need to adjust to (e.g., accelerate, decelerate, or stop) based on the predicted behavior of the objects. In this process, other factors may also be considered to determine the speed of the vehicle 600, such as the lateral position of the vehicle 600 in the road being traveled, the curvature of the road, the proximity of static and dynamic objects, and so forth.
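Purely as an illustration of this decision process (the rule and the thresholds below are assumptions, not taken from the disclosure), a minimal speed-adjustment policy might look like:

def choose_speed_action(ego_speed, gap_m, object_speed, min_gap_m=10.0, time_headway_s=2.0):
    """Return 'accelerate', 'decelerate', or 'stop' from the ego speed (m/s), the gap to the
    object ahead (m), and the object's speed (m/s); all thresholds are illustrative."""
    desired_gap = min_gap_m + ego_speed * time_headway_s
    if gap_m < min_gap_m:
        return "stop"
    if gap_m < desired_gap or object_speed < ego_speed:
        return "decelerate"
    return "accelerate"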
In addition to providing instructions to adjust the speed of the autonomous vehicle, the computing device may also provide instructions to modify the steering angle of the vehicle 600 to cause the autonomous vehicle to follow a given trajectory and/or maintain a safe lateral and longitudinal distance from objects in the vicinity of the autonomous vehicle (e.g., vehicles in adjacent lanes on the road).
The vehicle 600 may be any type of vehicle, such as a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a recreational vehicle, or a train, and the embodiments of the present disclosure are not particularly limited in this respect.
In another exemplary embodiment, a computer program product is also provided, which comprises a computer program executable by a programmable apparatus, the computer program having code portions for performing the above-described map generation method when executed by the programmable apparatus.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements that have been described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A map generation method, characterized in that the method comprises:
acquiring multiple frames of image information and multiple frames of point cloud information of the environment surrounding a vehicle, wherein the image information comprises environment images corresponding to a plurality of acquisition areas, the point cloud information comprises point cloud data corresponding to the acquisition areas, and the frames of image information correspond to the frames of point cloud information one to one;
determining, in each of the environment images, an obstacle area corresponding to a moving obstacle in the environment surrounding the vehicle;
for each frame of point cloud information, determining a target obstacle point cloud set corresponding to the frame of point cloud information according to the obstacle area and the point cloud data;
determining a target point cloud set corresponding to each frame of point cloud information according to the target obstacle point cloud set and the point cloud data;
and generating a point cloud map according to the target point cloud set.
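By way of a non-limiting illustration, the overall flow of claim 1 can be sketched in Python as follows; the data layout of frames and calib, the external obstacle_detector, and the helper functions (sketched after claims 2 to 5 below) are all assumptions made for the sketch rather than features of the claim:

import numpy as np

def build_point_cloud_map(frames, calib, obstacle_detector, odometer_poses):
    """Sketch of the claimed flow. Each element of frames is assumed to hold, per acquisition
    area, one environment image (frame["images"][area]) and one (N, 3) point cloud
    (frame["clouds"][area]); calib[area] holds the camera intrinsics K and the
    camera-from-lidar extrinsics T for that area; odometer_poses are the target odometer
    poses of claims 5 and 6, taken as given here.
    """
    target_point_sets = []
    for frame in frames:
        # Obstacle areas of moving obstacles in each environment image (claim 2).
        boxes_per_area = detect_obstacle_regions(frame["images"], obstacle_detector)
        frame_points, obstacle_points = [], []
        for area, points in enumerate(frame["clouds"]):
            K, T = calib[area]
            # Target obstacle point cloud set for this acquisition area (claim 3).
            obstacle_points.append(select_obstacle_points(points, K, T, boxes_per_area[area]))
            frame_points.append(points)
        frame_points = np.concatenate(frame_points, axis=0)
        obstacle_points = np.concatenate(obstacle_points, axis=0)
        # Remove the obstacle points from the frame's point cloud data (claim 4).
        target_point_sets.append(remove_points(frame_points, obstacle_points))
    # Stitch the per-frame target point clouds into the point cloud map (claim 5).
    return stitch_map(target_point_sets, odometer_poses)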
2. The method of claim 1, wherein the determining, in each of the environment images, an obstacle area corresponding to a moving obstacle in the environment surrounding the vehicle comprises:
for each frame of image information, stitching all the environment images included in the frame of image information to obtain a stitched environment image corresponding to the frame of image information, and determining, according to the stitched environment image and through a pre-trained obstacle detection model, the obstacle area corresponding to the moving obstacle in each environment image included in the frame of image information.
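A minimal sketch of this step, assuming OpenCV's Stitcher for the image stitching and a generic pre-trained detector object (both are assumptions made for illustration; the disclosure does not specify either component):

import cv2

def detect_obstacle_regions(environment_images, obstacle_detector):
    """Stitch one frame's environment images and detect moving-obstacle areas.

    obstacle_detector is a hypothetical pre-trained model whose detect(image) method is
    assumed to return, for each acquisition area, a list of (x_min, y_min, x_max, y_max)
    boxes expressed in that area's environment image coordinates.
    """
    stitcher = cv2.Stitcher_create()
    status, stitched = stitcher.stitch(environment_images)
    if status != cv2.Stitcher_OK:
        raise RuntimeError("image stitching failed")
    return obstacle_detector.detect(stitched)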
3. The method according to claim 1, wherein determining a target obstacle point cloud set corresponding to the frame of point cloud information according to the obstacle area and the point cloud data comprises:
for each acquisition area, projecting each candidate point in the point cloud data, included in the frame of point cloud information, that corresponds to the acquisition area onto a target environment image corresponding to the acquisition area to obtain a projection point corresponding to the candidate point, and taking the candidate points whose projection points match the obstacle area as candidate obstacle points to obtain a candidate obstacle point cloud set corresponding to the acquisition area, wherein the target environment image is the environment image corresponding to the acquisition area that is included in the image information corresponding to the frame of point cloud information; and
for each acquisition area, clustering the candidate obstacle points in the candidate obstacle point cloud set corresponding to the acquisition area to obtain at least one clustered point cloud set, and taking the largest clustered point cloud set among the at least one clustered point cloud set as the target obstacle point cloud set.
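A minimal sketch of the projection-and-clustering step, assuming a pinhole camera model with intrinsics K and a 4x4 camera-from-lidar extrinsic transform T, and using DBSCAN in place of the unspecified clustering method (all assumptions made for the sketch):

import numpy as np
from sklearn.cluster import DBSCAN

def select_obstacle_points(points_lidar, K, T_cam_from_lidar, obstacle_boxes):
    """Keep the largest cluster of points whose image projections fall inside an obstacle box.

    points_lidar: (N, 3) candidate points of one acquisition area;
    obstacle_boxes: list of (x_min, y_min, x_max, y_max) boxes in pixels.
    """
    # Transform the candidate points into the camera frame and project with the pinhole model.
    homo = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_from_lidar @ homo.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0.1
    uvw = (K @ pts_cam.T).T
    uv = uvw[:, :2] / np.where(np.abs(uvw[:, 2:3]) < 1e-9, 1e-9, uvw[:, 2:3])
    # A candidate point matches the obstacle area if its projection lies inside any box.
    inside = np.zeros(len(points_lidar), dtype=bool)
    for x0, y0, x1, y1 in obstacle_boxes:
        inside |= (uv[:, 0] >= x0) & (uv[:, 0] <= x1) & (uv[:, 1] >= y0) & (uv[:, 1] <= y1)
    candidates = points_lidar[in_front & inside]
    if len(candidates) == 0:
        return candidates
    # Cluster the candidate obstacle points and keep the largest cluster as the target set.
    labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(candidates)
    valid = labels[labels >= 0]
    if len(valid) == 0:
        return candidates[:0]
    largest = np.bincount(valid).argmax()
    return candidates[labels == largest]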
4. The method of claim 1, wherein determining a target point cloud set corresponding to each frame of point cloud information from the target obstacle point cloud set and the point cloud data comprises:
for each frame of point cloud information, removing the target obstacle point cloud set corresponding to the frame of point cloud information from the point cloud data included in the frame of point cloud information to obtain the target point cloud set corresponding to the frame of point cloud information.
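In the simplest form, assuming the target obstacle point cloud set is a row subset of the frame's point cloud array, the removal is a set difference (a sketch, not the disclosed implementation):

import numpy as np

def remove_points(frame_points, obstacle_points):
    """Return frame_points with every row that also appears in obstacle_points removed."""
    obstacle_keys = {tuple(p) for p in np.round(obstacle_points, 6)}
    keep = np.array([tuple(p) not in obstacle_keys for p in np.round(frame_points, 6)], dtype=bool)
    return frame_points[keep]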
5. The method of claim 1, wherein generating a point cloud map from the set of target point clouds comprises:
determining the target odometer pose of the vehicle corresponding to each frame of point cloud information according to the target point cloud set corresponding to each frame of point cloud information;
and for each frame of point cloud information, stitching the target odometer pose corresponding to the frame of point cloud information and the target point cloud set corresponding to the frame of point cloud information to obtain the point cloud map.
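A minimal stitching sketch, assuming each target odometer pose is expressed as a 4x4 world-from-vehicle transform and adding an optional voxel downsampling step (both assumptions made for the sketch):

import numpy as np

def stitch_map(target_point_sets, odometer_poses, voxel_size=0.2):
    """Transform each frame's target point cloud by its odometer pose and merge the result."""
    world_points = []
    for points, pose in zip(target_point_sets, odometer_poses):
        homo = np.hstack([points, np.ones((len(points), 1))])
        world_points.append((pose @ homo.T).T[:, :3])
    merged = np.concatenate(world_points, axis=0)
    # Keep one representative point per voxel to bound the size of the map.
    _, idx = np.unique(np.floor(merged / voxel_size).astype(np.int64), axis=0, return_index=True)
    return merged[idx]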
6. The method of claim 5, wherein determining the target odometer pose of the vehicle corresponding to each frame of point cloud information according to the target point cloud set corresponding to each frame of point cloud information comprises:
extracting feature points from the target points in each target point cloud set to obtain target feature points corresponding to the target points;
determining a candidate odometer pose of the vehicle corresponding to each frame of point cloud information according to the target feature points;
and optimizing the candidate odometer pose by using a preset optimization algorithm to obtain the target odometer pose.
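For illustration only, the candidate odometer pose between consecutive frames can be estimated by registering their target point clouds; the sketch below uses a plain point-to-point ICP with a closed-form SVD alignment step in place of the feature extraction and the preset optimization algorithm, neither of which is specified by the claim:

import numpy as np
from scipy.spatial import cKDTree

def estimate_relative_pose(source, target, iterations=20):
    """Estimate the 4x4 rigid transform aligning source onto target (both (N, 3) arrays)."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    for _ in range(iterations):
        moved = source @ R.T + t
        _, nearest = tree.query(moved)
        matched = target[nearest]
        # Closed-form SVD (Kabsch) alignment of the matched pairs.
        src_c, dst_c = moved.mean(axis=0), matched.mean(axis=0)
        H = (moved - src_c).T @ (matched - dst_c)
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:
            Vt[-1, :] *= -1
            R_step = Vt.T @ U.T
        t_step = dst_c - R_step @ src_c
        R, t = R_step @ R, R_step @ t + t_step
    pose = np.eye(4)
    pose[:3, :3], pose[:3, 3] = R, t
    return pose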
7. A map generation apparatus, characterized in that the apparatus comprises:
an acquisition module configured to acquire multiple frames of image information and multiple frames of point cloud information of the environment surrounding a vehicle, wherein the image information comprises environment images corresponding to a plurality of acquisition areas, the point cloud information comprises point cloud data corresponding to the acquisition areas, and the frames of image information correspond to the frames of point cloud information one to one;
a determination module configured to determine, in each of the environment images, an obstacle area corresponding to a moving obstacle in the environment surrounding the vehicle;
the determination module is further configured to determine, for each frame of point cloud information, a target obstacle point cloud set corresponding to the frame of point cloud information according to the obstacle area and the point cloud data;
the determination module is further configured to determine a target point cloud set corresponding to each frame of point cloud information according to the target obstacle point cloud set and the point cloud data;
and a generation module configured to generate a point cloud map according to the target point cloud set.
8. An electronic device, comprising:
a first processor;
a first memory for storing first processor-executable instructions;
wherein the first processor is configured to perform the steps of the method of any one of claims 1 to 6.
9. A vehicle, characterized by comprising:
a second processor;
a second memory for storing second processor-executable instructions;
wherein the second processor is configured to perform the steps of the method of any one of claims 1 to 6.
10. A computer-readable storage medium, on which computer program instructions are stored, which program instructions, when executed by a processor, carry out the steps of the method according to any one of claims 1 to 6.
CN202210778725.8A 2022-06-30 2022-06-30 Map generation method, map generation device, electronic equipment, vehicle and storage medium Active CN115170630B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210778725.8A CN115170630B (en) 2022-06-30 2022-06-30 Map generation method, map generation device, electronic equipment, vehicle and storage medium

Publications (2)

Publication Number Publication Date
CN115170630A true CN115170630A (en) 2022-10-11
CN115170630B CN115170630B (en) 2023-11-21

Family

ID=83492064

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210778725.8A Active CN115170630B (en) 2022-06-30 2022-06-30 Map generation method, map generation device, electronic equipment, vehicle and storage medium

Country Status (1)

Country Link
CN (1) CN115170630B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113639745A (en) * 2021-08-03 2021-11-12 北京航空航天大学 Point cloud map construction method and device and storage medium
CN114353799A (en) * 2021-12-30 2022-04-15 武汉大学 Indoor rapid global positioning method for unmanned platform carrying multi-line laser radar

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHENG ZOU et al.: "Static map reconstruction and dynamic object tracking for a camera and laser scanner system", HTTPS://IETRESEARCH.ONLINELIBRARY.WILEY.COM/DOI/FULL/10.1049/IET-CVI.2017.0308, pages 384-392 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116883496A (en) * 2023-06-26 2023-10-13 小米汽车科技有限公司 Coordinate reconstruction method and device for traffic element, electronic equipment and storage medium
CN116883496B (en) * 2023-06-26 2024-03-12 小米汽车科技有限公司 Coordinate reconstruction method and device for traffic element, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN115170630B (en) 2023-11-21

Similar Documents

Publication Publication Date Title
CN114882464B (en) Multi-task model training method, multi-task processing method, device and vehicle
CN114935334B (en) Construction method and device of lane topological relation, vehicle, medium and chip
CN115222941A (en) Target detection method and device, vehicle, storage medium, chip and electronic equipment
CN115205365A (en) Vehicle distance detection method and device, vehicle, readable storage medium and chip
CN115170630B (en) Map generation method, map generation device, electronic equipment, vehicle and storage medium
CN114842455B (en) Obstacle detection method, device, equipment, medium, chip and vehicle
CN115164910B (en) Travel route generation method, travel route generation device, vehicle, storage medium, and chip
CN114771539B (en) Vehicle lane change decision method and device, storage medium and vehicle
CN115205311B (en) Image processing method, device, vehicle, medium and chip
CN114756700B (en) Scene library establishing method and device, vehicle, storage medium and chip
CN114973178A (en) Model training method, object recognition method, device, vehicle and storage medium
CN114880408A (en) Scene construction method, device, medium and chip
CN114862931A (en) Depth distance determination method and device, vehicle, storage medium and chip
CN114537450A (en) Vehicle control method, device, medium, chip, electronic device and vehicle
CN114842454B (en) Obstacle detection method, device, equipment, storage medium, chip and vehicle
CN115221260B (en) Data processing method, device, vehicle and storage medium
CN114821511B (en) Rod body detection method and device, vehicle, storage medium and chip
CN115115822B (en) Vehicle-end image processing method and device, vehicle, storage medium and chip
CN114771514B (en) Vehicle running control method, device, equipment, medium, chip and vehicle
CN114822216B (en) Method and device for generating parking space map, vehicle, storage medium and chip
CN115082886B (en) Target detection method, device, storage medium, chip and vehicle
CN114789723B (en) Vehicle running control method and device, vehicle, storage medium and chip
CN115214629B (en) Automatic parking method, device, storage medium, vehicle and chip
CN115535004B (en) Distance generation method, device, storage medium and vehicle
CN115223122A (en) Method and device for determining three-dimensional information of object, vehicle and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant