CN112785704A - Semantic map construction method, computer readable storage medium and processor

Info

Publication number
CN112785704A
Authority
CN
China
Prior art keywords
obstacle
area
point cloud
points
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110044266.6A
Other languages
Chinese (zh)
Other versions
CN112785704B (en)
Inventor
赵祈杰
李德聆
陈泽佳
刘炜铭
周筠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaoma Huixing Technology Co ltd
Original Assignee
Beijing Xiaoma Huixing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaoma Huixing Technology Co ltd filed Critical Beijing Xiaoma Huixing Technology Co ltd
Priority to CN202110044266.6A priority Critical patent/CN112785704B/en
Publication of CN112785704A publication Critical patent/CN112785704A/en
Application granted granted Critical
Publication of CN112785704B publication Critical patent/CN112785704B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/80 Shading
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Geometry (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Remote Sensing (AREA)
  • Traffic Control Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a semantic map construction method, a computer readable storage medium and a processor. The method comprises the following steps: determining an obstacle area in a 3D point cloud map; acquiring and storing position information of a plurality of obstacle points of the obstacle area in the 3D point cloud map; and generating the semantic map according to the position information of the plurality of obstacle points in the case that the obstacle area is not detected within a predetermined time period. Because the position information of the plurality of obstacle points of the obstacle area has been acquired, the semantic map can still be generated from that position information once the obstacle area is no longer detected, so that a construction area can be accurately identified. The semantic map can also distinguish different objects by displaying different obstacles in different colors, giving it a high semantic level and solving the prior-art problem of lacking a map with a high semantic level that can accurately identify a construction area.

Description

Semantic map construction method, computer readable storage medium and processor
Technical Field
The application relates to the field of automatic driving, in particular to a semantic map construction method, a computer readable storage medium, a processor and an intelligent driving vehicle.
Background
In the prior art, construction areas often appear on roads, which can affect the running of vehicles.
A 3D point cloud map in the prior art cannot indicate what object a given point cloud corresponds to; that is, objects are not distinguished and the semantic level is low. This makes it difficult for a vehicle to identify a construction area accurately and efficiently, and to adjust its subsequent driving strategy accurately and efficiently according to information about the construction area.
Therefore, a map with a higher semantic level, which can accurately identify a construction area, is needed.
The above information disclosed in this background section is only intended to enhance understanding of the background of the technology described herein; therefore, it may contain information that does not form prior art already known in this country to a person of ordinary skill in the art.
Disclosure of Invention
The application mainly aims to provide a semantic map construction method, a computer readable storage medium, a processor and an intelligent driving vehicle, so as to solve the prior-art problem of lacking a map with a higher semantic level that can accurately identify a construction area.
According to an aspect of the embodiments of the present invention, there is provided a semantic map construction method, including: determining an obstacle area in a 3D point cloud map, wherein the obstacle area is an area comprising obstacles; acquiring and storing position information of a plurality of obstacle points of the obstacle area in a 3D point cloud map, wherein the obstacle points are points in the obstacle area; and in the case that the obstacle area is not detected within a preset time period, generating the semantic map according to the position information of the obstacle points.
Optionally, determining an obstacle area in the 3D point cloud map comprises: acquiring a 2D image to obtain the obstacle area in the 2D image; and determining the obstacle area in the 3D point cloud map according to the 2D image and the corresponding 3D point cloud map.
Optionally, acquiring a 2D image to obtain the obstacle region in the 2D image includes: acquiring the 2D image; determining whether the 2D image includes a first obstacle region and a second obstacle region, wherein the first obstacle region is an obstacle region whose area is smaller than a predetermined area and the second obstacle region is an obstacle region whose area is greater than or equal to the predetermined area; and acquiring a frame (bounding box) of the first obstacle region when the 2D image includes the first obstacle region.
Optionally, determining the obstacle area in the 3D point cloud map according to the 2D image and the corresponding 3D point cloud map, further comprising: projecting the bounding box into the 3D point cloud map; determining a point cloud area corresponding to the frame as an obstacle point cloud area; determining the obstacle point cloud area as the obstacle area in the 3D point cloud map.
Optionally, determining the obstacle area in the 3D point cloud map according to the 2D image and the corresponding 3D point cloud map includes: under the condition that the second obstacle area is included in the 2D image, projecting the point cloud in the 3D point cloud map into the 2D image, and determining an obstacle point cloud area in the 3D point cloud map corresponding to the obstacle area in the 2D image; determining the obstacle point cloud area as the obstacle area in the 3D point cloud map.
Optionally, projecting the point cloud in the 3D point cloud map into the 2D image, and determining an obstacle point cloud area in the 3D point cloud map corresponding to the obstacle area in the 2D image, including: projecting the point cloud in the 3D point cloud map into the 2D image, and determining obstacle area points, wherein the obstacle area points are points in the 3D point cloud map corresponding to the obstacle area in the 2D image; and determining the obstacle point cloud area according to the obstacle area points based on an object segmentation model of the 3D point cloud.
Optionally, acquiring and storing position information of a plurality of obstacle points of the obstacle area in the 3D point cloud map, including: extracting first obstacle points, wherein the first obstacle points are all points contained in the first obstacle area in the 3D point cloud map; and acquiring and storing the position information of the first obstacle point.
Optionally, acquiring and storing position information of a plurality of obstacle points of the obstacle area in the 3D point cloud map includes: dividing the second obstacle area into a plurality of obstacle meshes; and determining the center of each obstacle mesh as a second obstacle point, and acquiring and storing the position information of the second obstacle point.
Optionally, after determining an obstacle area in the 3D point cloud map and before acquiring and storing position information of a plurality of obstacle points of the obstacle area in the 3D point cloud map, the method further includes: tracking the obstacle area by adopting a binary (bipartite) matching method to obtain a tracking result; and determining a predetermined obstacle area according to the tracking result, wherein the predetermined obstacle area is an obstacle area meeting a predetermined condition, and the predetermined condition includes at least one of the following: the tracking time is longer than a predetermined time, the number of points of the obstacle area is larger than a predetermined number of points, and the obstacle area is observed by both a laser radar and an image acquisition device. Acquiring and storing position information of a plurality of obstacle points of the obstacle area in the 3D point cloud map then includes: acquiring and storing the position information of a plurality of obstacle points of the predetermined obstacle area in the 3D point cloud map.
Optionally, in a case where the obstacle area is not detected within a predetermined time period, after the semantic map is generated according to the position information of the plurality of obstacle points, the method further includes: updating the driving route according to the semantic map.
According to another aspect of the embodiments of the present invention, there is also provided a semantic map construction method, including: the method comprises the steps that an intelligent driving vehicle determines an obstacle area in a 3D point cloud map, wherein the obstacle area is an area including obstacles; the intelligent driving vehicle acquires and stores position information of a plurality of obstacle points of the obstacle area in a 3D point cloud map, wherein the obstacle points are points in the obstacle area; and under the condition that the obstacle area is not detected within a preset time period, the intelligent driving vehicle sends the position information of the plurality of obstacle points to a service end, and the service end generates the semantic map according to the position information of the plurality of obstacle points.
Optionally, the intelligently driven vehicle determines an obstacle area in the 3D point cloud map, including: the intelligent driving vehicle acquires a 2D image to obtain an obstacle area in the 2D image; and the intelligent driving vehicle determines the obstacle area in the 3D point cloud map according to the 2D image and the corresponding 3D point cloud map.
Optionally, the acquiring, by the smart driving vehicle, a 2D image to obtain an obstacle area in the 2D image includes: the intelligent driving vehicle acquires a 2D image; the intelligent driving vehicle determines whether a first obstacle region and a second obstacle region are included in the 2D image, wherein the first obstacle region is the obstacle region with the area smaller than a preset area, and the second obstacle region is the obstacle region with the area larger than or equal to the preset area; the intelligent driving vehicle acquires a frame of the first obstacle area under the condition that the 2D image includes the first obstacle area.
Optionally, the acquiring and storing the position information of the plurality of obstacle points of the obstacle area in the 3D point cloud map by the intelligent driving vehicle includes: extracting first obstacle points, wherein the first obstacle points are all points contained in the first obstacle area in the 3D point cloud map; and acquiring and storing the position information of the first obstacle point.
Optionally, the acquiring and storing the position information of the plurality of obstacle points of the obstacle area in the 3D point cloud map by the intelligent driving vehicle includes: dividing the second obstacle area into a plurality of obstacle meshes; and determining the center of each obstacle mesh as a second obstacle point, and acquiring and storing the position information of the second obstacle point.
Optionally, the sending, by the intelligent driving vehicle, the position information of the plurality of obstacle points to a server includes: deleting, from the plurality of obstacle points, the obstacle points that do not appear in a predetermined number of consecutive frames of the map, the remaining points being reserved obstacle points; clustering the reserved obstacle points to obtain a plurality of point clusters; generating a concave hull for a point cluster if the reserved obstacle points in that point cluster are first obstacle points; generating a convex hull for a point cluster if the reserved obstacle points in that point cluster are second obstacle points; and sending the convex hulls and the concave hulls to the server.
According to still another aspect of embodiments of the present invention, there is also provided a computer-readable storage medium including a stored program, wherein the program executes any one of the methods.
According to another aspect of the embodiments of the present invention, there is also provided a processor, configured to execute a program, where the program executes any one of the methods.
According to still another aspect of the embodiments of the present invention, there is also provided an intelligent driving vehicle including: one or more processors, a memory, and one or more programs stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods described herein.
In the embodiment of the invention, an obstacle area in a 3D point cloud map is first determined; position information of a plurality of obstacle points in the 3D map is then acquired according to the determined obstacle area, the obstacle points being points in the obstacle area, and the acquired position information is stored; finally, in the case that the obstacle area is not detected within a predetermined time period, a semantic map is generated according to the position information of the plurality of obstacle points. Because the position information of the plurality of obstacle points of the obstacle area has been acquired, the semantic map can still be generated from that position information when the obstacle area is no longer detected, so that a construction area can be accurately identified. The semantic map can also distinguish different objects by displaying different obstacles in different colors, giving it a high semantic level. This solves the prior-art problem of lacking a map with a high semantic level that can accurately identify a construction area, allows the driving strategy of the vehicle to be subsequently adjusted accurately and efficiently according to the semantic map, and thus ensures the driving safety of the vehicle.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application. In the drawings:
FIG. 1 is a flowchart of a semantic map construction method according to an embodiment of the application;
FIG. 2 is a flowchart of another semantic map construction method according to an embodiment of the application;
FIG. 3 is a schematic structural diagram of a semantic map construction apparatus according to an embodiment of the present application.
Detailed Description
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description, claims, and drawings of this application are used to distinguish between similar elements and are not necessarily used to describe a particular sequence or chronological order. It should be understood that data so used may be interchanged under appropriate circumstances, so that the embodiments of the application described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, system, article, or apparatus.
As mentioned in the background, the prior art lacks a semantic map capable of accurately identifying a construction area, and in order to solve the above problems, in an exemplary embodiment of the present application, a semantic map construction method, a computer-readable storage medium, a processor, and an intelligent driving vehicle are provided.
According to the embodiment of the application, a semantic map construction method is provided. Fig. 1 is a flowchart of a semantic map construction method according to an embodiment of the present application. As shown in fig. 1, the method comprises the steps of:
step S101, determining an obstacle area in a 3D point cloud map, wherein the obstacle area is an area including obstacles, and the distance between any two adjacent obstacles in the obstacle area is smaller than a preset distance;
step S102, obtaining and storing position information of a plurality of obstacle points of the obstacle area in a 3D point cloud map, wherein the obstacle points are points in the obstacle area;
step S103, generating the semantic map according to the position information of the plurality of obstacle points in the case that the obstacle area is not detected within a predetermined time period.
In the method, an obstacle area in a 3D point cloud map is first determined; position information of a plurality of obstacle points in the 3D map is then acquired according to the determined obstacle area, the obstacle points being points in the obstacle area, and the acquired position information is stored; finally, the semantic map is generated according to the position information of the plurality of obstacle points in the case that the obstacle area is not detected within a predetermined time period. Because the position information of the plurality of obstacle points of the obstacle area has been acquired, the semantic map can still be generated from that position information when the obstacle area is no longer detected, so that a construction area can be accurately identified. The semantic map can also distinguish different objects by displaying different obstacles in different colors, giving it a high semantic level. This solves the prior-art problem of lacking a map with a high semantic level that can accurately identify a construction area, allows the driving strategy of the vehicle to be subsequently adjusted accurately and efficiently according to the semantic map, and thus ensures the driving safety of the vehicle.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, for example as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps illustrated or described may be performed in an order different from the one presented herein.
In practical applications, the obstacles in the obstacle area may be traffic cones, individual barricades, and/or water-filled barriers (water horses), etc.
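As an illustrative sketch of the three-step flow above (not part of the claimed method), the following Python pseudimplementation shows how obstacle points might be accumulated per area and committed to the semantic map once an area has gone undetected; all names (detect_obstacle_areas, extract_obstacle_points, commit_to_semantic_map) and the 30 s timeout are assumptions.

```python
from dataclasses import dataclass, field
import time

@dataclass
class ObstacleStore:
    points_by_area: dict = field(default_factory=dict)  # area id -> stored obstacle points
    last_seen: dict = field(default_factory=dict)        # area id -> last detection time

def build_semantic_map(frames, detect_obstacle_areas, extract_obstacle_points,
                       commit_to_semantic_map, timeout_s=30.0):
    store = ObstacleStore()
    for frame in frames:                                  # frame: paired 2D image + 3D point cloud
        now = time.time()
        for area in detect_obstacle_areas(frame):         # step S101: obstacle areas in the 3D map
            store.points_by_area[area.id] = extract_obstacle_points(area)  # step S102
            store.last_seen[area.id] = now
        # step S103: an area not detected for `timeout_s` is committed to the semantic map
        expired = [a for a, t in store.last_seen.items() if now - t > timeout_s]
        for area_id in expired:
            commit_to_semantic_map(area_id, store.points_by_area.pop(area_id))
            store.last_seen.pop(area_id)
    return store
```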
In one embodiment of the present application, determining an obstacle area in a 3D point cloud map comprises: acquiring a 2D image to obtain the obstacle area in the 2D image; and determining the obstacle area in the 3D point cloud map according to the 2D image and the corresponding 3D point cloud map. In the embodiment, the obstacle area in the 3D point cloud map can be determined more accurately according to the acquired 2D image and the acquired 3D point cloud map, so that a plurality of obstacle points of the obstacle area can be acquired more accurately subsequently, and further, the accurate semantic map can be generated subsequently.
Specifically, the 2D image is captured by a camera on the vehicle, and the obstacle and its specific position in the image can be determined by a trained neural network model.
In another embodiment of the present application, acquiring a 2D image to obtain the obstacle area in the 2D image includes: acquiring the 2D image; determining whether a first obstacle region and a second obstacle region are included in the 2D image, the first obstacle region being the obstacle region having an area smaller than a predetermined area, the second obstacle region being the obstacle region having an area greater than or equal to the predetermined area; and acquiring a frame of the first obstacle area when the 2D image includes the first obstacle area. In this embodiment, different information is acquired for obstacle regions of different areas, and then the obstacle regions may be determined according to the different information.
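A minimal sketch of the area-based split described above, assuming a simple Detection record; the field names and the way the region's area is measured are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    frame_uv: Tuple[float, float, float, float]  # (u_min, v_min, u_max, v_max) of the frame
    region_area: float                           # area of the detected obstacle region

def split_obstacle_regions(detections: List[Detection], predetermined_area: float):
    # first obstacle regions: area below the preset threshold (e.g. a single cone)
    first = [d for d in detections if d.region_area < predetermined_area]
    # second obstacle regions: area at or above the threshold (e.g. a fenced-off zone)
    second = [d for d in detections if d.region_area >= predetermined_area]
    return first, second
```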
In another embodiment of the present application, determining the obstacle area in the 3D point cloud map according to the 2D image and the corresponding 3D point cloud map further includes: projecting the frame into the 3D point cloud map, specifically projecting the frame of the obstacle into a 3D coordinate system; determining the point cloud area corresponding to the frame as an obstacle point cloud area, so as to obtain the real position of the obstacle point cloud area; and determining the obstacle point cloud area as the obstacle area in the 3D point cloud map. In the embodiment, the frame is projected into the 3D point cloud map, the point cloud area in the frame can be determined to be the obstacle point cloud area more efficiently, and the high efficiency of generating the semantic map is guaranteed.
Specifically, the midpoint of the bottom edge of the frame of the obstacle area in the 2D image is projected into the 3D coordinate system, a ground point is searched for upward from that point, and the surrounding points that belong to the frame in the 2D image are collected around that ground point; these collected points form the obstacle area in the 3D point cloud map, that is, the obstacle area in the 3D point cloud map is obtained.
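The following sketch approximates this projection step under a flat-ground assumption: the bottom-edge midpoint of the frame is cast into 3D, a nearby ground point is taken as a seed, and surrounding cloud points whose image projection falls inside the frame are collected. The helpers pixel_to_ray and project_to_image, the ground height, and the search radius are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def frame_to_obstacle_area(frame_uv, cloud_xyz, cam_origin, pixel_to_ray,
                           project_to_image, ground_z=0.0, radius=1.5):
    u_min, v_min, u_max, v_max = frame_uv
    # midpoint of the bottom edge of the 2D frame
    bottom_mid = np.array([(u_min + u_max) / 2.0, v_max])
    # cast its viewing ray and intersect it with an assumed flat ground plane
    ray = pixel_to_ray(bottom_mid)                            # unit direction in 3D
    t = (ground_z - cam_origin[2]) / ray[2]
    seed = cam_origin + t * ray                               # approximate ground point
    # collect cloud points near the seed on the ground plane ...
    tree = cKDTree(cloud_xyz[:, :2])
    candidates = cloud_xyz[tree.query_ball_point(seed[:2], r=radius)]
    # ... and keep those whose image projection lies inside the frame
    uv = project_to_image(candidates)
    inside = ((uv[:, 0] >= u_min) & (uv[:, 0] <= u_max) &
              (uv[:, 1] >= v_min) & (uv[:, 1] <= v_max))
    return candidates[inside]                                 # obstacle area in the 3D map
```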
In another embodiment of the present application, determining the obstacle area in the 3D point cloud map according to the 2D image and the corresponding 3D point cloud map includes: projecting the point cloud in the 3D point cloud map into the 2D image when the second obstacle area is included in the 2D image, and determining an obstacle point cloud area in the 3D point cloud map corresponding to the obstacle area in the 2D image; and determining the obstacle point cloud area as the obstacle area in the 3D point cloud map. In the embodiment, the point cloud in the 3D point cloud map is projected into the 2D image, so that the obstacle point cloud area in the 3D point cloud map corresponding to the obstacle area in the 2D image can be determined more accurately and directly, the obstacle point cloud area can be further determined accurately as the obstacle area in the 3D point cloud map, and a plurality of obstacle points in the obstacle area can be acquired more accurately subsequently.
In a specific embodiment of the present application, projecting the point cloud in the 3D point cloud map into the 2D image, and determining an obstacle point cloud area in the 3D point cloud map corresponding to the obstacle area in the 2D image includes: projecting the point cloud in the 3D point cloud map into the 2D image, and determining an obstacle area point, wherein the obstacle area point is a point in the 3D point cloud map corresponding to the obstacle area in the 2D image; and determining the obstacle point cloud area according to the obstacle area points based on the object segmentation model of the 3D point cloud. In the embodiment, the obstacle area point is determined first, and then the obstacle area point is segmented based on the object segmentation model of the 3D point cloud, so that the obstacle point cloud area can be further accurately determined.
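A hedged sketch of the reverse projection for a second obstacle area: every cloud point is projected into the 2D image, points landing inside the detected obstacle mask are kept as obstacle area points, and a point-cloud segmentation model refines them. project_to_image and segment_objects are assumed helpers rather than APIs named in the patent.

```python
import numpy as np

def obstacle_point_cloud_area(cloud_xyz, obstacle_mask_2d, project_to_image, segment_objects):
    uv = np.round(project_to_image(cloud_xyz)).astype(int)       # N x 2 pixel coordinates
    h, w = obstacle_mask_2d.shape
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    hits = np.zeros(len(cloud_xyz), dtype=bool)
    hits[valid] = obstacle_mask_2d[uv[valid, 1], uv[valid, 0]]   # obstacle area points
    area_points = cloud_xyz[hits]
    # refine with a 3D object-segmentation model so that background points
    # accidentally projected into the mask are discarded
    labels = segment_objects(area_points)                        # one label per point, -1 = background
    return area_points[labels >= 0]                              # the obstacle point cloud area
```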
In another specific embodiment of the present application, obtaining and storing position information of a plurality of obstacle points of the obstacle area in the 3D point cloud map includes: extracting first obstacle points which are all points included in the first obstacle area in the 3D point cloud map; and acquiring and storing the position information of the first obstacle point. In this embodiment, since the first obstacle area is an obstacle area with a small floor area and has a small number of corresponding first obstacle points, the position information of all the first obstacle points can be stored, which ensures that the semantic map can be generated more accurately and efficiently according to the position information of the obstacle points.
In another embodiment of the present application, acquiring and storing position information of a plurality of obstacle points of the obstacle area in a 3D point cloud map includes: dividing the second obstacle area into a plurality of obstacle meshes; and determining the center of each obstacle mesh as a second obstacle point, and acquiring and storing the position information of the second obstacle point. In this embodiment, since the second obstacle area is an obstacle area with a large floor area, it has a large number of corresponding second obstacle points, and storing all of them would be relatively inefficient. By dividing the second obstacle area into meshes, each obstacle mesh corresponds to only a small amount of data, and the center point of each obstacle mesh is used as a second obstacle point, so that the acquired second obstacle points remain relatively accurate; the semantic map can then be generated more efficiently and accurately from these second obstacle points.
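A minimal sketch of this mesh-based down-sampling, assuming a fixed ground-plane cell size of 0.5 m (an illustrative value) and taking the geometric centre of each occupied mesh as the second obstacle point.

```python
import numpy as np

def mesh_center_points(area_points_xyz, cell_size=0.5):
    # mesh index of each point's footprint on the ground plane
    ij = np.floor(area_points_xyz[:, :2] / cell_size).astype(np.int64)
    occupied = np.unique(ij, axis=0)
    # geometric centre of every occupied mesh; height taken as the area's mean z
    centers_xy = (occupied + 0.5) * cell_size
    z = np.full(len(occupied), area_points_xyz[:, 2].mean())
    return np.column_stack([centers_xy, z])                      # the second obstacle points
```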
In another embodiment of the present application, after determining an obstacle area in a 3D point cloud map and before acquiring and storing position information of a plurality of obstacle points of the obstacle area in the 3D point cloud map, the method further includes: tracking the obstacle area by adopting a binary (bipartite) matching method to obtain a tracking result; specifically, the obstacle areas for which tracking was completed in the previous frame are matched one-to-one against the obstacle areas detected in the current frame, and the matching result is output as the tracking result; and determining a predetermined obstacle area according to the tracking result, wherein the predetermined obstacle area is an obstacle area satisfying a predetermined condition, and the predetermined condition includes at least one of the following: the tracking time is longer than a predetermined time, the number of points of the obstacle area is larger than a predetermined number of points, and the obstacle area is observed by both the laser radar and the image acquisition device. Acquiring and storing position information of a plurality of obstacle points of the obstacle area in the 3D point cloud map then includes: acquiring and storing the position information of a plurality of obstacle points of the predetermined obstacle area in the 3D point cloud map. In this embodiment, the obstacle area can be tracked more efficiently by the binary matching method, so that a more accurate obstacle area is obtained and the predetermined obstacle area can be accurately determined from the tracking result; by storing the position information of the obstacle points of that area, the semantic map can subsequently be generated more accurately and efficiently.
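The one-to-one matching between the previous frame's tracked areas and the current frame's detections can be sketched with the Hungarian algorithm on centroid distances; the 2.0 m gating threshold is an assumption, not a value from the patent.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_obstacle_areas(prev_centroids, curr_centroids, max_dist=2.0):
    """One-to-one (bipartite) matching of previous-frame tracks to current-frame detections."""
    prev_centroids = np.asarray(prev_centroids, dtype=float)
    curr_centroids = np.asarray(curr_centroids, dtype=float)
    if prev_centroids.size == 0 or curr_centroids.size == 0:
        return []
    # pairwise centroid distances used as the matching cost
    cost = np.linalg.norm(prev_centroids[:, None, :] - curr_centroids[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    # gate implausible pairs; unmatched current areas would start new tracks
    return [(int(r), int(c)) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
```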
According to the embodiment of the application, another semantic map construction method is provided. Fig. 2 is a flowchart of a semantic map construction method according to an embodiment of the present application. As shown in fig. 2, the method comprises the steps of:
step S201, an intelligent driving vehicle determines an obstacle area in a 3D point cloud map, wherein the obstacle area is an area including obstacles;
step S202, the intelligent driving vehicle acquires and stores position information of a plurality of obstacle points of the obstacle area in a 3D point cloud map, wherein the obstacle points are points in the obstacle area;
step S203, when the obstacle area is not detected within a predetermined time period, the intelligent driving vehicle transmits the position information of the plurality of obstacle points to a server, and the server generates the semantic map according to the position information of the plurality of obstacle points.
In the method, the intelligent driving vehicle first determines an obstacle area in a 3D point cloud map; the intelligent driving vehicle then acquires position information of a plurality of obstacle points in the 3D map according to the determined obstacle area, the obstacle points being points in the obstacle area, and stores the acquired position information; finally, in the case that the obstacle area is not detected within a predetermined time period, the intelligent driving vehicle sends the position information of the plurality of obstacle points to a server, and the server generates a semantic map according to the received position information of the plurality of obstacle points. Because the intelligent driving vehicle sends the acquired position information of the plurality of obstacle points of the obstacle area to the server when the obstacle area is no longer detected, the server can generate the semantic map from that position information, so that a construction area can be accurately identified. The semantic map can also distinguish different objects by displaying different obstacles in different colors, giving it a high semantic level. This solves the prior-art problem of lacking a map with a high semantic level that can accurately identify a construction area, allows the driving strategy of the intelligent driving vehicle to be subsequently adjusted accurately and efficiently according to the semantic map, and thus ensures the driving safety of the intelligent driving vehicle.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different than presented herein.
In practical applications, the obstacles in the obstacle area may be traffic cones, individual barricades, and/or water-filled barriers (water horses), etc.
In one embodiment of the present application, an intelligent driving vehicle determines an obstacle area in a 3D point cloud map, comprising: the intelligent driving vehicle acquires a 2D image to obtain an obstacle area in the 2D image; and the intelligent driving vehicle determines the obstacle area in the 3D point cloud map according to the 2D image and the corresponding 3D point cloud map. In the embodiment, the obstacle area in the 3D point cloud map can be determined more accurately according to the acquired 2D image and the acquired 3D point cloud map, so that a plurality of obstacle points of the obstacle area can be acquired more accurately subsequently, and further, the accurate semantic map can be generated subsequently.
In another embodiment of the present application, the obtaining a 2D image by the intelligent driving vehicle to obtain an obstacle area in the 2D image includes: the intelligent driving vehicle acquires a 2D image; the smart driving vehicle determines whether the 2D image includes a first obstacle region and a second obstacle region, the first obstacle region being the obstacle region having an area smaller than a predetermined area, the second obstacle region being the obstacle region having an area greater than or equal to the predetermined area; the smart driving vehicle acquires a frame of the first obstacle area when the 2D image includes the first obstacle area. In this embodiment, different information is acquired for obstacle regions of different areas, and then the obstacle regions may be determined according to the different information.
In another embodiment of the present application, determining the obstacle area in the 3D point cloud map according to the 2D image and the corresponding 3D point cloud map further includes: projecting the frame to the 3D point cloud map; determining a point cloud area corresponding to the frame as an obstacle point cloud area; and determining the obstacle point cloud area as the obstacle area in the 3D point cloud map. In the embodiment, the frame is projected into the 3D point cloud map, the point cloud area in the frame can be determined to be the obstacle point cloud area more efficiently, and the high efficiency of generating the semantic map is guaranteed.
In another embodiment of the application, the determining, by the intelligent driving vehicle, the obstacle area in the 3D point cloud map according to the 2D image and the corresponding 3D point cloud map includes: projecting the point cloud in the 3D point cloud map into the 2D image when the second obstacle area is included in the 2D image, and determining an obstacle point cloud area in the 3D point cloud map corresponding to the obstacle area in the 2D image; and determining the obstacle point cloud area as the obstacle area in the 3D point cloud map. In the embodiment, the point cloud in the 3D point cloud map is projected into the 2D image, so that the obstacle point cloud area in the 3D point cloud map corresponding to the obstacle area in the 2D image can be determined more accurately and directly, the obstacle point cloud area can be further determined accurately as the obstacle area in the 3D point cloud map, and a plurality of obstacle points in the obstacle area can be acquired more accurately subsequently.
In a specific embodiment of the present application, projecting the point cloud in the 3D point cloud map into the 2D image, and determining an obstacle point cloud area in the 3D point cloud map corresponding to the obstacle area in the 2D image includes: projecting the point cloud in the 3D point cloud map into the 2D image, and determining an obstacle area point, wherein the obstacle area point is a point in the 3D point cloud map corresponding to the obstacle area in the 2D image; and determining the obstacle point cloud area according to the obstacle area points based on the object segmentation model of the 3D point cloud. In the embodiment, the obstacle area point is determined first, and then the obstacle area point is segmented based on the object segmentation model of the 3D point cloud, so that the obstacle point cloud area can be further accurately determined.
In another specific embodiment of the present application, the acquiring and storing position information of a plurality of obstacle points in the obstacle area in the 3D point cloud map by the intelligent driving vehicle includes: extracting first obstacle points which are all points included in the first obstacle area in the 3D point cloud map; and acquiring and storing the position information of the first obstacle point. In this embodiment, since the first obstacle area is an obstacle area with a small floor area and has a small number of corresponding first obstacle points, the position information of all the first obstacle points can be stored, which ensures that the semantic map can be generated more accurately and efficiently according to the position information of the obstacle points.
In another embodiment of the present application, the acquiring and storing position information of a plurality of obstacle points of the obstacle area in the 3D point cloud map by the intelligent driving vehicle includes: dividing the second obstacle area into a plurality of obstacle meshes; and determining the center of each obstacle mesh as a second obstacle point, and acquiring and storing the position information of the second obstacle point. In this embodiment, since the second obstacle area is an obstacle area with a large floor area, it has a large number of corresponding second obstacle points, and storing all of them would be relatively inefficient. By dividing the second obstacle area into meshes, each obstacle mesh corresponds to only a small amount of data, and the center point of each obstacle mesh is used as a second obstacle point, so that the acquired second obstacle points remain relatively accurate; the semantic map can then be generated more efficiently and accurately from these second obstacle points.
In another embodiment of the present application, after the smart driving vehicle determines an obstacle area in the 3D point cloud map and before it acquires and stores position information of a plurality of obstacle points of the obstacle area in the 3D point cloud map, the method further includes: tracking the obstacle area by adopting a binary matching method to obtain a tracking result; specifically, the obstacle areas for which tracking was completed in the previous frame are matched one-to-one against the obstacle areas detected in the current frame, and the matching result is output as the tracking result; and determining a predetermined obstacle area according to the tracking result, wherein the predetermined obstacle area is an obstacle area satisfying a predetermined condition, and the predetermined condition includes at least one of the following: the tracking time is longer than a predetermined time, the number of points of the obstacle area is larger than a predetermined number of points, and the obstacle area is observed by both the laser radar and the image acquisition device. The acquiring and storing, by the intelligent driving vehicle, of position information of a plurality of obstacle points of the obstacle area in the 3D point cloud map then includes: acquiring and storing the position information of a plurality of obstacle points of the predetermined obstacle area in the 3D point cloud map. In this embodiment, the obstacle area can be tracked more efficiently by the binary matching method, so that a more accurate obstacle area is obtained and the predetermined obstacle area can be accurately determined from the tracking result; by storing the position information of the obstacle points of that area, the semantic map can subsequently be generated more accurately and efficiently.
In another embodiment of the present application, the sending, by the intelligent driving vehicle, of the position information of the plurality of obstacle points to a server includes: deleting, from the plurality of obstacle points, the obstacle points that do not appear in a predetermined number of consecutive frames of the map, the remaining points being reserved obstacle points; clustering the reserved obstacle points to obtain a plurality of point clusters; generating a concave hull for a point cluster if the reserved obstacle points in that point cluster are first obstacle points; generating a convex hull for a point cluster if the reserved obstacle points in that point cluster are second obstacle points; and sending the convex hulls and the concave hulls to the server. In this embodiment, the generated concave hulls and convex hulls are sent to the server, and the server, after receiving them, can generate the semantic map more accurately and efficiently from the concave hulls and convex hulls.
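A rough sketch of this upload path, assuming DBSCAN clustering and SciPy's convex hull; the frame threshold and clustering parameters are illustrative, and the concave hull (e.g. an alpha shape) is left as a stub since SciPy provides no built-in routine for it.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from scipy.spatial import ConvexHull

def concave_hull(points_2d):
    # placeholder: a real implementation would compute an alpha shape here
    return points_2d

def prepare_upload(points_xyz, seen_counts, is_first_obstacle,
                   min_consecutive_frames=3, eps=1.0, min_samples=3):
    keep = seen_counts >= min_consecutive_frames               # reserved obstacle points
    pts, first = points_xyz[keep], is_first_obstacle[keep]
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts[:, :2])
    hulls = []
    for lbl in set(labels) - {-1}:                             # -1 marks DBSCAN noise
        cluster, cluster_first = pts[labels == lbl], first[labels == lbl]
        if cluster_first.all():                                # first obstacle points -> concave hull
            hulls.append(("concave", concave_hull(cluster[:, :2])))
        elif len(cluster) >= 3:                                # second obstacle points -> convex hull
            hull = ConvexHull(cluster[:, :2])
            hulls.append(("convex", cluster[hull.vertices, :2]))
    return hulls                                               # serialised and sent to the server
```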
The embodiment of the present application further provides a semantic map construction device, and it should be noted that the semantic map construction device of the embodiment of the present application may be used to execute the semantic map construction method provided in the embodiment of the present application. The following describes a semantic map construction apparatus provided in an embodiment of the present application.
Fig. 3 is a schematic diagram of a semantic map construction apparatus according to an embodiment of the present application. As shown in fig. 3, the apparatus includes:
a first determining unit 10, configured to determine an obstacle area in the 3D point cloud map, where the obstacle area is an area including an obstacle;
an acquiring unit 20 configured to acquire and store position information of a plurality of obstacle points of the obstacle area in a 3D point cloud map, the obstacle points being points in the obstacle area;
a generating unit 30, configured to generate the semantic map according to position information of the plurality of obstacle points when the obstacle area is not detected within a predetermined time period.
In the above device, the first determining unit determines an obstacle area in the 3D point cloud map; the acquiring unit acquires position information of a plurality of obstacle points in the 3D map according to the determined obstacle area, the obstacle points being points in the obstacle area, and stores the acquired position information; and the generating unit generates the semantic map according to the position information of the plurality of obstacle points when the obstacle area is not detected within a predetermined time period. Because the position information of the plurality of obstacle points of the obstacle area has been acquired, the device can generate the semantic map from that position information when the obstacle area is no longer detected, so that a construction area can be accurately identified. The semantic map can also distinguish different objects by displaying different obstacles in different colors, giving it a high semantic level. This solves the prior-art problem of lacking a map with a high semantic level that can accurately identify a construction area, allows the driving strategy of the vehicle to be subsequently adjusted accurately and efficiently according to the semantic map, and thus ensures the driving safety of the vehicle.
In practical applications, the obstacles in the obstacle area may be traffic cones, individual barricades, and/or water-filled barriers (water horses), etc.
In an embodiment of the present application, the first determining unit includes a first obtaining module and a first determining module, where the first obtaining module is configured to obtain a 2D image to obtain the obstacle area in the 2D image; the first determining module is used for determining the obstacle area in the 3D point cloud map according to the 2D image and the corresponding 3D point cloud map. In the embodiment, the obstacle area in the 3D point cloud map can be determined more accurately according to the acquired 2D image and the acquired 3D point cloud map, so that a plurality of obstacle points of the obstacle area can be acquired more accurately subsequently, and further, the accurate semantic map can be generated subsequently.
In another embodiment of the present application, the first obtaining module includes a first obtaining sub-module, a first determining sub-module, and a second obtaining sub-module, and the first obtaining sub-module is configured to obtain the 2D image; a first determining submodule configured to determine whether the 2D image includes a first obstacle region and a second obstacle region, the first obstacle region being the obstacle region having an area smaller than a predetermined area, the second obstacle region being the obstacle region having an area greater than or equal to the predetermined area; the second obtaining sub-module is configured to obtain a frame of the first obstacle area when the 2D image includes the first obstacle area. In this embodiment, different information is acquired for obstacle regions of different areas, and then the obstacle regions may be determined according to the different information.
In another embodiment of the present application, the first determining module includes a projection submodule, a second determining submodule, and a third determining submodule, and the projection submodule is configured to project the frame into the 3D point cloud map; the second determining submodule is used for determining the point cloud area corresponding to the frame as an obstacle point cloud area; and the third determining submodule is used for determining the obstacle point cloud area as the obstacle area in the 3D point cloud map. In the embodiment, the frame is projected into the 3D point cloud map, the point cloud area in the frame can be determined to be the obstacle point cloud area more efficiently, and the high efficiency of generating the semantic map is guaranteed.
In another embodiment of the application, the first determining module includes a fourth determining submodule and a fifth determining submodule, and the fourth determining submodule is configured to, when the 2D image includes the second obstacle area, project a point cloud in the 3D point cloud map into the 2D image, and determine an obstacle point cloud area in the 3D point cloud map corresponding to the obstacle area in the 2D image; and the fifth determining submodule is used for determining the obstacle point cloud area as the obstacle area in the 3D point cloud map. In the embodiment, the point cloud in the 3D point cloud map is projected into the 2D image, so that the obstacle point cloud area in the 3D point cloud map corresponding to the obstacle area in the 2D image can be determined more accurately and directly, the obstacle point cloud area can be further determined accurately as the obstacle area in the 3D point cloud map, and a plurality of obstacle points in the obstacle area can be acquired more accurately subsequently.
In a specific embodiment of the present application, the fourth determining submodule is further configured to project the point cloud in the 3D point cloud map into the 2D image and determine obstacle area points, where an obstacle area point is a point in the 3D point cloud map corresponding to the obstacle area in the 2D image; the fourth determining submodule is further configured to determine the obstacle point cloud area according to the obstacle area points based on an object segmentation model of the 3D point cloud. In this embodiment, the obstacle area points are determined first and are then segmented based on the object segmentation model of the 3D point cloud, so that the obstacle point cloud area can be determined accurately.
In another specific embodiment of the present application, the obtaining unit includes an extracting module and a second obtaining module, where the extracting module is configured to extract a first obstacle point, where the first obstacle point is all points included in the first obstacle area in the 3D point cloud map; the second acquisition module is used for acquiring and storing the position information of the first obstacle point. In this embodiment, since the first obstacle area is an obstacle area with a small floor area and has a small number of corresponding first obstacle points, the position information of all the first obstacle points can be stored, which ensures that the semantic map can be generated more accurately and efficiently according to the position information of the obstacle points.
In another embodiment of the present application, the obtaining unit includes a dividing module and a second determining module, the dividing module being configured to divide the second obstacle area into a plurality of obstacle meshes, and the second determining module being configured to determine the center of each obstacle mesh as a second obstacle point and to acquire and store the position information of the second obstacle point. In this embodiment, since the second obstacle area is an obstacle area with a large floor area, it has a large number of corresponding second obstacle points, and storing all of them would be relatively inefficient. By dividing the second obstacle area into meshes, each obstacle mesh corresponds to only a small amount of data, and the center point of each obstacle mesh is used as a second obstacle point, so that the acquired second obstacle points remain relatively accurate; the semantic map can then be generated more efficiently and accurately from these second obstacle points.
In another embodiment of the present application, the apparatus further includes a tracking unit and a second determining unit, where the tracking unit is configured to, after determining an obstacle area in the 3D point cloud map, track the obstacle area by using a binary matching method before obtaining and storing position information of a plurality of obstacle points of the obstacle area in the 3D point cloud map, to obtain a tracking result, specifically, perform one-to-one binary matching on an obstacle area where tracking of a previous frame is completed and a detected obstacle area of a current frame, and output a matching result, to obtain the tracking result; the second determining unit is configured to determine a predetermined obstacle area according to the tracking result, where the predetermined obstacle area is the obstacle area satisfying a predetermined condition, and the predetermined condition includes at least one of: the tracking time is longer than the preset time, the number of points of the obstacle area is larger than the preset number of points, and the obstacle area is observed by the laser radar and the image acquisition equipment. In the embodiment, the obstacle area can be tracked more efficiently by a binary matching method, so that a more accurate obstacle area can be obtained, a predetermined obstacle area can be further accurately determined according to a tracking result, and the semantic map can be subsequently generated more accurately and efficiently according to the position information of a plurality of obstacle points of the obstacle area by storing the position information of the obstacle points.
The present application also provides a vehicle comprising one or more processors, memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing any of the above-described methods.
In the method, according to the acquired position information of a plurality of obstacle points of the obstacle area, when the obstacle area is not detected, the semantic map can be generated according to the position information of the plurality of obstacle points, the construction area can be accurately identified, different objects can be distinguished by the semantic map according to different displayed colors of different obstacles, the semantic level is higher, the problem that the map with higher semantic level capable of accurately identifying the construction area is lacked in the prior art is solved, the driving strategy of the vehicle can be accurately and efficiently adjusted according to the semantic map subsequently, and the driving safety of the vehicle is ensured.
The semantic map construction device comprises a processor and a memory, wherein the first determination unit, the acquisition unit, the generation unit and the like are stored in the memory as program units, and the processor executes the program units stored in the memory to realize corresponding functions.
The processor includes a kernel, and the kernel retrieves the corresponding program unit from the memory. One or more kernels may be provided, and by adjusting kernel parameters a map with a higher semantic level that accurately identifies the construction area can be obtained.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip.
An embodiment of the present invention provides a computer-readable storage medium, on which a program is stored, and when the program is executed by a processor, the program implements the above semantic map construction method.
The embodiment of the invention provides a processor, which is used for running a program, wherein the semantic map construction method is executed when the program runs.
The embodiment of the invention provides equipment, which comprises a processor, a memory and a program which is stored on the memory and can run on the processor, wherein when the processor executes the program, at least the following steps are realized:
step S101, determining an obstacle area in a 3D point cloud map, wherein the obstacle area is an area including obstacles;
step S102, obtaining and storing position information of a plurality of obstacle points of the obstacle area in a 3D point cloud map, wherein the obstacle points are points in the obstacle area;
step S103, when the obstacle area is not detected within a predetermined time period, generating the semantic map based on the position information of the plurality of obstacle points; or
Step S201, a vehicle determines an obstacle area in a 3D point cloud map, wherein the obstacle area is an area including an obstacle;
step S202, the vehicle acquires and stores position information of a plurality of obstacle points of the obstacle area in a 3D point cloud map, wherein the obstacle points are points in the obstacle area;
step S203, if the obstacle area is not detected within a predetermined time period, the vehicle transmits the position information of the plurality of obstacle points to a server, and the server generates the semantic map based on the position information of the plurality of obstacle points.
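The following Python sketch, given purely for illustration, strings the three vehicle-side steps together; the detector, point extractor and upload function are left as caller-supplied callables because the application does not prescribe them, and the 10-second timeout is an assumed value for the predetermined time period.

```python
# Purely illustrative sketch of steps S201-S203 on the vehicle side.
# detect_areas / extract_points / upload are caller-supplied stand-ins for the
# perception, storage and transmission stages; the timeout value is assumed.
class ObstacleRecorder:
    def __init__(self, detect_areas, extract_points, upload, timeout_s=10.0):
        self.detect_areas = detect_areas      # frame -> list of obstacle areas
        self.extract_points = extract_points  # obstacle area -> list of (x, y, z)
        self.upload = upload                  # obstacle points -> send to server
        self.timeout_s = timeout_s            # assumed "predetermined time period"
        self.stored_points = []               # accumulated obstacle points
        self.last_seen = None                 # time of the last detection

    def on_frame(self, frame, now):
        areas = self.detect_areas(frame)                        # step S201
        if areas:
            for area in areas:                                  # step S202
                self.stored_points.extend(self.extract_points(area))
            self.last_seen = now
        elif (self.stored_points and self.last_seen is not None
              and now - self.last_seen > self.timeout_s):
            self.upload(self.stored_points)                     # step S203
            self.stored_points, self.last_seen = [], None
```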
The device herein may be a server, a PC, a tablet computer (PAD), a mobile phone, or the like.
The present application further provides a computer program product which, when executed on a data processing device, is adapted to execute a program initializing at least the following method steps:
step S101, determining an obstacle area in a 3D point cloud map, wherein the obstacle area is an area including obstacles;
step S102, obtaining and storing position information of a plurality of obstacle points of the obstacle area in a 3D point cloud map, wherein the obstacle points are points in the obstacle area;
step S103, in a case that the obstacle area is not detected within a predetermined time period, generating the semantic map based on the position information of the plurality of obstacle points; or
step S201, a vehicle determines an obstacle area in a 3D point cloud map, wherein the obstacle area is an area including an obstacle;
step S202, the vehicle acquires and stores position information of a plurality of obstacle points of the obstacle area in a 3D point cloud map, wherein the obstacle points are points in the obstacle area;
step S203, if the obstacle area is not detected within a predetermined time period, the vehicle transmits the position information of the plurality of obstacle points to a server, and the server generates the semantic map based on the position information of the plurality of obstacle points.
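Again purely for illustration, the following sketch shows how a server could turn the received obstacle points into a colored semantic layer, one color per obstacle class; the class names, the color table and the 0.2 m cell size are assumptions made only for this example.

```python
# Purely illustrative sketch of the server side: rasterise the received
# obstacle points into a coloured 2D semantic layer, one colour per class.
# Class names, colours and the cell size are assumed values.
import numpy as np

CLASS_COLORS = {"cone": (255, 140, 0), "fence": (255, 0, 0), "barrier": (255, 255, 0)}
CELL_SIZE = 0.2   # metres per grid cell


def build_semantic_layer(points_xy, labels, origin_xy, shape_hw):
    """points_xy: (N, 2) map-frame positions; labels: length-N class names."""
    layer = np.zeros((*shape_hw, 3), dtype=np.uint8)   # black = free space
    for (x, y), label in zip(points_xy, labels):
        col = int((x - origin_xy[0]) / CELL_SIZE)
        row = int((y - origin_xy[1]) / CELL_SIZE)
        if 0 <= row < shape_hw[0] and 0 <= col < shape_hw[1]:
            layer[row, col] = CLASS_COLORS.get(label, (128, 128, 128))
    return layer
```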
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable storage medium if it is implemented in the form of a software functional unit and sold or used as a separate product. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the above methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
From the above description, it can be seen that the above-described embodiments of the present application achieve the following technical effects:
1) In the semantic map construction method of the present application, an obstacle area is first determined in a 3D point cloud map; position information of a plurality of obstacle points of the determined obstacle area in the 3D point cloud map is then acquired, the obstacle points being points in the obstacle area, and the acquired position information is stored; finally, in a case that the obstacle area is not detected within a predetermined time period, the semantic map is generated according to the position information of the plurality of obstacle points. In this way, once the obstacle area is no longer detected, the semantic map can be generated from the stored position information of the obstacle points, so that the construction area is accurately identified. Because the semantic map displays different colors for different obstacles, different objects can be distinguished and the map has a higher semantic level, which solves the problem in the prior art that a map with a higher semantic level capable of accurately identifying a construction area is lacking, so that the driving strategy of the vehicle can subsequently be adjusted accurately and efficiently according to the semantic map, ensuring the driving safety of the vehicle.
2) In the other semantic map construction method of the present application, a vehicle first determines an obstacle area in a 3D point cloud map; the vehicle then acquires position information of a plurality of obstacle points of the determined obstacle area in the 3D point cloud map, the obstacle points being points in the obstacle area, and stores the acquired position information; finally, in a case that the obstacle area is not detected within a predetermined time period, the vehicle sends the position information of the plurality of obstacle points to a server, and the server generates the semantic map according to the received position information. In this way, once the obstacle area is no longer detected, the server can generate the semantic map from the position information of the plurality of obstacle points, so that the construction area is accurately identified. Because the semantic map displays different colors for different obstacles, different objects can be distinguished and the map has a higher semantic level, which solves the problem in the prior art that a map with a higher semantic level capable of accurately identifying a construction area is lacking, so that the driving strategy of the vehicle can subsequently be adjusted accurately and efficiently according to the semantic map, ensuring the driving safety of the vehicle.
3) In the semantic map construction apparatus of the present application, the first determining unit determines an obstacle area in a 3D point cloud map; the acquiring unit acquires position information of a plurality of obstacle points of the determined obstacle area in the 3D point cloud map, the obstacle points being points in the obstacle area, and stores the acquired position information; and the generating unit generates the semantic map according to the position information of the plurality of obstacle points in a case that the obstacle area is not detected within a predetermined time period. With this apparatus, the construction area can be accurately identified, and the resulting semantic map distinguishes different objects by displaying different colors for different obstacles and therefore has a higher semantic level, which solves the problem in the prior art that a map with a higher semantic level capable of accurately identifying a construction area is lacking, so that the driving strategy of the vehicle can subsequently be adjusted accurately and efficiently according to the semantic map, ensuring the driving safety of the vehicle.
4) According to the method, once the obstacle area is no longer detected, the semantic map can be generated from the acquired position information of the plurality of obstacle points in the obstacle area, so that the construction area is accurately identified; the semantic map displays different colors for different obstacles, so different objects can be distinguished and the map has a higher semantic level, which solves the problem in the prior art that a map with a higher semantic level capable of accurately identifying a construction area is lacking, so that the driving strategy of the vehicle can subsequently be adjusted accurately and efficiently according to the semantic map, ensuring the driving safety of the vehicle.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (19)

1. A semantic map construction method is characterized by comprising the following steps:
determining an obstacle area in a 3D point cloud map, wherein the obstacle area is an area comprising obstacles;
acquiring and storing position information of a plurality of obstacle points of the obstacle area in a 3D point cloud map, wherein the obstacle points are points in the obstacle area;
and in the case that the obstacle area is not detected within a preset time period, generating the semantic map according to the position information of the obstacle points.
2. The method of claim 1, wherein determining an obstacle area in a 3D point cloud map comprises:
acquiring a 2D image to obtain the obstacle area in the 2D image;
and determining the obstacle area in the 3D point cloud map according to the 2D image and the corresponding 3D point cloud map.
3. The method of claim 2, wherein acquiring a 2D image, obtaining the obstacle region in the 2D image, comprises:
acquiring the 2D image;
determining whether a first obstacle region and a second obstacle region are included in the 2D image, wherein the first obstacle region is the obstacle region with the area smaller than a preset area, and the second obstacle region is the obstacle region with the area larger than or equal to the preset area;
acquiring a bounding box of the first obstacle area in a case that the 2D image includes the first obstacle area.
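The split of claim 3 amounts to a simple area test on the detected 2D region; the snippet below is a purely illustrative sketch (not part of the claims), and the pixel threshold is an assumed value.

```python
# Purely illustrative: classify a detected 2D region as a "first" (small) or
# "second" (large) obstacle region by its area.  The threshold is assumed.
PRESET_AREA_PX = 32 * 32   # assumed preset area, in pixels


def classify_obstacle_region(box_xyxy):
    """box_xyxy = (x_min, y_min, x_max, y_max) of the detected region."""
    area = (box_xyxy[2] - box_xyxy[0]) * (box_xyxy[3] - box_xyxy[1])
    return "first" if area < PRESET_AREA_PX else "second"
```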
4. The method of claim 3, wherein determining the obstacle region in the 3D point cloud map from the 2D image and the corresponding 3D point cloud map further comprises:
projecting the bounding box into the 3D point cloud map;
determining a point cloud area corresponding to the bounding box as an obstacle point cloud area;
determining the obstacle point cloud area as the obstacle area in the 3D point cloud map.
5. The method of claim 3, wherein determining the obstacle region in the 3D point cloud map from the 2D image and the corresponding 3D point cloud map comprises:
under the condition that the second obstacle area is included in the 2D image, projecting the point cloud in the 3D point cloud map into the 2D image, and determining an obstacle point cloud area in the 3D point cloud map corresponding to the obstacle area in the 2D image;
determining the obstacle point cloud area as the obstacle area in the 3D point cloud map.
6. The method of claim 5, wherein projecting the point cloud in the 3D point cloud map into the 2D image, determining an obstacle point cloud area in the 3D point cloud map that corresponds to the obstacle area in the 2D image, comprises:
projecting the point cloud in the 3D point cloud map into the 2D image, and determining obstacle area points, wherein the obstacle area points are points in the 3D point cloud map corresponding to the obstacle area in the 2D image;
and determining the obstacle point cloud area according to the obstacle area points based on an object segmentation model of the 3D point cloud.
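Purely as an illustration of claims 5 and 6 (not part of the claims), the sketch below projects the 3D point cloud into the 2D image with an assumed pinhole camera model and keeps the points whose projections fall inside the detected obstacle region; the intrinsics `K`, the extrinsics `(R, t)` and the boolean `region_mask` are assumptions, and the subsequent object segmentation model based on the 3D point cloud is not shown.

```python
# Purely illustrative sketch: project lidar points into the 2D image and keep
# those falling inside the detected obstacle region.  A pinhole model with
# intrinsics K and extrinsics (R, t) mapping the map/lidar frame into the
# camera frame is assumed; the downstream 3D segmentation model is omitted.
import numpy as np


def obstacle_region_points(points_3d, K, R, t, region_mask):
    """points_3d: (N, 3) map/lidar-frame points; region_mask: HxW boolean image."""
    cam = points_3d @ R.T + t                 # transform into the camera frame
    in_front = cam[:, 2] > 0.1                # keep points in front of the camera
    cam = cam[in_front]

    uvw = cam @ K.T                           # pinhole projection
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)

    h, w = region_mask.shape
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    hits = valid.copy()
    hits[valid] = region_mask[v[valid], u[valid]]   # inside the obstacle region
    return points_3d[in_front][hits]          # candidate "obstacle area points"
```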
7. The method of claim 3, wherein obtaining and storing position information of a plurality of obstacle points of the obstacle area in a 3D point cloud map comprises:
extracting first obstacle points, wherein the first obstacle points are all points contained in the first obstacle area in the 3D point cloud map;
and acquiring and storing the position information of the first obstacle points.
8. The method of claim 3, wherein obtaining and storing position information of a plurality of obstacle points of the obstacle area in a 3D point cloud map comprises:
dividing the second obstacle area into a plurality of obstacle grids;
and determining the center of each obstacle grid as a second obstacle point, and acquiring and storing the position information of the second obstacle points.
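Purely as an illustration of claim 8 (not part of the claims), the sketch below divides a large obstacle area into grid cells and keeps one representative second obstacle point per occupied cell; the 0.5 m cell size is assumed, and the mean of the points in a cell is used here, although the geometric cell center would serve equally.

```python
# Purely illustrative sketch: grid a large obstacle area and keep one
# representative "second obstacle point" per occupied cell.  The cell size and
# the use of the per-cell mean are assumptions.
import numpy as np


def grid_centers(area_points, cell_size=0.5):
    """area_points: (N, 3) points of the second obstacle area (map frame)."""
    cells = np.floor(area_points[:, :2] / cell_size).astype(int)
    buckets = {}
    for cell, pt in zip(map(tuple, cells), area_points):
        buckets.setdefault(cell, []).append(pt)
    # one representative point per occupied grid cell
    return np.array([np.mean(pts, axis=0) for pts in buckets.values()])
```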
9. The method according to any one of claims 1 to 8,
after determining an obstacle area in a 3D point cloud map, before acquiring and storing position information of a plurality of obstacle points of the obstacle area in the 3D point cloud map, the method further includes:
tracking the obstacle area by adopting a binary matching method to obtain a tracking result;
according to the tracking result, determining a predetermined obstacle area, wherein the predetermined obstacle area is the obstacle area meeting a predetermined condition, and the predetermined condition comprises at least one of the following conditions: the tracking time is longer than a preset time, the number of points of the obstacle area is greater than a preset number of points, and the obstacle area is observed by both the laser radar and the image acquisition equipment,
wherein acquiring and storing position information of a plurality of obstacle points of the obstacle area in the 3D point cloud map comprises:
acquiring and storing the position information of a plurality of obstacle points of the predetermined obstacle area in the 3D point cloud map.
10. The method according to any one of claims 1 to 8, wherein after generating the semantic map from position information of a plurality of the obstacle points in a case where the obstacle area is not detected within a predetermined period of time, the method further comprises:
and updating the driving route according to the semantic map.
11. A semantic map construction method is characterized by comprising the following steps:
the method comprises the steps that an intelligent driving vehicle determines an obstacle area in a 3D point cloud map, wherein the obstacle area is an area including obstacles;
the intelligent driving vehicle acquires and stores position information of a plurality of obstacle points of the obstacle area in a 3D point cloud map, wherein the obstacle points are points in the obstacle area;
and under the condition that the obstacle area is not detected within a preset time period, the intelligent driving vehicle sends the position information of the plurality of obstacle points to a server, and the server generates the semantic map according to the position information of the plurality of obstacle points.
12. The method of claim 11, wherein the intelligent driving vehicle determines an obstacle area in the 3D point cloud map, comprising:
the intelligent driving vehicle acquires a 2D image to obtain an obstacle area in the 2D image;
and the intelligent driving vehicle determines the obstacle area in the 3D point cloud map according to the 2D image and the corresponding 3D point cloud map.
13. The method of claim 12, wherein the intelligent driving vehicle acquires a 2D image to obtain the obstacle area in the 2D image, comprising:
the intelligent driving vehicle acquires a 2D image;
the intelligent driving vehicle determines whether a first obstacle region and a second obstacle region are included in the 2D image, wherein the first obstacle region is the obstacle region with the area smaller than a preset area, and the second obstacle region is the obstacle region with the area larger than or equal to the preset area;
the intelligent driving vehicle acquires a bounding box of the first obstacle area in a case that the 2D image includes the first obstacle area.
14. The method of claim 13, wherein the intelligent driving vehicle obtains and stores position information of a plurality of obstacle points of the obstacle area in a 3D point cloud map, comprising:
extracting first obstacle points, wherein the first obstacle points are all points contained in the first obstacle area in the 3D point cloud map;
and acquiring and storing the position information of the first obstacle points.
15. The method of claim 14, wherein the intelligent driving vehicle obtains and stores position information of a plurality of obstacle points of the obstacle area in a 3D point cloud map, comprising:
dividing the second obstacle area into a plurality of obstacle grids;
and determining the center of each obstacle grid as a second obstacle point, and acquiring and storing the position information of the second obstacle points.
16. The method of claim 15, wherein the intelligent driving vehicle sends the position information of the plurality of obstacle points to a server, comprising:
deleting, from the plurality of obstacle points, the obstacle points that do not appear in a predetermined number of consecutive frames of the map, to obtain reserved obstacle points;
clustering the reserved obstacle points to obtain a plurality of point clusters;
if the reserved obstacle point in the point cluster is the first obstacle point, generating a concave hull for the corresponding point cluster;
if the reserved obstacle point in the point cluster is the second obstacle point, generating a convex hull for the corresponding point cluster;
and sending the convex hull and the concave hull to the server.
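Purely as an illustration of claim 16 (not part of the claims), the sketch below clusters the reserved obstacle points and summarizes each cluster by a hull before upload; DBSCAN and SciPy's ConvexHull are stand-ins chosen for the example, the consecutive-frame filtering is assumed to have been applied already, and the concave hull for first obstacle points would require an alpha-shape-style algorithm that is only indicated in a comment.

```python
# Purely illustrative sketch: cluster reserved obstacle points and summarise
# each cluster by a convex hull before sending it to the server.  DBSCAN and
# ConvexHull are example choices; a concave hull (for first obstacle points)
# would need an alpha-shape style algorithm and is not implemented here.
import numpy as np
from scipy.spatial import ConvexHull
from sklearn.cluster import DBSCAN


def cluster_hulls(points_xy, eps=1.0, min_samples=3):
    """points_xy: (N, 2) reserved obstacle points projected to the ground plane."""
    points_xy = np.asarray(points_xy, dtype=float)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points_xy)
    hulls = []
    for label in set(labels) - {-1}:            # -1 marks DBSCAN noise
        cluster = points_xy[labels == label]
        if len(cluster) < 3:
            continue
        try:
            hull = ConvexHull(cluster)
        except Exception:                       # degenerate (e.g. collinear) cluster
            continue
        hulls.append(cluster[hull.vertices])    # ordered hull vertices
    return hulls                                # payload sent to the server
```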
17. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a stored program, wherein the program performs the method of any one of claims 1 to 16.
18. A processor, characterized in that the processor is configured to run a program, wherein the program when running performs the method of any of claims 1 to 16.
19. An intelligent driving vehicle, comprising: one or more processors, memory, and one or more programs stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing the method of any of claims 1-16.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110044266.6A CN112785704B (en) 2021-01-13 2021-01-13 Semantic map construction method, computer readable storage medium and processor

Publications (2)

Publication Number Publication Date
CN112785704A (en) 2021-05-11
CN112785704B (en) 2024-07-26

Family

ID=75755781

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110044266.6A Active CN112785704B (en) 2021-01-13 2021-01-13 Semantic map construction method, computer readable storage medium and processor

Country Status (1)

Country Link
CN (1) CN112785704B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8996228B1 (en) * 2012-09-05 2015-03-31 Google Inc. Construction zone object detection using light detection and ranging
CN111380545A (en) * 2015-02-10 2020-07-07 御眼视觉技术有限公司 Method, server, autonomous vehicle, and medium for autonomous vehicle navigation
CN110832474A (en) * 2016-12-30 2020-02-21 迪普迈普有限公司 High definition map update
CN106951847A (en) * 2017-03-13 2017-07-14 百度在线网络技术(北京)有限公司 Obstacle detection method, device, equipment and storage medium
CN108318895A (en) * 2017-12-19 2018-07-24 深圳市海梁科技有限公司 Obstacle recognition method, device and terminal device for automatic driving vehicle
CN109074668A (en) * 2018-08-02 2018-12-21 深圳前海达闼云端智能科技有限公司 Method for path navigation, relevant apparatus and computer readable storage medium
US20210004613A1 (en) * 2019-07-02 2021-01-07 DeepMap Inc. Annotating high definition map data with semantic labels

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115376109A (en) * 2022-10-25 2022-11-22 杭州华橙软件技术有限公司 Obstacle detection method, obstacle detection device, and storage medium

Also Published As

Publication number Publication date
CN112785704B (en) 2024-07-26

Similar Documents

Publication Publication Date Title
KR102062680B1 (en) Laser point cloud based urban road recognition method, device, storage medium and device
CN110147706B (en) Obstacle recognition method and device, storage medium, and electronic device
WO2018068653A1 (en) Point cloud data processing method and apparatus, and storage medium
CN111932943B (en) Dynamic target detection method and device, storage medium and roadbed monitoring equipment
CN110148144A (en) Dividing method and device, storage medium, the electronic device of point cloud data
CN108984741B (en) Map generation method and device, robot and computer-readable storage medium
CN104994360A (en) Video monitoring method and video monitoring system
CN109377694B (en) Monitoring method and system for community vehicles
CN110390314B (en) Visual perception method and equipment
CN110490066B (en) Target detection method and device based on picture analysis and computer equipment
CN111308500B (en) Obstacle sensing method and device based on single-line laser radar and computer terminal
CN114037966A (en) High-precision map feature extraction method, device, medium and electronic equipment
CN113191297A (en) Pavement identification method and device and electronic equipment
CN111026136A (en) Port unmanned sweeper intelligent scheduling method and device based on monitoring equipment
CN113190538A (en) Road construction method and device based on track data, storage medium and terminal
CN115482255A (en) Obstacle tracking method, device, equipment and storage medium
CN112785704A (en) Semantic map construction method, computer readable storage medium and processor
CN117011413B (en) Road image reconstruction method, device, computer equipment and storage medium
CN112507891B (en) Method and device for automatically identifying high-speed intersection and constructing intersection vector
CN116309943B (en) Parking lot semantic map road network construction method and device and electronic equipment
CN112488010A (en) High-precision target extraction method and system based on unmanned aerial vehicle point cloud data
Goga et al. An approach for segmenting 3D LIDAR data using multi-volume grid structures
CN110827340A (en) Map updating method, device and storage medium
CN114838729A (en) Path planning method, device and equipment
CN111208530B (en) Positioning layer generation method and device, high-precision map and high-precision map equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant