CN112622923B - Method and device for controlling a vehicle - Google Patents

Method and device for controlling a vehicle

Info

Publication number
CN112622923B
Authority
CN
China
Prior art keywords
point cloud
cloud data
clusters
obstacle
vehicle
Prior art date
Legal status
Active
Application number
CN201910906102.2A
Other languages
Chinese (zh)
Other versions
CN112622923A (en)
Inventor
刘祥
张双
高斌
朱晓星
薛晶晶
王俊平
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201910906102.2A
Publication of CN112622923A
Application granted
Publication of CN112622923B

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00: Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00: Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units, or advanced driver assistance systems for ensuring comfort, stability and safety or drive control systems for propelling or retarding the vehicle
    • B60W30/08: Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/23: Clustering techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00: Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W2050/0001: Details of the control system
    • B60W2050/0002: Automatic control, details of type of controller or control system architecture
    • B60W2050/0004: In digital systems, e.g. discrete-time systems involving sampling
    • B60W2050/0005: Processor details or data handling, e.g. memory registers or chip architecture
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00: Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W2050/0001: Details of the control system
    • B60W2050/0019: Control system elements or transfer functions

Abstract

The embodiments of the application disclose a method and a device for controlling a vehicle. One embodiment of the method comprises: acquiring first point cloud data collected by a vehicle while driving; identifying obstacles in the first point cloud data; determining second point cloud data according to the first point cloud data and the point cloud data corresponding to the identified obstacles; clustering the second point cloud data, and determining at least one obstacle corresponding to the second point cloud data according to a clustering result; and sending a control signal to the vehicle to control the vehicle according to the determined at least one obstacle. This embodiment allows point cloud data corresponding to obstacles of unknown type to be processed further, so that the vehicle can be guided to drive more reasonably.

Description

Method and device for controlling a vehicle
Technical Field
The embodiment of the application relates to the technical field of vehicle control, in particular to a method and a device for controlling a vehicle.
Background
Acquiring three-dimensional information of the surrounding environment with a depth sensor and a position-and-attitude sensor mounted on a fixed station or a mobile platform is widely adopted because it is efficient, real-time, and highly precise. Because a scanned scene contains different types of objects, such as the ground, buildings, trees, and vehicles, the point cloud data corresponding to these objects must be separated from one another by point cloud segmentation so that each object can be modeled individually. Existing point cloud segmentation schemes, however, easily identify multiple obstacles as one large obstacle when the obstacles are close together or overlap. Such recognition results may affect the driving of an autonomous vehicle.
Disclosure of Invention
The embodiment of the application provides a method and a device for controlling a vehicle.
In a first aspect, an embodiment of the present application provides a method for controlling a vehicle, including: acquiring first point cloud data acquired by a vehicle in a driving process; identifying an obstacle in the first point cloud data; determining second point cloud data according to the first point cloud data and the point cloud data corresponding to the identified obstacle; clustering the second point cloud data, and determining at least one obstacle corresponding to the second point cloud data according to a clustering result; and sending a control signal to the vehicle to control the vehicle according to the determined at least one obstacle.
In some embodiments, clustering the second point cloud data and determining at least one obstacle corresponding to the second point cloud data according to the clustering result includes: dividing the second point cloud data into at least two clusters, and executing the following clustering steps: determining the bounding boxes of the at least two clusters; selecting a plurality of points on the at least two bounding boxes; calculating the distance between any two clusters of the at least two clusters according to the selected points; and determining at least one obstacle corresponding to the second point cloud data according to the obtained distances.
In some embodiments, dividing the second point cloud data into at least two clusters includes: projecting the second point cloud data onto the ground to obtain projection point data; dividing the projection point data into a grid to obtain at least two grid cells; and taking the projection points included in a single grid cell as one cluster, so as to obtain at least two clusters.
In some embodiments, selecting the plurality of points on the at least two bounding boxes includes: for each of the at least two bounding boxes, determining the center of the region enclosed by the bounding box; determining a plurality of rays with the center as their endpoint; and taking the intersections of the rays with the bounding box as the selected points.
In some embodiments, determining at least one obstacle corresponding to the second point cloud data according to the obtained distances includes: calculating the maximum distance between any two clusters of the at least two clusters according to the selected points; and, in response to determining that the maximum distance is smaller than a preset threshold, combining the two clusters into one cluster to obtain at least two updated clusters and continuing to execute the clustering steps.
In a second aspect, an embodiment of the present application provides an apparatus for controlling a vehicle, including: an acquisition unit configured to acquire first point cloud data collected by a vehicle while driving; an identification unit configured to identify an obstacle in the first point cloud data; a determining unit configured to determine second point cloud data according to the first point cloud data and the point cloud data corresponding to the identified obstacle; a clustering unit configured to cluster the second point cloud data and determine at least one obstacle corresponding to the second point cloud data according to a clustering result; and a control unit configured to send a control signal to the vehicle to control the vehicle according to the determined at least one obstacle.
In some embodiments, the clustering unit is further configured to: divide the second point cloud data into at least two clusters, and execute the following clustering steps: determining the bounding boxes of the at least two clusters; selecting a plurality of points on the at least two bounding boxes; calculating the distance between any two clusters of the at least two clusters according to the selected points; and determining at least one obstacle corresponding to the second point cloud data according to the obtained distances.
In some embodiments, the clustering unit is further configured to: project the second point cloud data onto the ground to obtain projection point data; divide the projection point data into a grid to obtain at least two grid cells; and take the projection points included in a single grid cell as one cluster, so as to obtain at least two clusters.
In some embodiments, the clustering unit is further configured to: for each of the at least two bounding boxes, determine the center of the region enclosed by the bounding box; determine a plurality of rays with the center as their endpoint; and take the intersections of the rays with the bounding box as the selected points.
In some embodiments, the clustering unit is further configured to: calculate the maximum distance between any two clusters of the at least two clusters according to the selected points; and, in response to determining that the maximum distance is smaller than a preset threshold, combine the two clusters into one cluster to obtain at least two updated clusters and continue to execute the clustering steps.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; a storage device, on which one or more programs are stored, which, when executed by the one or more processors, cause the one or more processors to implement the method as described in any of the embodiments of the first aspect.
In a fourth aspect, the present application provides a computer-readable medium, on which a computer program is stored, which when executed by a processor implements the method as described in any one of the embodiments of the first aspect.
According to the method and device for controlling a vehicle provided by the embodiments of the application, the first point cloud data collected by the vehicle while driving is acquired first, and obstacles in the first point cloud data are identified. Second point cloud data is then determined according to the first point cloud data and the identified obstacles. The second point cloud data is clustered, and at least one obstacle corresponding to it is determined according to the clustering result. Finally, a control signal is sent to the vehicle to control the vehicle according to the determined at least one obstacle. In this way, point cloud data corresponding to obstacles of unknown type can be processed further, so that the vehicle can be guided to drive more reasonably.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments made with reference to the following drawings:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present application may be applied;
FIG. 2 is a flow chart of one embodiment of a method for controlling a vehicle according to the present application;
FIG. 3 is a schematic diagram of one application scenario of a method for controlling a vehicle according to the present application;
FIG. 4 is a flow chart of one embodiment of clustering second point cloud data in a method for controlling a vehicle according to the present application;
FIG. 5 is a schematic block diagram of an embodiment of an apparatus for controlling a vehicle according to the present application;
FIG. 6 is a schematic block diagram of a computer system suitable for use in implementing an electronic device according to embodiments of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the method for controlling a vehicle or the apparatus for controlling a vehicle of the present application may be applied.
As shown in fig. 1, system architecture 100 may include autonomous vehicles 101, 102, 103, a network 104, and a server 105. Network 104 is used to provide a medium for communication links between autonomous vehicles 101, 102, 103 and server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
Various sensors, such as lidars, may be mounted on the autonomous vehicles 101, 102, 103 to collect point cloud data of their driving environment. Various electronic devices, such as a navigation device, an unmanned-vehicle controller, an anti-lock braking system, and a brake force distribution system, may also be mounted on the autonomous vehicles 101, 102, 103. The autonomous vehicles 101, 102, 103 may be vehicles having an autonomous driving mode, including fully autonomous vehicles and vehicles that can be switched into an autonomous driving mode.
The server 105 may be a server that provides various services, such as a background server that processes point cloud data collected by the autonomous vehicles 101, 102, 103. For example, the background server may receive point cloud data sent by the autonomous vehicles 101, 102, 103, process the received point cloud data to obtain information of obstacles included in the point cloud data, and return the information of the obstacles to the autonomous vehicles 101, 102, 103.
The server 105 may be hardware or software. When the server 105 is hardware, it may be implemented as a distributed server cluster composed of a plurality of servers, or as a single server. When the server 105 is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module, which is not specifically limited here.
It should be noted that the method for controlling the vehicle provided in the embodiment of the present application may be executed by the autonomous vehicles 101, 102, 103, or may be executed by the server 105. Accordingly, the means for controlling the vehicle may be provided in the autonomous vehicles 101, 102, 103, or may be provided in the server 105.
It should be understood that the number of autonomous vehicles, networks, and servers in fig. 1 is merely illustrative. There may be any number of autonomous vehicles, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for controlling a vehicle according to the present application is shown. The method for controlling a vehicle of the embodiment includes the steps of:
step 201, first point cloud data collected in the driving process of a vehicle are obtained.
In the present embodiment, an execution subject of the method for controlling a vehicle (for example, the autonomous vehicles 101, 102, 103 or the server 105 shown in fig. 1) may acquire first point cloud data acquired by the vehicle during driving in a wired connection manner or a wireless connection manner. The vehicle can be an automatic driving vehicle, and a laser radar sensor can be installed on the vehicle to collect first point cloud data in the driving process of the vehicle.
At step 202, an obstacle in the first point cloud data is identified.
After the first point cloud data is acquired, the execution subject may perform point cloud recognition on the first point cloud data to determine the obstacles it contains, and may also determine the type of each obstacle at the same time. In particular, the execution subject may identify obstacles in the first point cloud data using a trained point cloud recognition model, which may be a neural network. The execution subject may input the first point cloud data at the input side of the neural network and obtain the type of each obstacle at its output side. Obstacle types may include pedestrian, bicycle, vehicle, and the like.
Step 203, determining second point cloud data according to the first point cloud data and the point cloud data corresponding to the identified obstacle.
After identifying the obstacles included in the first point cloud data, the execution subject may take the point cloud data other than the point cloud data corresponding to the identified obstacles as the second point cloud data. The second point cloud data can then be clustered and analyzed to determine the number of obstacles it contains.
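A minimal sketch of this step is given below purely for illustration. It assumes the first point cloud is an (N, 3) NumPy array and that each identified obstacle is reported as an array of indices into that cloud; the function name and data layout are assumptions of this text, not part of the patent.

```python
import numpy as np

def second_point_cloud(first_points: np.ndarray, obstacle_indices: list) -> np.ndarray:
    """Keep only the points that no identified obstacle accounts for."""
    covered = np.zeros(len(first_points), dtype=bool)
    for idx in obstacle_indices:      # indices of points belonging to one recognized obstacle
        covered[idx] = True
    return first_points[~covered]     # the remaining points form the second point cloud
```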
Step 204, clustering the second point cloud data, and determining at least one obstacle corresponding to the second point cloud data according to the clustering result.
After the second point cloud data is determined, the execution subject may perform cluster analysis on it. Specifically, the execution subject may cluster the second point cloud data using various clustering algorithms, for example a minimum spanning tree or the K-means algorithm. Clustering yields at least one cluster, and each cluster may be treated as one obstacle, giving at least one obstacle corresponding to the second point cloud data.
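As an illustrative sketch of this step only: the patent names minimum spanning tree and K-means as example algorithms, while the code below swaps in scikit-learn's DBSCAN because it does not require the number of obstacles to be known in advance, and it clusters on the ground-plane coordinates. The function name and parameter values are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_obstacles(second_points: np.ndarray, eps: float = 0.5, min_samples: int = 5) -> list:
    """Cluster the second point cloud; each resulting cluster is treated as one obstacle."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(second_points[:, :2])
    return [second_points[labels == k] for k in sorted(set(labels)) if k != -1]  # -1 marks noise
```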
Step 205, sending a control signal to the vehicle to control the vehicle according to the determined at least one obstacle.
After determining the at least one obstacle, the execution subject may send a control signal to the vehicle according to the position of each obstacle, so that the vehicle is controlled to drive clear of the obstacles.
With continued reference to fig. 3, fig. 3 is a schematic diagram of one application scenario of the method for controlling a vehicle according to the present embodiment. In the application scenario of fig. 3, the autonomous vehicle 301 collects first point cloud data during driving using a lidar sensor mounted on it. After point cloud recognition is performed on the first point cloud data, the point cloud data other than pedestrians, bicycles and vehicles is determined to be the second point cloud data 302. In the prior art, the second point cloud data 302 could be identified as one large obstacle whose boundary coincides with the body of the autonomous vehicle 301, which would affect the decision-making function of the autonomous vehicle 301. Instead, the autonomous vehicle further performs cluster analysis on the second point cloud data 302 to obtain at least one obstacle 303 corresponding to it. Finally, the autonomous vehicle 301 controls its traveling direction and speed according to the position of each obstacle 303 so as to avoid the obstacles 303.
According to the method for controlling a vehicle provided by this embodiment, the first point cloud data collected by the vehicle while driving is acquired, and obstacles in it are identified. Second point cloud data is then determined according to the first point cloud data and the identified obstacles. The second point cloud data is clustered, and at least one obstacle corresponding to it is determined according to the clustering result. Finally, a control signal is sent to the vehicle to control the vehicle according to the determined at least one obstacle. In this way, point cloud data corresponding to obstacles of unknown type can be processed further, so that the vehicle can be guided to drive more reasonably.
With continued reference to FIG. 4, a flow 400 of one embodiment of clustering the second point cloud data in a method for controlling a vehicle according to the present application is shown. As shown in fig. 4, the method for controlling a vehicle of the present embodiment may perform clustering processing on the second point cloud data by:
step 401, dividing the second point cloud data into at least two clusters.
In this embodiment, the execution subject may first divide the second point cloud data into at least two clusters, for example according to the number of points in the point cloud data.
In some optional implementations of the present embodiment, the execution subject may divide the second point cloud data through the following steps, not shown in fig. 4: projecting the second point cloud data onto the ground to obtain projection point data; dividing the projection point data into a grid to obtain at least two grid cells; and taking the projection points included in a single grid cell as one cluster, so as to obtain at least two clusters.
In this implementation, the execution subject first projects the second point cloud data onto the ground to obtain projection point data and then divides the projection point data into a grid. In particular, the execution subject may set the size of each grid cell to 25 cm by 25 cm. The projection points falling in each grid cell are taken as one cluster, so that at least two clusters are obtained.
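A minimal sketch of this grid-based initial clustering, assuming points are given as an (N, 3) array in a ground-aligned coordinate frame so that dropping the z coordinate projects them to the ground; the 0.25 m cell size mirrors the 25 cm by 25 cm grid mentioned above, and the function name is an assumption of this text.

```python
import numpy as np
from collections import defaultdict

def grid_clusters(points: np.ndarray, cell: float = 0.25) -> list:
    """Project points to the ground, bin them into cell-by-cell grid squares, one cluster per square."""
    cells = np.floor(points[:, :2] / cell).astype(int)   # grid index of each projected point
    buckets = defaultdict(list)
    for i, key in enumerate(map(tuple, cells)):
        buckets[key].append(i)
    return [points[idx] for idx in buckets.values()]
```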
After obtaining at least two clusters, the execution subject may continue to perform clustering steps 402-404.
At step 402, bounding boxes of at least two clusters are determined.
The execution subject may determine the bounding box of each of the above clusters, that is, a border capable of enveloping all of the points in the cluster, which may be composed of straight lines or curves. For example, the execution subject may take the minimum circumscribed circle or the minimum circumscribed rectangle of a cluster as its bounding box.
In some optional implementations of this embodiment, the execution subject may use the boundary of the grid cell corresponding to each cluster as that cluster's bounding box, so that the bounding box of each cluster is obtained.
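For illustration, the sketch below uses the axis-aligned minimum circumscribed rectangle of a cluster's ground projection as its bounding box; the text above equally allows a minimum circumscribed circle or the boundary of the grid cell, and the function name is an assumption of this text.

```python
import numpy as np

def cluster_bounding_box(cluster: np.ndarray) -> tuple:
    """Axis-aligned rectangle (xmin, ymin, xmax, ymax) enclosing the cluster's ground projection."""
    xy = cluster[:, :2]
    xmin, ymin = xy.min(axis=0)
    xmax, ymax = xy.max(axis=0)
    return float(xmin), float(ymin), float(xmax), float(ymax)
```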
In step 403, a plurality of points are selected on the at least two bounding boxes.
After determining the bounding box of each cluster, the execution subject may pick points on each bounding box that will be used to calculate the distances between clusters. Specifically, the execution subject may select points at intervals of a preset length along each bounding box, or may select them by sampling.
In some alternative implementations of the present embodiment, the execution subject may pick the plurality of points through the following steps, not shown in fig. 4: for each of the at least two bounding boxes, determining the center of the region enclosed by the bounding box; determining a plurality of rays with the center as their endpoint; and taking the intersections of the rays with the bounding box as the selected points.
In this implementation, the execution subject first determines the center of the region enclosed by the bounding box and then determines a plurality of rays with the center as their endpoint. Specifically, the execution subject may cast the rays at a preset angular interval, for example 60° or 90°. The intersections of the rays with the bounding box are then taken as the selected points.
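A sketch of the ray-based point selection under the assumption that the bounding box is the axis-aligned rectangle from the previous sketch; a 90° step yields four border points per cluster, matching the example given further below. The function name and default step are assumptions of this text.

```python
import numpy as np

def boundary_points(box: tuple, angle_step_deg: float = 90.0) -> np.ndarray:
    """Cast rays from the rectangle center at a fixed angular step; return where each ray meets the border."""
    xmin, ymin, xmax, ymax = box
    cx, cy = (xmin + xmax) / 2.0, (ymin + ymax) / 2.0
    hx, hy = (xmax - xmin) / 2.0, (ymax - ymin) / 2.0
    points = []
    for deg in np.arange(0.0, 360.0, angle_step_deg):
        dx, dy = np.cos(np.radians(deg)), np.sin(np.radians(deg))
        tx = hx / abs(dx) if dx != 0 else np.inf   # distance to the vertical edges along the ray
        ty = hy / abs(dy) if dy != 0 else np.inf   # distance to the horizontal edges along the ray
        t = min(tx, ty)                            # the border is reached at the nearer edge
        points.append((cx + t * dx, cy + t * dy))
    return np.array(points)
```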
Step 404, calculating the distance between any two clusters of the at least two clusters according to the selected points.
In this embodiment, after the execution subject has selected points for each cluster, the distances between clusters may be calculated from those points. In particular, the execution subject may calculate the distance between every selected point of one cluster and every selected point of another cluster. For example, if 4 points are chosen per cluster, there are 16 distances between any two clusters. Computing distances only between these sampled points effectively reduces the amount of computation in the clustering process.
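Continuing the same sketch, the distance computation between two clusters reduces to the pairwise distances between their sampled border points (4 points each gives a 4 x 4 = 16 distance matrix); the function below is an illustrative helper introduced in this text.

```python
import numpy as np

def pairwise_distances(points_a: np.ndarray, points_b: np.ndarray) -> np.ndarray:
    """All distances between the sampled border points of two clusters, shape (len(points_a), len(points_b))."""
    diff = points_a[:, None, :] - points_b[None, :, :]
    return np.linalg.norm(diff, axis=-1)
```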
Step 405, determining at least one obstacle corresponding to the second point cloud data according to the obtained distances.
After obtaining the distances, the execution subject may determine whether the distance between two clusters is smaller than a preset threshold. If it is, the two clusters are considered to belong to the same class and are merged; otherwise, the two clusters are not considered to belong to the same class and are not merged.
In this embodiment, the execution subject may take the maximum of the distances between the sampled points of two clusters as the distance between those clusters. This avoids merging clusters into one large cluster merely because their minimum distance is small, that is, it avoids producing an oversized obstacle.
In some optional implementations of this embodiment, step 405 may be implemented through the following steps, not shown in fig. 4: calculating the maximum distance between any two clusters of the at least two clusters according to the selected points; and, in response to determining that the maximum distance is smaller than the preset threshold, combining the two clusters into one cluster to obtain at least two updated clusters and continuing to perform clustering steps 402-404.
In this implementation, the execution subject calculates the maximum distance between every two clusters according to the selected points and, if that maximum distance is smaller than the preset threshold, combines the two clusters into one cluster. Each cluster is updated in this way to obtain at least two updated clusters. The clustering steps are then executed again on the updated clusters until the distance between any two clusters is greater than or equal to the preset threshold, at which point clustering is finished.
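Putting the pieces together, a compact sketch of the iterative merge is given below. It reuses the helpers sketched earlier (cluster_bounding_box, boundary_points, pairwise_distances), all of which are illustrative names introduced in this text, and the 0.5 m threshold is an assumed value rather than one taken from the patent.

```python
import numpy as np

def merge_clusters(clusters: list, threshold: float = 0.5) -> list:
    """Repeatedly merge any two clusters whose maximum border-point distance is below the threshold."""
    merged = True
    while merged and len(clusters) > 1:
        merged = False
        samples = [boundary_points(cluster_bounding_box(c)) for c in clusters]
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if pairwise_distances(samples[i], samples[j]).max() < threshold:
                    clusters[i] = np.vstack([clusters[i], clusters[j]])  # combine the two clusters
                    del clusters[j]
                    merged = True
                    break
            if merged:
                break
    return clusters  # clustering ends once every pairwise maximum distance is >= threshold
```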
The method for controlling a vehicle provided by the above embodiments of the application can cluster point cloud data corresponding to obstacles of unknown type while reducing the amount of computation required for clustering, avoid producing oversized obstacles, and thereby control the autonomous vehicle accurately.
With further reference to fig. 5, as an implementation of the method shown in the above figures, the present application provides an embodiment of an apparatus for controlling a vehicle, which corresponds to the embodiment of the method shown in fig. 2, and which is particularly applicable in various electronic devices.
As shown in fig. 5, the apparatus 500 for controlling a vehicle of the present embodiment includes: an acquisition unit 501, a recognition unit 502, a determination unit 503, a clustering unit 504, and a control unit 505.
The acquiring unit 501 is configured to acquire first point cloud data acquired by a vehicle during driving.
An identifying unit 502 configured to identify an obstacle in the first point cloud data.
A determining unit 503 configured to determine the second point cloud data according to the first point cloud data and the point cloud data corresponding to the identified obstacle.
A clustering unit 504 configured to cluster the second point cloud data and determine at least one obstacle corresponding to the second point cloud data according to a clustering result.
A control unit 505 configured to send a control signal to the vehicle to control the vehicle in accordance with the determined at least one obstacle.
In some optional implementations of this embodiment, the clustering unit 504 may be further configured to: divide the second point cloud data into at least two clusters, and execute the following clustering steps: determining the bounding boxes of the at least two clusters; selecting a plurality of points on the at least two bounding boxes; calculating the distance between any two clusters of the at least two clusters according to the selected points; and determining at least one obstacle corresponding to the second point cloud data according to the obtained distances.
In some optional implementations of this embodiment, the clustering unit 504 may be further configured to: project the second point cloud data onto the ground to obtain projection point data; divide the projection point data into a grid to obtain at least two grid cells; and take the projection points included in a single grid cell as one cluster, so as to obtain at least two clusters.
In some optional implementations of this embodiment, the clustering unit 504 may be further configured to: for each of the at least two bounding boxes, determine the center of the region enclosed by the bounding box; determine a plurality of rays with the center as their endpoint; and take the intersections of the rays with the bounding box as the selected points.
In some optional implementations of this embodiment, the clustering unit 504 may be further configured to: for any two clusters of the at least two clusters, calculate the maximum distance between the two clusters according to the selected points; and, in response to determining that the maximum distance is smaller than the preset threshold, combine the two clusters into one cluster to obtain at least two updated clusters and continue to execute the clustering steps.
It should be understood that units 501 to 505 recited in the apparatus 500 for controlling a vehicle correspond to respective steps in the method described with reference to fig. 2. Thus, the operations and features described above with respect to the method for controlling a vehicle are equally applicable to the apparatus 500 and the units contained therein and will not be described again here.
Referring now to FIG. 6, shown is a schematic diagram of an electronic device 600 suitable for use in implementing embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a vehicle-mounted terminal (e.g., a vehicle-mounted navigation terminal), and the like. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, electronic device 600 may include a processing means (e.g., central processing unit, graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 6 may represent one device or may represent multiple devices as desired.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of embodiments of the present disclosure. It should be noted that the computer readable medium described in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In embodiments of the present disclosure, however, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring first point cloud data acquired by a vehicle in a driving process; identifying an obstacle in the first point cloud data, and determining the type of the identified obstacle; in response to determining that the type of the obstacle comprises an unknown type, determining second point cloud data corresponding to the obstacle of the unknown type; clustering the second point cloud data, and determining at least one obstacle corresponding to the second point cloud data according to a clustering result; sending a control signal to the vehicle to control the vehicle in accordance with the determined at least one obstacle.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes an acquisition unit, a recognition unit, a determination unit, a clustering unit, and a control unit. The names of the units do not form a limitation to the unit itself in some cases, and for example, the acquiring unit may also be described as a "unit that acquires first point cloud data acquired by the vehicle during traveling".
The foregoing description is only exemplary of the preferred embodiments of the disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combinations of the above features, and also covers other technical solutions formed by arbitrarily combining the above features or their equivalents without departing from the inventive concept, for example technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (8)

1. A method for controlling a vehicle, comprising:
acquiring first point cloud data acquired by a vehicle in a driving process;
identifying an obstacle in the first point cloud data, wherein the type of the identified obstacle in the first point cloud data is known;
determining second point cloud data according to the first point cloud data and the point cloud data corresponding to the identified obstacle;
clustering the second point cloud data to obtain at least one cluster, and taking each cluster as an obstacle to obtain at least one obstacle corresponding to the second point cloud data;
transmitting a control signal to the vehicle to control the vehicle in accordance with the determined at least one obstacle,
wherein the clustering of the second point cloud data and the determining of at least one obstacle corresponding to the second point cloud data according to the clustering result comprise:
dividing the second point cloud data into at least two clusters, and executing the following clustering steps: determining bounding boxes of the at least two clusters; selecting a plurality of points on the at least two bounding boxes; calculating the distance between any two clusters of the at least two clusters according to the selected points;
and determining at least one obstacle corresponding to the second point cloud data according to the obtained distances,
wherein the selecting of a plurality of points on the at least two bounding boxes includes:
for each of the at least two bounding boxes, determining the center of the region enclosed by the bounding box;
determining a plurality of rays with the center as their endpoint;
and taking the intersections of the rays with the bounding box as the selected points.
2. The method of claim 1, wherein the dividing the second point cloud data into at least two clusters comprises:
projecting the second point cloud data onto the ground to obtain projection point data;
dividing the projection point data into a grid to obtain at least two grid cells;
and taking the projection points included in a single grid cell as one cluster, so as to obtain at least two clusters.
3. The method of claim 1, wherein the determining at least one obstacle corresponding to the second point cloud data from the obtained distances comprises:
calculating the maximum distance between any two clusters of the at least two clusters according to the selected points;
and combining the two clusters into one cluster, in response to determining that the maximum distance is smaller than a preset threshold, to obtain at least two updated clusters, and continuing to execute the clustering steps.
4. An apparatus for controlling a vehicle, comprising:
an acquisition unit configured to acquire first point cloud data collected by a vehicle while driving;
an identification unit configured to identify an obstacle in the first point cloud data, wherein a type of the identified obstacle in the first point cloud data is known;
a determining unit configured to determine second point cloud data according to the first point cloud data and the point cloud data corresponding to the identified obstacle;
a clustering unit configured to cluster the second point cloud data to obtain at least one cluster and obtain at least one obstacle corresponding to the second point cloud data by using each cluster as one obstacle;
a control unit configured to send a control signal to the vehicle to control the vehicle according to the determined at least one obstacle,
wherein the clustering unit is further configured to:
divide the second point cloud data into at least two clusters, and execute the following clustering steps: determining bounding boxes of the at least two clusters; selecting a plurality of points on the at least two bounding boxes; calculating the distance between any two clusters of the at least two clusters according to the selected points;
and determining at least one obstacle corresponding to the second point cloud data according to the obtained distances,
wherein the clustering unit is further configured to:
for each of the at least two bounding boxes, determine the center of the region enclosed by the bounding box;
determine a plurality of rays with the center as their endpoint;
and take the intersections of the rays with the bounding box as the selected points.
5. The apparatus of claim 4, wherein the clustering unit is further configured to:
project the second point cloud data onto the ground to obtain projection point data;
divide the projection point data into a grid to obtain at least two grid cells;
and take the projection points included in a single grid cell as one cluster, so as to obtain at least two clusters.
6. The apparatus of claim 4, wherein the clustering unit is further configured to:
calculate the maximum distance between any two clusters of the at least two clusters according to the selected points;
and, in response to determining that the maximum distance is smaller than a preset threshold, combine the two clusters into one cluster to obtain at least two updated clusters and continue to execute the clustering steps.
7. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-3.
8. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-3.
CN201910906102.2A 2019-09-24 2019-09-24 Method and device for controlling a vehicle Active CN112622923B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910906102.2A CN112622923B (en) 2019-09-24 2019-09-24 Method and device for controlling a vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910906102.2A CN112622923B (en) 2019-09-24 2019-09-24 Method and device for controlling a vehicle

Publications (2)

Publication Number Publication Date
CN112622923A CN112622923A (en) 2021-04-09
CN112622923B true CN112622923B (en) 2022-07-05

Family

ID=75283217

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910906102.2A Active CN112622923B (en) 2019-09-24 2019-09-24 Method and device for controlling a vehicle

Country Status (1)

Country Link
CN (1) CN112622923B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113587937A (en) * 2021-06-29 2021-11-02 阿波罗智联(北京)科技有限公司 Vehicle positioning method and device, electronic equipment and storage medium
CN113609914A (en) * 2021-07-09 2021-11-05 北京经纬恒润科技股份有限公司 Obstacle identification method and device and vehicle control system
CN116263602A (en) * 2021-12-15 2023-06-16 速感科技(北京)有限公司 Autonomous mobile apparatus, control method and apparatus thereof, and storage medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6036724B2 (en) * 2014-02-17 2016-11-30 トヨタ自動車株式会社 Vehicle surrounding situation recognition device and vehicle control device
CN107064955A (en) * 2017-04-19 2017-08-18 北京汽车集团有限公司 barrier clustering method and device
CN107609520B (en) * 2017-09-15 2020-07-03 四川大学 Obstacle identification method and device and electronic equipment
CN110196429A (en) * 2018-04-02 2019-09-03 北京航空航天大学 Vehicle target recognition methods, storage medium, processor and system
CN109285220B (en) * 2018-08-30 2022-11-15 阿波罗智能技术(北京)有限公司 Three-dimensional scene map generation method, device, equipment and storage medium
CN109558854B (en) * 2018-12-05 2021-04-06 百度在线网络技术(北京)有限公司 Obstacle sensing method and device, electronic equipment and storage medium
CN110009718B (en) * 2019-03-07 2021-09-24 深兰科技(上海)有限公司 Three-dimensional high-precision map generation method and device
CN109978075B (en) * 2019-04-04 2021-09-28 江苏满运软件科技有限公司 Vehicle false position information identification method and device, electronic equipment and storage medium
CN110097109B (en) * 2019-04-25 2022-12-16 湖北工业大学 Road environment obstacle detection system and method based on deep learning
CN110246159B (en) * 2019-06-14 2023-03-28 湖南大学 3D target motion analysis method based on vision and radar information fusion

Also Published As

Publication number Publication date
CN112622923A (en) 2021-04-09

Similar Documents

Publication Publication Date Title
CN110687549B (en) Obstacle detection method and device
US11328429B2 (en) Method and apparatus for detecting ground point cloud points
CN110376594B (en) Intelligent navigation method and system based on topological graph
CN112622923B (en) Method and device for controlling a vehicle
CN110654381B (en) Method and device for controlling a vehicle
CN111461981B (en) Error estimation method and device for point cloud stitching algorithm
CN112630799B (en) Method and apparatus for outputting information
CN115540896B (en) Path planning method and device, electronic equipment and computer readable medium
CN110717918B (en) Pedestrian detection method and device
CN110696826B (en) Method and device for controlling a vehicle
CN112258519A (en) Automatic extraction method and device for way-giving line of road in high-precision map making
CN112651266A (en) Pedestrian detection method and device
CN110654380A (en) Method and device for controlling a vehicle
CN112558036B (en) Method and device for outputting information
CN112558035B (en) Method and device for estimating the ground
CN112630798B (en) Method and apparatus for estimating ground
CN115761702B (en) Vehicle track generation method, device, electronic equipment and computer readable medium
CN112668371B (en) Method and device for outputting information
CN115565374A (en) Logistics vehicle driving optimization method and device, electronic equipment and readable storage medium
CN115061386A (en) Intelligent driving automatic simulation test system and related equipment
CN112526477B (en) Method and device for processing information
CN112634487B (en) Method and apparatus for outputting information
CN112560548B (en) Method and device for outputting information
US20220317301A1 (en) Modeling foliage in a synthetic environment
US20220318450A1 (en) Lidar Atmospheric Effects in Simulation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant