CN115892068A - Vehicle control method, device, equipment, medium and vehicle - Google Patents

Vehicle control method, device, equipment, medium and vehicle

Info

Publication number
CN115892068A
Authority
CN
China
Prior art keywords
point cloud
real
time
data
time point
Prior art date
Legal status
Pending
Application number
CN202211476560.5A
Other languages
Chinese (zh)
Inventor
兰晓松
何贝
刘鹤云
张岩
Current Assignee
Beijing Sinian Zhijia Technology Co ltd
Original Assignee
Beijing Sinian Zhijia Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sinian Zhijia Technology Co ltd filed Critical Beijing Sinian Zhijia Technology Co ltd
Priority to CN202211476560.5A
Publication of CN115892068A
Legal status: Pending


Classifications

    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 - Road transport of goods or passengers
    • Y02T 10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 - Engine management systems

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The invention relates to a vehicle control method, device, equipment, medium and vehicle. Based on a semantic segmentation result of real-time image data corresponding to a vehicle driving area, label matching can be performed on real-time point cloud data corresponding to the driving area to determine target point cloud points in the real-time point cloud data; the target point cloud points are then removed from the real-time point cloud data to obtain target point cloud data, and the vehicle is driven under control based on the target point cloud data.

Description

Vehicle control method, device, equipment, medium and vehicle
Technical Field
The present disclosure relates to the field of automatic driving technologies, and in particular, to a vehicle control method, apparatus, device, medium, and vehicle.
Background
In the field of automatic driving, objects such as pedestrians, roads, automobiles and obstacles encountered while an autonomous vehicle is driving can generally be identified from point cloud data, so that the autonomous vehicle can drive safely on the road.
However, lanes in a port are often intruded upon by obstacles such as weeds and birds that do not actually affect driving. When an autonomous working vehicle identifies objects based on point cloud data, it identifies these uniformly as obstacles, so the vehicle stops driving because of obstacles that do not affect driving, which reduces the operating efficiency of the autonomous working vehicle.
Disclosure of Invention
In order to solve the technical problem, the present disclosure provides a vehicle control method, device, apparatus, medium, and vehicle.
In a first aspect, the present disclosure provides a vehicle control method including:
acquiring real-time image data and real-time point cloud data corresponding to a vehicle driving area;
performing semantic segmentation on the real-time image data to obtain a semantic segmentation result, wherein the semantic segmentation result comprises pixel semantic labels of all pixel points in the real-time image data;
performing label matching on the real-time point cloud data and the real-time image data, and determining a target point cloud point in the real-time point cloud data, wherein a point cloud semantic label of the target point cloud point is a target object, and the target object is an obstacle which does not influence driving;
removing the target point cloud points from the real-time point cloud data to obtain target point cloud data;
and controlling the vehicle to drive based on the target point cloud data.
In a second aspect, the present disclosure provides a vehicle control apparatus comprising:
the data acquisition module is used for acquiring real-time image data and real-time point cloud data corresponding to a vehicle driving area;
the semantic segmentation module is used for performing semantic segmentation on the real-time image data to obtain a semantic segmentation result, and the semantic segmentation result comprises pixel semantic labels of all pixel points in the real-time image data;
the system comprises a tag matching module, a real-time image data processing module and a real-time image data processing module, wherein the tag matching module is used for performing tag matching on the real-time point cloud data and the real-time image data and determining a target point cloud point in the real-time point cloud data, a point cloud semantic tag of the target point cloud point is a target object, and the target object is an obstacle which does not influence driving;
the data eliminating module is used for eliminating target point cloud points in the real-time point cloud data to obtain target point cloud data;
and the driving control module is used for controlling the driving of the vehicle based on the target point cloud data.
In a third aspect, the present disclosure provides a vehicle control apparatus comprising:
a processor;
a memory for storing executable instructions;
wherein the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the vehicle control method of the first aspect.
In a fourth aspect, the present disclosure provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the vehicle control method of the first aspect described above.
In a fifth aspect, the present disclosure provides a vehicle comprising at least one of:
the vehicle control device of the second aspect described above;
the vehicle control apparatus of the third aspect described above;
the computer storage medium of the fourth aspect described above.
Compared with the prior art, the technical scheme provided by the embodiment of the disclosure has the following advantages:
the vehicle control method, the vehicle control device, the vehicle control equipment, the medium and the vehicle can perform label matching on real-time point cloud data corresponding to a vehicle driving area based on a semantic segmentation result of the real-time image data corresponding to the vehicle driving area, determine a target point cloud point in the real-time point cloud data, remove the target point cloud point from the real-time point cloud data to obtain target point cloud data, and further perform driving control on the vehicle based on the target point cloud data.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a schematic flow chart diagram of a vehicle control method according to an embodiment of the disclosure;
FIG. 2 is a schematic flow chart diagram of another vehicle control method provided by the disclosed embodiment;
FIG. 3 is a schematic flow chart diagram illustrating yet another vehicle control method provided by an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a vehicle control device according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a vehicle control device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an" and "the" in this disclosure are illustrative rather than limiting, and those skilled in the art will recognize that they should be read as "one or more" unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
For a port, lanes are often intruded upon by obstacles that do not affect driving, such as weeds and birds. Because accurate object identification is difficult when based only on the semantic information of point cloud data, an autonomous working vehicle identifying objects from point cloud data often identifies these uniformly as obstacles; the vehicle therefore stops driving because of obstacles that do not affect driving, which reduces its operating efficiency.
Taking weed intrusion in a lane as an example: because herbicides would affect the food safety of goods in containers, they cannot be used to eradicate the weeds given the safety requirements of port containers, while manual weeding is inefficient and the weeds frequently grow back. It is therefore difficult to prevent the autonomous working vehicle from stopping because the lane has been intruded upon by obstacles that do not affect driving.
In order to solve the above problems, embodiments of the present disclosure provide a vehicle control method, device, apparatus, medium, and vehicle. First, a vehicle control method according to an embodiment of the present disclosure will be described.
Fig. 1 shows a schematic flow chart of a vehicle control method provided by an embodiment of the present disclosure. The vehicle control method shown in fig. 1 may be executed by a vehicle control apparatus. The vehicle control device may be an in-vehicle device mounted on a vehicle or a controller of the vehicle, and the vehicle may be an autonomous working vehicle.
As shown in fig. 1, the vehicle control method may include the following steps.
And S110, acquiring real-time image data and real-time point cloud data corresponding to the vehicle driving area.
In the embodiment of the disclosure, when the vehicle needs to drive in the operation process, the vehicle control device on the vehicle can acquire the real-time image data and the real-time point cloud data corresponding to the vehicle driving area, so as to control the vehicle to drive based on the real-time image data and the real-time point cloud data.
The driving area of the vehicle may be a road area to be driven ahead of the driving direction of the vehicle, which is determined according to a planned driving path.
Specifically, real-time image data of the driving environment, including the vehicle driving area, can be collected in real time by an image acquisition device mounted on the vehicle for capturing the driving environment ahead of the driving direction, and real-time point cloud data of the same driving environment can be collected in real time by a laser radar device mounted on the vehicle for the same purpose.
Therefore, the vehicle control device can acquire real-time image data acquired by the image acquisition device and real-time point cloud data acquired by the laser radar device.
It should be noted that the acquisition time of the real-time image data and the acquisition time of the real-time point cloud data acquired by the vehicle control device need to be the same, so as to ensure the reliability of the vehicle driving control based on the real-time image data and the real-time point cloud data.
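As an illustration of this synchronization requirement, the following is a minimal Python sketch of pairing image frames and point cloud frames by acquisition time. It is not part of the original disclosure; the function name and the tolerance parameter are hypothetical choices:

```python
def pair_synchronized_frames(image_frames, cloud_frames, tolerance_s=0.01):
    """Pair image and point cloud frames whose acquisition times match.

    image_frames / cloud_frames: lists of (timestamp_seconds, data) tuples,
    each sorted by timestamp. Returns (image, cloud) pairs whose acquisition
    times differ by at most tolerance_s.
    """
    pairs, j = [], 0
    for t_img, img in image_frames:
        # Advance to the first cloud frame not earlier than t_img - tolerance.
        while j < len(cloud_frames) and cloud_frames[j][0] < t_img - tolerance_s:
            j += 1
        if j < len(cloud_frames) and abs(cloud_frames[j][0] - t_img) <= tolerance_s:
            pairs.append((img, cloud_frames[j][1]))
    return pairs
```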
S120, performing semantic segmentation on the real-time image data to obtain a semantic segmentation result, wherein the semantic segmentation result comprises pixel semantic labels of all pixel points in the real-time image data.
In the embodiment of the present disclosure, after the real-time image data and the real-time point cloud data are obtained, the vehicle control device may perform semantic segmentation on the real-time image data to determine semantic information of each pixel point in the real-time image data, so as to obtain a semantic segmentation result.
The pixel semantic tags may include various objects that may be present in the driving environment of the vehicle, such as weeds, birds, cones, pedestrians, backgrounds other than roads, and so on.
Further, according to the degree to which they influence the driving of the vehicle, these objects may be divided into object categories such as target objects, i.e., obstacles that do not affect driving; non-target objects, i.e., obstacles that do affect driving; and irrelevant objects, i.e., objects unrelated to driving obstacles. For example, the target objects may include weeds, birds and the like, which do not actually hinder the normal traveling of the vehicle, i.e., the vehicle is unlikely to collide with them in practice; the non-target objects may include cones, pedestrians and the like, which can actually hinder the normal traveling of the vehicle, i.e., the vehicle is likely to collide with them; and the irrelevant objects may include buildings, sky and other background objects outside the road, which are unrelated to the road and therefore to driving obstacles.
Specifically, the vehicle control device may input the real-time image data into a pre-trained semantic segmentation model capable of semantically segmenting the various objects that may appear in the vehicle driving environment, and obtain the semantic segmentation result output by the model. The semantic segmentation result may include the pixel semantic label of each pixel point in the real-time image data, where a pixel semantic label is semantic information representing the object to which the pixel point belongs.
Further, the semantic segmentation model can be obtained by performing offline training on training samples of various objects which are labeled in advance and possibly appear in the vehicle driving environment.
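By way of illustration, the following sketch shows how such a pre-trained model could be invoked to obtain the pixel semantic label of each pixel point. The label set, the seg_model callable and the function name are hypothetical placeholders, since the disclosure does not name a specific segmentation network:

```python
import numpy as np

# Hypothetical label set; the method only requires that obstacles which do
# not affect driving (e.g. weeds, birds) be distinguishable from the rest.
CLASSES = ["road", "weed", "bird", "cone", "pedestrian", "background"]
TARGET_CLASSES = {"weed", "bird"}  # target objects: obstacles that do not affect driving

def segment_image(image, seg_model):
    """Return an (H, W) array of pixel semantic labels (class indices).

    seg_model is assumed to map an (H, W, 3) image to (H, W, C) per-pixel
    class scores; any trained semantic segmentation network could serve.
    """
    scores = seg_model(image)          # (H, W, C) per-pixel class scores
    return np.argmax(scores, axis=-1)  # pixel semantic label for each pixel
```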
And S130, performing label matching on the real-time point cloud data and the real-time image data, and determining a target point cloud point in the real-time point cloud data, wherein the point cloud semantic label of the target point cloud point is a target object, and the target object is an obstacle which does not influence driving.
In the embodiment of the present disclosure, after the pixel semantic label of each pixel point in the real-time image data is obtained, the vehicle control device may perform label matching between the real-time point cloud data and the real-time image data, and determine the target point cloud points in the real-time point cloud data whose point cloud semantic labels are the target object.
Specifically, the vehicle control device may perform label matching on the real-time point cloud data and the real-time image data to obtain a point cloud semantic label of each real-time point cloud point in the real-time point cloud data, and then determine an object type to which the point cloud semantic label belongs for each real-time point cloud point to obtain a target point cloud point with the point cloud semantic label as a target object.
The objects covered by the point cloud semantic labels and the object categories into which they are divided are the same as for the pixel semantic labels, and are not limited herein.
And S140, eliminating the target point cloud points in the real-time point cloud data to obtain target point cloud data.
In the embodiment of the present disclosure, after determining the target point cloud points in the real-time point cloud data, the vehicle control device may remove the target point cloud points in the real-time point cloud data to obtain the target point cloud data without an influence of an obstacle that does not affect driving.
And S150, controlling the vehicle to run based on the target point cloud data.
In the embodiment of the present disclosure, after the target point cloud data is obtained, the vehicle control device may perform driving control on the vehicle based on the target point cloud data.
Specifically, the vehicle control device may perform semantic segmentation on the target point cloud data to obtain an object recognition result for the vehicle driving area, and perform driving control on the vehicle based on the object recognition result. For example, if the object recognition result includes an obstacle, the vehicle is controlled to stop traveling; if the object recognition result does not include an obstacle, the vehicle is controlled to continue traveling.
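A minimal sketch of S140-S150 under these assumptions follows. For brevity it reuses the labels already obtained by label matching instead of running a second semantic segmentation on the target point cloud data, and all names and label-id sets are illustrative:

```python
import numpy as np

def control_from_point_cloud(points, point_labels, target_label_ids,
                             obstacle_label_ids):
    """Remove target point cloud points, then decide whether to stop.

    points: (N, 3) real-time point cloud; point_labels: (N,) point cloud
    semantic labels from label matching. Returns the target point cloud
    data and a driving command.
    """
    keep = ~np.isin(point_labels, list(target_label_ids))
    target_cloud = points[keep]        # target point cloud data (S140)
    kept_labels = point_labels[keep]
    # S150: stop only if a genuine obstacle remains in the driving area.
    if np.isin(kept_labels, list(obstacle_label_ids)).any():
        return target_cloud, "stop"
    return target_cloud, "continue"
```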
In the embodiment of the disclosure, label matching can be performed on the real-time point cloud data corresponding to the vehicle driving area based on the semantic segmentation result of the real-time image data corresponding to that area, the target point cloud points in the real-time point cloud data can be determined and removed from the real-time point cloud data to obtain the target point cloud data, and the vehicle can then be driven under control based on the target point cloud data, so that the vehicle is not stopped by obstacles that do not affect driving.
In another embodiment of the present disclosure, the vehicle control device may perform mapping imaging on the real-time point cloud data to the real-time image data to realize coordinate system conversion of the real-time point cloud data, and further perform label matching on the real-time image data and the real-time point cloud data in the same coordinate system to determine a target point cloud point in the real-time point cloud data, so as to improve the accuracy of the label matching and the determined target point cloud point.
Fig. 2 shows a schematic flow chart of another vehicle control method provided by the embodiment of the disclosure. As shown in fig. 2, the vehicle control method may include the following steps.
S210, acquiring real-time image data and real-time point cloud data corresponding to a vehicle driving area.
S220, performing semantic segmentation on the real-time image data to obtain a semantic segmentation result, wherein the semantic segmentation result comprises pixel semantic labels of all pixel points in the real-time image data.
S210-S220 are similar to S110-S120 in the embodiment shown in fig. 1, and are not described herein again.
And S230, projecting and imaging the real-time point cloud data from the radar coordinate system to an image coordinate system corresponding to the real-time image data to obtain a first point cloud coordinate of each real-time point cloud point in the real-time point cloud data.
In the embodiment of the disclosure, after obtaining the pixel semantic label of each pixel point in the real-time image data, the vehicle control device may project and image the real-time point cloud data from the radar coordinate system into the image coordinate system corresponding to the real-time image data according to a preset projection manner, obtaining the first point cloud coordinate of each real-time point cloud point in the real-time point cloud data. The real-time image data and the real-time point cloud data are thereby placed in the same coordinate system, so that their coordinates can be compared and label matching can be performed.
The radar coordinate system is a coordinate system set with the laser radar device as a reference point, for example, a coordinate system set with the laser radar device as a coordinate origin. The image coordinate system is a coordinate system set with the image capturing device as a reference point, for example, a coordinate system set with the image capturing device as a coordinate origin.
In some examples, projecting and imaging the real-time point cloud data into an image coordinate system corresponding to the real-time image data, and obtaining the first point cloud coordinate of each real-time point cloud point in the real-time point cloud data may specifically include:
mapping the real-time point cloud coordinates of each real-time point cloud point in a radar coordinate system to an image acquisition equipment coordinate system according to the laser radar equipment external parameters and the image acquisition equipment external parameters to obtain second point cloud coordinates of each real-time point cloud point;
and mapping the second point cloud coordinates of each real-time point cloud point to an image coordinate system according to the internal parameters of the image acquisition equipment to obtain the first point cloud coordinates of each real-time point cloud point.
The external parameter of the laser radar equipment is the external parameter from the laser radar equipment to the vehicle body, and specifically, the external parameter can be a 4 × 4 matrix generated according to the radar position and the radar angle. Specifically, the radar position may be an offset amount of the radar with respect to the vehicle body center, and the radar angle may be a rotation angle of the radar with respect to the vehicle body center.
The external parameter of the image acquisition device is the external parameter from the vehicle body to the image acquisition device, and specifically can be a 4 × 4 matrix generated according to the position of the image acquisition device and the angle of the image acquisition device. Specifically, the image capturing device position may be an offset of the vehicle body center relative to the image capturing device, and the image capturing device angle may be a rotation angle of the vehicle body center relative to the image capturing device.
Further, the external parameters of the laser radar device and the external parameters of the image acquisition device can be obtained by calibration in advance, and the internal parameters of the image acquisition device can be an internal parameter matrix set when the image acquisition device leaves a factory.
Specifically, the vehicle control device may map the real-time point cloud coordinates of each real-time point cloud point in the radar coordinate system to the image acquisition device coordinate system through a first mapping relationship formed based on the external parameters of the laser radar device and the external parameters of the image acquisition device to obtain second point cloud coordinates of each real-time point cloud point, and map the second point cloud coordinates of each real-time point cloud point to the image coordinate system through a second mapping relationship formed based on the internal parameters of the image acquisition device to obtain the first point cloud coordinates of each real-time point cloud point.
Taking a real-time point cloud point whose coordinates in the radar coordinate system are $[X_l, Y_l, Z_l]$ as an example, coordinate mapping through the first mapping relationship yields its coordinates $[X_c, Y_c, Z_c]$ in the image acquisition device coordinate system. The specific mapping formula is as follows:

$$[X_c, Y_c, Z_c, 1]^T = RT_{cv} \cdot RT_{vl} \cdot [X_l, Y_l, Z_l, 1]^T$$

wherein $RT_{vl}$ is the external parameter matrix of the laser radar device, $RT_{cv}$ is the external parameter matrix of the image acquisition device, and the appended 1 makes the coordinates homogeneous.

Then, coordinate mapping of $[X_c, Y_c, Z_c]$ through the second mapping relationship yields the coordinates $[u, v]$ in the image coordinate system. The specific mapping formula is as follows:

$$\lambda \cdot [u, v, 1]^T = K \cdot [X_c, Y_c, Z_c]^T$$

wherein $K$ is the internal parameter matrix of the image acquisition device and $\lambda$ is an intermediate scale parameter, $\lambda = Z_c$.
Therefore, in the embodiment of the present disclosure, the mapping imaging of the real-time point cloud data onto the real-time image data can be realized through the above formulas, so that the coordinates of the real-time image data and the real-time point cloud data become comparable and matchable.
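The two mapping relationships compose directly. The numpy sketch below assumes RT_vl and RT_cv are given as 4 × 4 homogeneous matrices and K as a 3 × 3 internal parameter matrix; the function name is illustrative:

```python
import numpy as np

def project_to_image(points_lidar, RT_vl, RT_cv, K):
    """Project (N, 3) radar-frame points into image pixel coordinates.

    RT_vl: 4x4 lidar-to-vehicle-body external parameters; RT_cv: 4x4
    body-to-camera external parameters; K: 3x3 internal parameter matrix.
    Returns the (N, 2) first point cloud coordinates [u, v] and the (N,)
    camera-frame depths Z_c.
    """
    n = points_lidar.shape[0]
    homog = np.hstack([points_lidar, np.ones((n, 1))])  # [X_l, Y_l, Z_l, 1]
    cam = (RT_cv @ RT_vl @ homog.T).T[:, :3]            # [X_c, Y_c, Z_c]
    z = cam[:, 2]
    uvw = (K @ cam.T).T                                 # lambda * [u, v, 1]
    uv = uvw[:, :2] / z[:, None]                        # divide by lambda = Z_c
    return uv, z
```

Points with non-positive depth Z_c lie behind the image plane; in practice they would be discarded together with the invalid point cloud points described in the embodiment of fig. 3 below.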
S240, aiming at each real-time point cloud point, determining a point cloud semantic label of the real-time point cloud point based on the semantic segmentation result and the first point cloud coordinate of the real-time point cloud point.
In the embodiment of the disclosure, after the first point cloud coordinate of each real-time point cloud point in the real-time point cloud data is obtained, the vehicle control device may classify each real-time point cloud point according to the numerical type of the coordinate components of its first point cloud coordinate. For each real-time point cloud point, the vehicle control device may then determine its point cloud semantic label, based on the semantic segmentation result and the first point cloud coordinate, using the matching method corresponding to the class to which the point belongs.
In some embodiments, determining the point cloud semantic label of the real-time point cloud point based on the semantic segmentation result and the first point cloud coordinate of the real-time point cloud point may specifically include:
and when each coordinate component of the first point cloud coordinate is an integer value, taking the pixel semantic label of the pixel point corresponding to the first point cloud coordinate as the point cloud semantic label of the real-time point cloud point.
Since each coordinate component of the pixel coordinates of the pixel points in the real-time image data is an integer value, a first point cloud coordinate whose components are all integers corresponds one-to-one to a pixel coordinate; that is, the real-time point cloud point to which the first point cloud coordinate belongs has an exactly matching pixel point at the same coordinate position. Real-time point cloud points whose first point cloud coordinates have all-integer components can therefore be classified into one category.
Specifically, for such real-time point cloud points, the vehicle control device may directly determine, according to a first point cloud coordinate of the real-time point cloud point, a pixel point having the same coordinate position as the real-time point cloud point, as a pixel point corresponding to the first point cloud coordinate, and use a pixel semantic label of the pixel point as a point cloud semantic label of the real-time point cloud point.
In other embodiments, determining the point cloud semantic label of the real-time point cloud point based on the semantic segmentation result and the first point cloud coordinate of the real-time point cloud point may specifically include:
and when at least one coordinate component of the first point cloud coordinate is a non-integer value, determining the point cloud semantic label of the real-time point cloud point based on the pixel semantic labels of the pixel points surrounding the first point cloud coordinate.
Since each coordinate component of the pixel coordinates of the pixel points in the real-time image data is an integer value, a first point cloud coordinate with at least one non-integer component does not correspond one-to-one to any pixel coordinate; that is, the real-time point cloud point to which the first point cloud coordinate belongs has no exactly matching pixel point at the same coordinate position. Real-time point cloud points whose first point cloud coordinates have at least one non-integer component can therefore be classified into another category.
Specifically, for such real-time point cloud points, the vehicle control device cannot directly use the pixel semantic label of a certain pixel point as the point cloud semantic label of the real-time point cloud point, but needs to determine the point cloud semantic label of the real-time point cloud point by using the pixel semantic labels of the pixels surrounding the first point cloud coordinate, that is, the pixel semantic labels of the pixels surrounding the real-time point cloud point.
In some examples, the vehicle control device may determine, according to a first point cloud coordinate of the real-time point cloud point, all pixel points of which pixel coordinates surround the first point cloud coordinate, select, among the pixel points, a pixel point to which the pixel coordinate closest to the first point cloud coordinate belongs, and then use a pixel semantic label of the selected pixel point as the point cloud semantic label of the real-time point cloud point.
In other examples, the vehicle control device may determine, according to the first point cloud coordinate of the real-time point cloud point, all pixel points whose pixel coordinates surround the first point cloud coordinate, and determine the pixel semantic label of each of these pixel points. The pixel semantic labels are then counted: if the count of the pixel semantic label corresponding to some object is greater than the count of the label corresponding to every other object, the most frequent pixel semantic label is used as the point cloud semantic label of the real-time point cloud point; otherwise, the pixel point whose pixel coordinate is closest to the first point cloud coordinate is selected among these pixel points, and its pixel semantic label is used as the point cloud semantic label of the real-time point cloud point.
Therefore, in the embodiment of the disclosure, the pixel points related to a real-time point cloud point can be determined according to its first point cloud coordinate, and a corresponding point cloud semantic label can be matched to the real-time point cloud point based on the determined pixel points, so that the semantic information of the point cloud points can be analyzed accurately and efficiently.
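Combining the integer-coordinate case with the surrounding-pixel strategies above, a label-matching sketch might look as follows. It assumes the surrounding pixel coordinates lie inside the image (invalid point cloud points having been removed beforehand), and the function name is illustrative:

```python
import numpy as np
from collections import Counter

def match_point_label(uv, label_map):
    """Assign a point cloud semantic label to one first point cloud
    coordinate uv = (u, v), given the (H, W) pixel semantic label map."""
    u, v = float(uv[0]), float(uv[1])
    if u.is_integer() and v.is_integer():
        # Every component is an integer: an exactly matching pixel exists.
        return label_map[int(v), int(u)]
    # Otherwise use the pixel points surrounding the first point cloud
    # coordinate (the four corners of the enclosing pixel cell).
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    corners = [(u0, v0), (u0 + 1, v0), (u0, v0 + 1), (u0 + 1, v0 + 1)]
    labels = [label_map[cv, cu] for cu, cv in corners]
    counts = Counter(labels).most_common()
    # Majority vote if one pixel semantic label strictly dominates ...
    if len(counts) == 1 or counts[0][1] > counts[1][1]:
        return counts[0][0]
    # ... otherwise fall back to the nearest surrounding pixel point.
    d2 = [(cu - u) ** 2 + (cv - v) ** 2 for cu, cv in corners]
    return labels[int(np.argmin(d2))]
```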
And S250, respectively taking the real-time point cloud points whose point cloud semantic labels are the target object as the target point cloud points.
In the embodiment of the present disclosure, after the point cloud semantic labels of all real-time point cloud points are determined, the object category to which the point cloud semantic label belongs may be determined for each real-time point cloud point, and the real-time point cloud points whose point cloud semantic labels are the target object are respectively taken as the target point cloud points.
And S260, eliminating target point cloud points in the real-time point cloud data to obtain target point cloud data.
And S270, controlling the vehicle to run based on the target point cloud data.
S260-S270 are similar to S140-S150 in the embodiment shown in fig. 1, and are not repeated herein.
In the embodiment of the disclosure, before the vehicle is controlled to drive based on the point cloud data, the point cloud points related to obstacles that do not affect driving can be removed; when the vehicle is subsequently controlled based on the point cloud data, such obstacles on a lane in the port can no longer interfere with driving, which prevents the autonomous working vehicle in the port from stopping because of them and improves its operating efficiency. Specifically, by exploiting the rich semantic information of the image data together with the external parameters of the laser radar device and the internal and external parameters of the image acquisition device, semantic information can be assigned to each point cloud point, so that the target point cloud points related to obstacles that do not affect driving are identified and the identification accuracy of the target point cloud points is improved. In summary, compared with preventing interference with the vehicle by manual weeding, the method of the disclosed embodiment saves a large amount of human resources and time; compared with weeding by herbicide, it preserves the food safety of the port; and compared with recognizing objects by point cloud semantic segmentation alone, its recognition results are more accurate and efficient.
Fig. 3 shows a schematic flow chart of another vehicle control method provided by the embodiment of the disclosure. As shown in fig. 3, the vehicle control method may include the following steps.
S310, acquiring real-time image data and real-time point cloud data corresponding to the vehicle driving area.
S320, performing semantic segmentation on the real-time image data to obtain a semantic segmentation result, wherein the semantic segmentation result comprises pixel semantic labels of all pixel points in the real-time image data.
And S330, projecting and imaging the real-time point cloud data from the radar coordinate system to an image coordinate system corresponding to the real-time image data to obtain a first point cloud coordinate of each real-time point cloud point in the real-time point cloud data.
S310-S330 are similar to S210-S230 in the embodiment shown in fig. 2, and are not described herein again.
And S340, eliminating invalid point cloud points in the real-time point cloud data, wherein the invalid point cloud points are point cloud points of which the first point cloud coordinates are located outside the coordinate range corresponding to the real-time image data.
In the embodiment of the present disclosure, after the first point cloud coordinate of each real-time point cloud point in the real-time point cloud data is obtained, it may be determined, for each real-time point cloud point, whether its first point cloud coordinate falls within the coordinate range corresponding to the real-time image data. If the first point cloud coordinate of the real-time point cloud point falls within that coordinate range, the real-time point cloud point is determined to be a valid point cloud point and is retained in the real-time point cloud data. Otherwise, if the first point cloud coordinate does not fall within the coordinate range corresponding to the real-time image data, that is, the first point cloud coordinate lies outside that range, the real-time point cloud point is determined to be an invalid point cloud point and is removed from the real-time point cloud data.
The coordinate range corresponding to the real-time image data is a coordinate range formed by pixel coordinates of all pixel points in the real-time image data in an image coordinate system.
In general, the data acquisition range of the laser radar device is larger than that of the image acquisition device, so that after the real-time point cloud data is projected and imaged into the image coordinate system corresponding to the real-time image data from the radar coordinate system, part of the point cloud data does not fall into the data acquisition range of the image acquisition device, and semantic analysis on the part of the point cloud data cannot be realized through a semantic segmentation result of the real-time image data. Therefore, the part of point cloud data can be removed before label matching is carried out, so that the data processing amount is reduced, and the data processing efficiency is improved.
If the first point cloud coordinate of a real-time point cloud point falls within the coordinate range corresponding to the real-time image data, the real-time point cloud point is located in the driving environment reflected by the real-time image data, that is, it falls within the data acquisition range of the image acquisition device, and the semantic segmentation result of the real-time image data can therefore provide label matching for it. Such a real-time point cloud point is a valid point cloud point and can undergo the subsequent label matching processing. Conversely, if the first point cloud coordinate of a real-time point cloud point does not fall within the coordinate range corresponding to the real-time image data, the real-time point cloud point is not located in the driving environment reflected by the real-time image data, that is, it is outside the data acquisition range of the image acquisition device, and the semantic segmentation result of the real-time image data cannot provide label matching for it. Such a real-time point cloud point is an invalid point cloud point, and no subsequent label matching processing needs to be performed on it.
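A bounds-check sketch of this filtering step is given below. The positive-depth test is an added practical guard rather than something stated in the text, and the names are illustrative:

```python
import numpy as np

def remove_invalid_points(points, uv, depths, height, width):
    """Keep only real-time point cloud points whose first point cloud
    coordinates fall within the coordinate range of the real-time image.

    points: (N, 3) cloud; uv: (N, 2) first point cloud coordinates;
    depths: (N,) camera-frame Z_c (points behind the camera are invalid).
    """
    valid = (
        (depths > 0)
        & (uv[:, 0] >= 0) & (uv[:, 0] < width)
        & (uv[:, 1] >= 0) & (uv[:, 1] < height)
    )
    return points[valid], uv[valid]
```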
And S350, aiming at each real-time point cloud point, determining a point cloud semantic label of the real-time point cloud point based on the semantic segmentation result and the first point cloud coordinate of the real-time point cloud point.
And S360, respectively taking the real-time point cloud points whose point cloud semantic labels are the target object as the target point cloud points.
And S370, eliminating the target point cloud points in the real-time point cloud data to obtain target point cloud data.
And S380, controlling the vehicle to run based on the target point cloud data.
S350-S380 are similar to S240-S270 in the embodiment shown in fig. 2, and are not described herein again.
It should be noted that, in order to further reduce the data processing amount and improve the data processing efficiency, the real-time point cloud data in S370 may be point cloud data after the invalid point cloud points are removed in S340; in order to improve the accuracy of the driving control of the vehicle, the real-time point cloud data in S370 may also be the point cloud data acquired in S310, which is not limited herein.
In the embodiment of the disclosure, before performing label matching on the real-time point cloud data and the real-time image data, invalid point cloud points irrelevant to the driving environment corresponding to the real-time image data can be removed, and then only the point cloud points relevant to the driving environment corresponding to the real-time image data are subjected to label matching processing, so that the data processing amount can be reduced, the data processing efficiency can be improved, and the operating efficiency of the automatic driving operating vehicle can be improved. And before the vehicle is controlled to drive based on the point cloud data, the point cloud points related to the obstacles which do not influence the driving can be removed, and then when the vehicle is controlled to drive based on the point cloud data, the interference of the obstacles which do not influence the driving on the lane in the port on the driving can be avoided, so that the automatic driving operation vehicle in the port is prevented from stopping driving due to the obstacles which do not influence the driving, and the operation efficiency of the automatic driving operation vehicle is improved.
Fig. 4 shows a schematic structural diagram of a vehicle control device provided by the embodiment of the disclosure. The vehicle control device 400 shown in fig. 4 may be applied to a vehicle control apparatus. The vehicle control device may be an in-vehicle device mounted on a vehicle or a controller of the vehicle, and the vehicle may be an autonomous working vehicle.
As shown in fig. 4, the vehicle control apparatus 400 may include a data acquisition module 410, a semantic segmentation module 420, a tag matching module 430, a data culling module 440, and a driving control module 450.
The data obtaining module 410 may be configured to obtain real-time image data and real-time point cloud data corresponding to a driving area of a vehicle.
The semantic segmentation module 420 may be configured to perform semantic segmentation on the real-time image data to obtain a semantic segmentation result, where the semantic segmentation result includes a pixel semantic label of each pixel point in the real-time image data.
The tag matching module 430 may be configured to perform tag matching on the real-time point cloud data and the real-time image data, and determine a target point cloud point in the real-time point cloud data, where a point cloud semantic tag of the target point cloud point is a target object, and the target object is an obstacle that does not affect driving.
The data eliminating module 440 may be configured to eliminate a target point cloud point in the real-time point cloud data to obtain target point cloud data.
The driving control module 450 may be configured to control driving of the vehicle based on the target point cloud data.
In the embodiment of the disclosure, label matching can be performed on the real-time point cloud data corresponding to the vehicle driving area based on the semantic segmentation result of the real-time image data corresponding to that area, the target point cloud points in the real-time point cloud data can be determined and removed from the real-time point cloud data to obtain the target point cloud data, and the vehicle can then be driven under control based on the target point cloud data.
In some embodiments of the present disclosure, the tag matching module 430 may include a projection imaging unit, a first determination unit, and a second determination unit.
The projection imaging unit can be used for projecting and imaging the real-time point cloud data from the radar coordinate system to the image coordinate system corresponding to the real-time image data to obtain the first point cloud coordinates of each real-time point cloud point in the real-time point cloud data.
The first determining unit may be configured to determine, for each real-time point cloud point, a point cloud semantic tag of the real-time point cloud point based on the semantic segmentation result and the first point cloud coordinate of the real-time point cloud point.
The second determining unit may be configured to take each real-time point cloud point whose point cloud semantic label is the target object as a target point cloud point.
In some embodiments of the present disclosure, the projection imaging unit may be further configured to map real-time point cloud coordinates of each real-time point cloud point in the radar coordinate system into an image acquisition device coordinate system according to the laser radar device external parameter and the image acquisition device external parameter, to obtain second point cloud coordinates of each real-time point cloud point; and mapping the second point cloud coordinates of each real-time point cloud point to an image coordinate system according to the internal parameters of the image acquisition equipment to obtain the first point cloud coordinates of each real-time point cloud point.
In some embodiments of the present disclosure, the first determining unit may be further configured to, when each coordinate component of the first point cloud coordinate is an integer value, use a pixel semantic tag of a pixel point corresponding to the first point cloud coordinate as a point cloud semantic tag of a real-time point cloud point.
In some embodiments of the present disclosure, the first determining unit may be further configured to determine, when at least one coordinate component of the first point cloud coordinate is a non-integer value, the point cloud semantic label of the real-time point cloud point based on the pixel semantic labels of the pixel points surrounding the first point cloud coordinate.
In some embodiments of the present disclosure, the tag matching module 430 may further include a point cloud rejecting unit, where the point cloud rejecting unit is configured to reject invalid point cloud points in the real-time point cloud data before determining a point cloud semantic tag of each real-time point cloud point based on the semantic segmentation result and the first point cloud coordinate of the real-time point cloud point, where the invalid point cloud points are point cloud points whose first point cloud coordinate is outside the coordinate range corresponding to the real-time image data.
It should be noted that the vehicle control device 400 shown in fig. 4 may execute each step in the method embodiments shown in fig. 1 to 3, and implement each process and effect in the method embodiments shown in fig. 1 to 3, which are not described herein again.
Fig. 5 shows a schematic structural diagram of a vehicle control device provided by an embodiment of the present disclosure. The vehicle control device shown in fig. 5 may be an in-vehicle device mounted on a vehicle or a controller of the vehicle, and the vehicle may be an autonomous working vehicle.
As shown in fig. 5, the vehicle control apparatus may include a processor 501 and a memory 502 storing computer program instructions.
Specifically, the processor 501 may include a Central Processing Unit (CPU) or an Application Specific Integrated Circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
Memory 502 may include mass storage for information or instructions. By way of example and not limitation, memory 502 may include a Hard Disk Drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, magnetic tape, or a Universal Serial Bus (USB) drive, or a combination of two or more of these. Memory 502 may include removable or non-removable (or fixed) media, where appropriate. Memory 502 may be internal or external to the integrated gateway device, where appropriate. In a particular embodiment, the memory 502 is non-volatile solid-state memory. In a particular embodiment, the memory 502 includes Read-Only Memory (ROM). Where appropriate, the ROM may be mask-programmed ROM, Programmable ROM (PROM), Erasable PROM (EPROM), Electrically Erasable PROM (EEPROM), Electrically Alterable ROM (EAROM), or flash memory, or a combination of two or more of these.
The processor 501 reads and executes computer program instructions stored in the memory 502 to perform the steps of the vehicle control method provided by the embodiments of the present disclosure.
In one example, the vehicle control device may also include a transceiver 503 and a bus 504. As shown in fig. 5, the processor 501, the memory 502 and the transceiver 503 are connected via a bus 504 to complete communication.
Bus 504 includes hardware, software, or both. By way of example and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association Local Bus (VLB), another suitable bus, or a combination of two or more of these. Bus 504 may include one or more buses, where appropriate. Although specific buses are described and shown in the embodiments of the application, any suitable buses or interconnects are contemplated by the application.
The disclosed embodiments also provide a computer-readable storage medium, which may store a computer program, which, when executed by a processor, causes the processor to implement the vehicle control method provided by the disclosed embodiments.
The storage medium described above may be, for example, the memory 502 storing computer program instructions executable by the processor 501 of the vehicle control device to perform the vehicle control method provided by the embodiments of the present disclosure. Alternatively, the storage medium may be a non-transitory computer-readable storage medium, for example a ROM, a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
The embodiment of the present application further provides a vehicle, where the vehicle may include at least one of a vehicle control device, vehicle control equipment, and a computer storage medium; the details of the vehicle control device, the vehicle control equipment, and the computer storage medium are consistent with the above description and are not repeated here.
It is noted that, in this document, relational terms such as "first" and "second," and the like, are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the term "comprises/comprising" is intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
The foregoing are merely exemplary embodiments of the present disclosure, which enable those skilled in the art to understand or practice the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A vehicle control method, characterized by comprising:
acquiring real-time image data and real-time point cloud data corresponding to a vehicle driving area;
performing semantic segmentation on the real-time image data to obtain a semantic segmentation result, wherein the semantic segmentation result comprises pixel semantic labels of all pixel points in the real-time image data;
performing label matching on the real-time point cloud data and the real-time image data, and determining a target point cloud point in the real-time point cloud data, wherein a point cloud semantic label of the target point cloud point is a target object, and the target object is an obstacle which does not influence driving;
eliminating the target point cloud points in the real-time point cloud data to obtain target point cloud data;
and controlling the vehicle to drive based on the target point cloud data.
2. The method of claim 1, wherein the tag matching the real-time point cloud data with the real-time image data to determine a target point cloud point in the real-time point cloud data comprises:
projecting and imaging the real-time point cloud data to an image coordinate system corresponding to the real-time image data from a radar coordinate system to obtain a first point cloud coordinate of each real-time point cloud point in the real-time point cloud data;
for each real-time point cloud point, determining a point cloud semantic label of the real-time point cloud point based on the semantic segmentation result and the first point cloud coordinate of the real-time point cloud point;
and respectively taking the real-time point cloud points whose point cloud semantic labels are the target object as the target point cloud points.
3. The method of claim 2, wherein the projecting and imaging the real-time point cloud data into an image coordinate system corresponding to the real-time image data to obtain a first point cloud coordinate of each real-time point cloud point in the real-time point cloud data comprises:
mapping the real-time point cloud coordinates of each real-time point cloud point in the radar coordinate system to an image acquisition equipment coordinate system according to the laser radar equipment external parameters and the image acquisition equipment external parameters to obtain second point cloud coordinates of each real-time point cloud point;
and mapping the second point cloud coordinates of each real-time point cloud point to the image coordinate system according to the internal parameters of the image acquisition equipment to obtain the first point cloud coordinates of each real-time point cloud point.
4. The method of claim 2, wherein determining the point cloud semantic label of the real-time point cloud point based on the semantic segmentation result and the first point cloud coordinate of the real-time point cloud point comprises:
and when each coordinate component of the first point cloud coordinate is an integer value, taking the pixel semantic label of the pixel point corresponding to the first point cloud coordinate as the point cloud semantic label of the real-time point cloud point.
5. The method of claim 2, wherein determining the point cloud semantic label of the real-time point cloud point based on the semantic segmentation result and the first point cloud coordinate of the real-time point cloud point comprises:
when at least one coordinate component of the first point cloud coordinate is a non-integer value, determining the point cloud semantic label of the real-time point cloud point based on the pixel semantic labels of all pixel points surrounding the first point cloud coordinate.
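The claim leaves the combination rule over the surrounding pixels open; a majority vote over the 2×2 neighbourhood is one plausible reading, sketched here as an assumption:

```python
import numpy as np

def label_at_subpixel(seg, uv):
    """At least one non-integer component: vote over the four surrounding pixels."""
    u0, v0 = int(np.floor(uv[0])), int(np.floor(uv[1]))
    neighbours = seg[v0:v0 + 2, u0:u0 + 2].ravel()   # the 2x2 pixel neighbourhood
    values, counts = np.unique(neighbours, return_counts=True)
    return values[np.argmax(counts)]                 # majority label (assumption)
```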
6. The method of claim 2, wherein before determining, for each real-time point cloud point, the point cloud semantic label of the real-time point cloud point based on the semantic segmentation result and the first point cloud coordinate of the real-time point cloud point, the method further comprises:
eliminating invalid point cloud points from the real-time point cloud data, wherein the invalid point cloud points are point cloud points whose first point cloud coordinates lie outside the coordinate range corresponding to the real-time image data.
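A sketch of this invalid-point filter, assuming `uv` holds the first point cloud coordinates and `width`/`height` come from the real-time image:

```python
import numpy as np

def drop_invalid(cloud, uv, width, height):
    """Eliminate points whose projected coordinates fall outside the image."""
    u, v = uv[:, 0], uv[:, 1]
    valid = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    return cloud[valid], uv[valid]
```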
7. A vehicle control apparatus characterized by comprising:
the data acquisition module is used for acquiring real-time image data and real-time point cloud data corresponding to a vehicle driving area;
the semantic segmentation module is used for performing semantic segmentation on the real-time image data to obtain a semantic segmentation result, and the semantic segmentation result comprises pixel semantic labels of all pixel points in the real-time image data;
the label matching module is used for performing label matching on the real-time point cloud data and the real-time image data and determining target point cloud points in the real-time point cloud data, wherein the point cloud semantic label of each target point cloud point is a target object, and the target object is an obstacle that does not affect driving;
the data eliminating module is used for eliminating the target point cloud points from the real-time point cloud data to obtain target point cloud data;
and the driving control module is used for controlling the driving of the vehicle based on the target point cloud data.
8. A vehicle control apparatus characterized by comprising:
a processor;
a memory for storing executable instructions;
wherein the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the vehicle control method of any of claims 1-6.
9. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, causes the processor to implement the vehicle control method of any one of claims 1 to 6.
10. A vehicle, characterized by comprising at least one of the following:
the vehicle control apparatus according to claim 7;
the vehicle control apparatus according to claim 8;
the computer storage medium of claim 9.
CN202211476560.5A 2022-11-23 2022-11-23 Vehicle control method, device, equipment, medium and vehicle Pending CN115892068A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211476560.5A CN115892068A (en) 2022-11-23 2022-11-23 Vehicle control method, device, equipment, medium and vehicle

Publications (1)

Publication Number Publication Date
CN115892068A (en) 2023-04-04

Family

ID=86478371

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211476560.5A Pending CN115892068A (en) 2022-11-23 2022-11-23 Vehicle control method, device, equipment, medium and vehicle

Country Status (1)

Country Link
CN (1) CN115892068A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116772887A (en) * 2023-08-25 2023-09-19 北京斯年智驾科技有限公司 Vehicle course initialization method, system, device and readable storage medium
CN116772887B (en) * 2023-08-25 2023-11-14 北京斯年智驾科技有限公司 Vehicle course initialization method, system, device and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination