CN116301061A - Unmanned vehicle pedestrian-following driving method and device, electronic equipment and readable storage medium - Google Patents

Unmanned vehicle pedestrian-following driving method and device, electronic equipment and readable storage medium

Info

Publication number
CN116301061A
CN116301061A (application CN202310287652.7A)
Authority
CN
China
Prior art keywords
target pedestrian
point cloud
unmanned vehicle
base station
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310287652.7A
Other languages
Chinese (zh)
Inventor
于飞
刘言
迟骋
汪平凡
郭元明
郝晓伟
陈嘉鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xingwang Marine Electric Technology Co ltd
Original Assignee
Beijing Xingwang Marine Electric Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xingwang Marine Electric Technology Co ltd filed Critical Beijing Xingwang Marine Electric Technology Co ltd
Priority to CN202310287652.7A priority Critical patent/CN116301061A/en
Publication of CN116301061A publication Critical patent/CN116301061A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/12Target-seeking control
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides a pedestrian-following driving method, a device, electronic equipment and a readable storage medium for an unmanned vehicle. A laser radar sensor, together with a first base station and a second base station of an ultra-wideband sensor, is mounted on the unmanned vehicle; the positioning tag of the ultra-wideband sensor is worn on the body of the target pedestrian. The method comprises the following steps: when the target pedestrian is located in a designated area in front of the unmanned vehicle, determining a first position of the target pedestrian relative to the unmanned vehicle according to the point cloud data for the target pedestrian read from the laser radar sensor, and determining a second position of the target pedestrian relative to the unmanned vehicle according to the distance data between the target pedestrian and the first and second base stations read from the ultra-wideband sensor; judging whether the first position and the second position are the same; and, when the first position and the second position are the same, controlling the unmanned vehicle to drive along with the target pedestrian. The method helps improve the efficiency with which the unmanned vehicle follows a person.

Description

Unmanned vehicle pedestrian-following driving method and device, electronic equipment and readable storage medium
Technical Field
The present disclosure relates to the field of unmanned vehicles, and in particular, to a pedestrian-following driving method and apparatus for an unmanned vehicle, an electronic device, and a readable storage medium.
Background
In the process of transporting materials, a vehicle (e.g., a truck) is often needed to carry them. When the road over which the materials are transported is rough (e.g., a mountain road) and an ordinary vehicle cannot pass, a smaller unmanned vehicle can be chosen to carry the materials through instead. When an unmanned vehicle transports the materials, it can travel behind a pedestrian walking ahead: the pedestrian leads on the road in front, and the unmanned vehicle follows.
In the prior art, either a vision-based pedestrian tracking method or a lidar-based pedestrian tracking method is typically chosen to make the unmanned vehicle drive along with a pedestrian (following for short). A vision-based method is easily affected by illumination conditions: under poor lighting, blurred video frames make pedestrian features difficult to extract, causing tracking to fail and lowering the following efficiency. A lidar-based method overcomes the illumination factor, but in practical application, if similar obstacles exist near the tracked pedestrian (such as other pedestrians or other pedestrian-like objects), the lidar matching algorithm may mistake a similar obstacle for the tracking target; such mismatching easily causes tracking failure and likewise affects following efficiency.
Disclosure of Invention
In view of the foregoing, an object of the present application is to provide a pedestrian-following driving method, apparatus, electronic device and readable storage medium for an unmanned vehicle, so as to improve the efficiency of the unmanned vehicle driving along with a pedestrian.
In a first aspect, an embodiment of the present application provides a pedestrian-following driving method for an unmanned vehicle, where a laser radar sensor and a first base station and a second base station in an ultra-wideband sensor are installed on the unmanned vehicle; the positioning tag in the ultra-wideband sensor is arranged on a target pedestrian; the method comprises the following steps:
determining a first position of the target pedestrian relative to the unmanned vehicle according to the point cloud data for the target pedestrian read from the laser radar sensor when the target pedestrian is positioned in a designated area in front of the unmanned vehicle, and determining a second position of the target pedestrian relative to the unmanned vehicle according to the distance data between the target pedestrian and the first base station and the second base station read from the ultra-wideband sensor;
judging whether the first position and the second position are the same;
and when the first position and the second position are the same, controlling the unmanned vehicle to follow the target pedestrian to run.
With reference to the first aspect, the embodiments of the present application provide a first possible implementation manner of the first aspect, where the method further includes:
and when the first position and the second position are different, controlling the unmanned vehicle to stop running, and reporting warning information for representing the following failure to a designated terminal.
With reference to the first aspect, the embodiments of the present application provide a second possible implementation manner of the first aspect, where the method further includes:
and converting the coordinate system corresponding to the laser radar sensor and the coordinate system corresponding to the ultra-wideband sensor into the same appointed coordinate system.
With reference to the first possible implementation manner of the first aspect, the present embodiment provides a third possible implementation manner of the first aspect, where the method further includes:
reading point cloud data for the target pedestrian from the laser radar sensor, and reading distance data between the target pedestrian and the first base station and the second base station from the ultra-wideband sensor;
judging whether the point cloud data and the distance data are read or not;
and when the point cloud data and/or the distance data are not read, reporting first prompt information for indicating data reading failure to the appointed terminal.
With reference to the first aspect, an embodiment of the present application provides a fourth possible implementation manner of the first aspect, wherein the determining, according to the point cloud data for the target pedestrian read from the lidar sensor, a first position of the target pedestrian with respect to the unmanned vehicle includes:
after the point cloud data aiming at the target pedestrian are read from the laser radar sensor, filtering out the point cloud data outside the appointed area in the point cloud data to obtain first point cloud data in the appointed area;
clustering the first point cloud data to gather the first point cloud data with the same characteristics into one type to obtain at least one point cloud cluster;
determining a target point cloud cluster corresponding to the target pedestrian from all the point cloud clusters according to the preset characteristics of the target pedestrian;
carrying out Kalman filtering on the first point cloud data in the target point cloud cluster to obtain second point cloud data;
and determining a first position of the target pedestrian relative to the unmanned vehicle according to the second point cloud data.
With reference to the first aspect, the embodiments of the present application provide a fifth possible implementation manner of the first aspect, wherein the distance data includes first distance data between the target pedestrian and the first base station and second distance data between the target pedestrian and the second base station; the distance between the first base station and the second base station is a fixed distance; the determining a second position of the target pedestrian relative to the unmanned vehicle according to the distance data of the target pedestrian, the first base station and the second base station, which are read from the ultra-wideband sensor, includes:
after the first distance data and the second distance data are read from the ultra-wideband sensor, performing Kalman filtering on the first distance data to obtain filtered third distance data, and performing Kalman filtering on the second distance data to obtain filtered fourth distance data;
and determining a second position of the target pedestrian relative to the unmanned vehicle according to the third distance data, the fourth distance data and the fixed distance.
With reference to the first possible implementation manner of the first aspect, the present embodiment provides a sixth possible implementation manner of the first aspect, where the method further includes:
when a travel suspension instruction is received, controlling the unmanned vehicle to temporarily stop traveling;
continuously reading point cloud data aiming at the target pedestrian from the laser radar sensor and distance data of the target pedestrian, the first base station and the second base station from the ultra-wideband sensor in the process that the unmanned vehicle temporarily stops running;
when a continuous driving instruction is received, detecting whether the point cloud data and the distance data are read;
when the point cloud data and the distance data are read, determining a first position of the target pedestrian relative to the unmanned vehicle according to the point cloud data, and determining a second position of the target pedestrian relative to the unmanned vehicle according to the distance data;
when the first position and the second position are the same, controlling the unmanned vehicle to follow the target pedestrian to run;
and when the point cloud data and/or the distance data are not read, reporting second prompt information for indicating data reading failure to the appointed terminal.
In a second aspect, an embodiment of the present application further provides a pedestrian-following driving device for an unmanned vehicle, where a laser radar sensor and a first base station and a second base station in an ultra-wideband sensor are installed on the unmanned vehicle; the positioning tag in the ultra-wideband sensor is arranged on a target pedestrian; the device comprises:
a first determining module configured to determine, when the target pedestrian is located in a specified area in front of the unmanned vehicle, a first position of the target pedestrian with respect to the unmanned vehicle according to the point cloud data for the target pedestrian read from the lidar sensor, and determine a second position of the target pedestrian with respect to the unmanned vehicle according to the distance data between the target pedestrian and the first and second base stations read from the ultra-wideband sensor;
the first judging module is used for judging whether the first position and the second position are the same or not;
and the first control module is used for controlling the unmanned vehicle to follow the target pedestrian to run when the first position and the second position are the same.
With reference to the second aspect, embodiments of the present application provide a first possible implementation manner of the second aspect, where the apparatus further includes:
and the second control module is used for controlling the unmanned vehicle to stop running when the first position and the second position are different, and reporting warning information for representing the following failure to the appointed terminal.
With reference to the second aspect, embodiments of the present application provide a second possible implementation manner of the second aspect, where the apparatus further includes:
and the conversion module is used for converting the coordinate system corresponding to the laser radar sensor and the coordinate system corresponding to the ultra-wideband sensor into the same appointed coordinate system.
With reference to the first possible implementation manner of the second aspect, the present embodiment provides a third possible implementation manner of the second aspect, where the apparatus further includes:
the first reading module is used for reading point cloud data aiming at the target pedestrian from the laser radar sensor and reading distance data of the target pedestrian from the ultra-wideband sensor and the first base station and the second base station;
the second judging module is used for judging whether the point cloud data and the distance data are read;
and the first reporting module is used for reporting first prompt information for indicating data reading failure to the appointed terminal when the point cloud data and/or the distance data are not read.
With reference to the second aspect, an embodiment of the present application provides a fourth possible implementation manner of the second aspect, where the first determining module is specifically configured to, when determining, according to the point cloud data for the target pedestrian read from the lidar sensor, a first position of the target pedestrian with respect to the unmanned vehicle:
after the point cloud data aiming at the target pedestrian are read from the laser radar sensor, filtering out the point cloud data outside the appointed area in the point cloud data to obtain first point cloud data in the appointed area;
clustering the first point cloud data to gather the first point cloud data with the same characteristics into one type to obtain at least one point cloud cluster;
determining a target point cloud cluster corresponding to the target pedestrian from all the point cloud clusters according to the preset characteristics of the target pedestrian;
carrying out Kalman filtering on the first point cloud data in the target point cloud cluster to obtain second point cloud data;
and determining a first position of the target pedestrian relative to the unmanned vehicle according to the second point cloud data.
With reference to the second aspect, embodiments of the present application provide a fifth possible implementation manner of the second aspect, wherein the distance data includes first distance data between the target pedestrian and the first base station and second distance data between the target pedestrian and the second base station; the distance between the first base station and the second base station is a fixed distance; the first determining module is configured to, when determining the second position of the target pedestrian relative to the unmanned vehicle according to the distance data between the target pedestrian and the first base station and the second base station read from the ultra-wideband sensor, specifically:
after the first distance data and the second distance data are read from the ultra-wideband sensor, performing Kalman filtering on the first distance data to obtain filtered third distance data, and performing Kalman filtering on the second distance data to obtain filtered fourth distance data;
and determining a second position of the target pedestrian relative to the unmanned vehicle according to the third distance data, the fourth distance data and the fixed distance.
With reference to the first possible implementation manner of the second aspect, the present embodiment provides a sixth possible implementation manner of the second aspect, where the apparatus further includes:
the third control module is used for controlling the unmanned vehicle to temporarily stop running when receiving a running suspension instruction;
the second reading module is used for continuously reading the point cloud data aiming at the target pedestrian from the laser radar sensor and reading the distance data of the target pedestrian, the first base station and the second base station from the ultra-wideband sensor in the process that the unmanned vehicle temporarily stops running;
the detection module is used for detecting whether the point cloud data and the distance data are read when a continuous running instruction is received;
the second determining module is used for determining a first position of the target pedestrian relative to the unmanned vehicle according to the point cloud data and determining a second position of the target pedestrian relative to the unmanned vehicle according to the distance data when the point cloud data and the distance data are read;
the fourth control module is used for controlling the unmanned vehicle to follow the target pedestrian to run when the first position and the second position are the same;
and the second reporting module is used for reporting second prompt information for indicating data reading failure to the appointed terminal when the point cloud data and/or the distance data are not read.
In a third aspect, embodiments of the present application further provide an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory in communication via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of any one of the possible implementations of the first aspect.
In a fourth aspect, the present embodiments also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of any of the possible implementations of the first aspect described above.
According to the pedestrian-following driving method, device, electronic equipment and readable storage medium provided herein, while following a target pedestrian, the position of the target pedestrian is determined separately by a laser radar sensor and by an ultra-wideband sensor, and whether the two positions are the same is then judged. When the positions are the same, the target pedestrian has been correctly identified, and following proceeds. In this embodiment, determining the position of the target pedestrian through two recognition modes helps avoid mistakenly tracking a similar obstacle as the target pedestrian, as can happen when only a lidar matching algorithm is used; that is, it avoids the mismatching problem, thereby improving the success rate and efficiency of following.
In order to make the above objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered limiting the scope, and that other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 shows a flow chart of an unmanned vehicle pedestrian-following driving method provided by an embodiment of the present application;
FIG. 2 illustrates a schematic diagram of a designated area provided by an embodiment of the present application;
fig. 3 is a schematic structural view of an unmanned vehicle pedestrian-following driving device according to an embodiment of the present disclosure;
fig. 4 shows a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, are intended to be within the scope of the present application.
The prior art is prone to mismatching, which causes tracking failure and affects following efficiency. In view of this, the embodiments of the present application provide a pedestrian-following driving method, device, electronic equipment and readable storage medium for an unmanned vehicle, so as to improve the following success rate and the following efficiency. The following is a description by way of examples.
Embodiment one:
For the convenience of understanding the present embodiment, a detailed description will be given of the unmanned vehicle pedestrian-following driving method disclosed in the present embodiment. The method is applied to a control system on an unmanned vehicle, and a laser radar sensor and a first base station and a second base station in an ultra-wideband sensor are installed on the unmanned vehicle; the positioning tag in the ultra-wideband sensor is arranged on the body of the target pedestrian; fig. 1 shows a flowchart of the pedestrian-following driving method provided in an embodiment of the present application, which, as shown in fig. 1, includes the following steps S101 to S103:
s101: when the target pedestrian is located in a designated area in front of the unmanned vehicle, a first position of the target pedestrian relative to the unmanned vehicle is determined according to the point cloud data for the target pedestrian read from the laser radar sensor, and a second position of the target pedestrian relative to the unmanned vehicle is determined according to the distance data between the target pedestrian and the first base station and the second base station read from the ultra-wideband sensor.
S102: it is determined whether the first location and the second location are the same.
S103: and when the first position and the second position are the same, controlling the unmanned vehicle to drive along with the target pedestrian.
In this embodiment, the unmanned vehicle is an unmanned or autonomously driven ground vehicle used for transporting goods, and the whole vehicle is small, that is, its overall size is smaller than a preset volume.
The laser radar sensor is detachably arranged at a designated position on the unmanned vehicle. The Ultra Wideband (UWB) sensor comprises a first base station, a second base station and a positioning tag, wherein the first base station and the second base station are respectively arranged at different positions on the unmanned vehicle, and the positioning tag is positioned on the body of a target pedestrian.
Fig. 2 is a schematic diagram of a designated area provided in the embodiment of the present application, and as shown in fig. 2, the designated area may be a circular area with a preset length as a radius, with a preset distance directly in front of the unmanned vehicle as a center.
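The membership test for such a designated area can be sketched as follows (a hypothetical illustration, not part of the patent; the center distance and radius values, and the function name, are assumptions):

```python
import math

def in_designated_area(px, py, center_dist=2.0, radius=1.5):
    """Check whether a point (px, py) in the vehicle frame lies inside the
    designated area: a circle of `radius` meters centered `center_dist`
    meters directly ahead of the vehicle (x lateral, y forward)."""
    return math.hypot(px, py - center_dist) <= radius
```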
When a target pedestrian is located in a designated area in front of the unmanned vehicle, the lidar sensor is used to collect point cloud data containing the designated area (i.e., point cloud data for the target pedestrian). And the control system of the unmanned vehicle reads the point cloud data from the laser radar sensor in real time, and determines the first position of the target pedestrian relative to the unmanned vehicle according to the point cloud data.
And, when the target pedestrian is located in the specified area in front of the unmanned vehicle, the positioning tag is used to transmit a first positioning signal (ultra-wideband signal) to the first base station, and transmit a second positioning signal (ultra-wideband signal) to the second base station. After receiving the first positioning signal, the first base station returns a third positioning signal to the positioning tag, and likewise, after receiving the second positioning signal, the second base station returns a fourth positioning signal to the positioning tag. The positioning tag calculates first distance data between the positioning tag and the first base station according to the round-trip time of the first positioning signal and the third positioning signal, and calculates second distance data between the positioning tag and the second base station according to the round-trip time of the second positioning signal and the fourth positioning signal. The control system of the unmanned vehicle reads the distance data (comprising the first distance data and the second distance data) between the target pedestrian and the first and second base stations from the ultra-wideband sensor in real time, and then determines a second position of the target pedestrian relative to the unmanned vehicle according to the distance data.
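The round-trip ranging and the two-base-station position calculation described above can be sketched as follows (an illustrative reconstruction under stated assumptions: the base stations are placed symmetrically at (±baseline/2, 0) in the vehicle frame, the tag is assumed to be ahead of the vehicle, and the function names and reply-delay handling are not specified by the patent):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def twr_distance(t_round, t_reply):
    """Single-sided two-way-ranging distance: the tag measures the total
    round-trip time and subtracts the base station's reply delay."""
    return C * (t_round - t_reply) / 2.0

def tag_position(d1, d2, baseline):
    """Locate the tag from its distances to two base stations mounted at
    (-baseline/2, 0) and (+baseline/2, 0) in the vehicle frame.
    Returns (x, y) with y >= 0 (the tag is assumed ahead of the vehicle)."""
    x = (d1 ** 2 - d2 ** 2) / (2.0 * baseline)
    y_sq = d1 ** 2 - (x + baseline / 2.0) ** 2
    if y_sq < 0:
        raise ValueError("inconsistent distances: circles do not intersect")
    return x, math.sqrt(y_sq)
```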
In this embodiment, when the first position and the second position are the same, the unmanned vehicle is controlled to travel with the person, which is favorable for avoiding the problem of failure of person following caused by incorrect pedestrian recognition when only using the laser radar sensor.
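In practice, "the same position" reported by two independent sensors means agreement within a tolerance rather than exact numerical equality. A minimal sketch of such a check (the tolerance value is an assumption for illustration):

```python
import math

def positions_agree(p1, p2, tol=0.3):
    """Judge whether the lidar-derived and UWB-derived positions (2D points
    in the same vehicle frame) refer to the same target: agreement within
    `tol` meters stands in for 'the first and second positions are the same'."""
    return math.dist(p1, p2) <= tol
```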
In one possible embodiment, considering that the lidar sensor and the ultra-wideband sensor are mounted at different positions on the unmanned vehicle, the distances they measure to a person standing in the same location may differ. Therefore, the data generated by the lidar sensor and the ultra-wideband sensor need to be calibrated to the same origin, i.e., converted into the same coordinate system. Thus, before executing step S101, the following step may also be executed:
S1001: converting the coordinate system corresponding to the laser radar sensor and the coordinate system corresponding to the ultra-wideband sensor into the same specified coordinate system. The specified coordinate system may be a coordinate system with the center of the unmanned vehicle as the origin of coordinates. The x-axis of the specified coordinate system may be the lateral direction of the unmanned vehicle, the y-axis may be the forward direction of the unmanned vehicle, and the z-axis may be the direction perpendicular to the ground.
Specifically, extrinsic calibration between the lidar sensor and the ultra-wideband sensor essentially obtains the displacement (x, y, z) and rotation (roll, pitch, yaw) between the two sensors, so that this transformation in three-dimensional space can be described with a homogeneous transformation matrix.

The homogeneous transformation matrix is a 4×4 matrix describing the translation and rotation between two coordinate frames. Given a translation $(x_t, y_t, z_t)$ and Euler angles $(\alpha, \beta, \gamma)$, the 3D homogeneous transformation matrix is:

$$T = \begin{bmatrix} R_z(\gamma)\,R_y(\beta)\,R_x(\alpha) & \mathbf{t} \\ \mathbf{0}^{\top} & 1 \end{bmatrix}, \qquad \mathbf{t} = \begin{bmatrix} x_t \\ y_t \\ z_t \end{bmatrix}$$

where $x_t$, $y_t$, $z_t$ are the translations along the x-, y- and z-axes respectively, $\alpha$, $\beta$, $\gamma$ are the rotation angles about the x-, y- and z-axes respectively, and $T$ represents the homogeneous transformation between the two coordinate systems, including both translation and rotation.
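The homogeneous transformation above can be constructed as follows (a sketch assuming the common Z-Y-X Euler composition; the patent does not fix a rotation order):

```python
import numpy as np

def homogeneous_transform(xt, yt, zt, roll, pitch, yaw):
    """Build the 4x4 homogeneous matrix T mapping points between two sensor
    frames, from translation (xt, yt, zt) and Euler angles in radians
    (roll about x, pitch about y, yaw about z)."""
    ca, sa = np.cos(roll), np.sin(roll)
    cb, sb = np.cos(pitch), np.sin(pitch)
    cg, sg = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx   # composed rotation
    T[:3, 3] = [xt, yt, zt]    # translation column
    return T
```

A point in homogeneous coordinates `[x, y, z, 1]` is mapped into the other frame by `T @ p`, so calibrating both sensors into the specified vehicle-centered coordinate system amounts to applying each sensor's own `T`.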
In a possible implementation manner, after performing step S102, the following steps may be further performed:
s104: when the first position and the second position are different, controlling the unmanned vehicle to stop running, and reporting warning information for representing the following failure to the designated terminal. The designated terminal may be a handheld terminal of the target pedestrian.
In one possible implementation, before step S101 is performed, the following steps S1002-S1004 may also be performed:
S1002: Read point cloud data for the target pedestrian from the lidar sensor, and read distance data between the target pedestrian and the first and second base stations from the ultra-wideband sensor;
S1003: Judge whether the point cloud data and the distance data have been read;
S1004: When the point cloud data and/or the distance data have not been read, report first prompt information indicating a data reading failure to the designated terminal.
When both the point cloud data and the distance data have been read, step S101 continues to be performed.
In this embodiment, when the point cloud data and/or the distance data cannot be read, the first prompt information is reported to the designated terminal so that the target pedestrian is prompted in time.
In one possible implementation, when step S101 is performed to determine the first position of the target pedestrian relative to the unmanned vehicle according to the point cloud data for the target pedestrian read from the lidar sensor, the following steps S1011-S1015 may be performed:
S1011: After the point cloud data for the target pedestrian is read from the lidar sensor, filter out the point cloud data outside the designated area to obtain first point cloud data within the designated area.
In this embodiment, since the target pedestrian is located within the designated area, filtering out the point cloud data outside the designated area helps reduce the amount of subsequent computation.
S1012: Cluster the first point cloud data so that first point cloud data with the same characteristics are grouped into one class, obtaining at least one point cloud cluster.
By way of example, the characteristics include human characteristics, animal characteristics, vehicle characteristics, building characteristics, tree characteristics, and the like.
S1013: Determine, from the obtained point cloud clusters, the target point cloud cluster corresponding to the target pedestrian according to preset characteristics of the target pedestrian.
S1014: Perform Kalman filtering on the first point cloud data in the target point cloud cluster to obtain second point cloud data.
In this embodiment, performing Kalman filtering on the first point cloud data in the target point cloud cluster helps reduce the deviation between the measured position and the actual position.
S1015: Determine the first position of the target pedestrian relative to the unmanned vehicle according to the second point cloud data.
Illustratively, the first position of the target pedestrian relative to the unmanned vehicle in the specified coordinate system is determined from the distance between the center of the point cloud cluster formed by the second point cloud data and the center of the unmanned vehicle, together with the position of that cluster center relative to the vehicle center.
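Steps S1011-S1015 can be sketched end to end as follows. This is a simplified illustration: the region bounds, the greedy single-link clustering, and the crude pedestrian test are all assumptions, and the Kalman filtering of step S1014 is omitted for brevity:

```python
import math

def roi_filter(points, x_range=(-2.0, 2.0), y_range=(0.0, 5.0)):
    """S1011: keep only the points inside the designated area ahead of the vehicle."""
    return [p for p in points
            if x_range[0] <= p[0] <= x_range[1] and y_range[0] <= p[1] <= y_range[1]]

def euclidean_cluster(points, eps=0.5):
    """S1012: greedy single-link clustering; a point joins the first cluster
    containing a neighbour closer than eps, otherwise it starts a new one."""
    clusters = []
    for p in points:
        for c in clusters:
            if any(math.dist(p, q) < eps for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

def pedestrian_cluster(clusters, min_pts=3, max_width=1.0):
    """S1013: pick the cluster whose size and lateral extent look pedestrian-like
    (a deliberately crude stand-in for the preset pedestrian characteristics)."""
    for c in clusters:
        xs = [p[0] for p in c]
        if len(c) >= min_pts and max(xs) - min(xs) <= max_width:
            return c
    return None

def first_position(cluster):
    """S1015: the cluster centroid, i.e. the target's position relative to the
    vehicle center at the origin."""
    n = len(cluster)
    return (sum(p[0] for p in cluster) / n, sum(p[1] for p in cluster) / n)

# A toy 2D scan: one point outside the area, three pedestrian points, one stray.
scan = [(-3.0, 1.0), (0.1, 2.0), (0.0, 2.1), (-0.1, 2.0), (1.9, 4.9)]
ped = pedestrian_cluster(euclidean_cluster(roi_filter(scan)))
print(first_position(ped))  # centroid near (0.0, 2.03)
```

In a real system the scan would be three-dimensional and the pedestrian test would use the preset features of step S1013, but the structure of the pipeline stays the same.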
In one possible embodiment, the distance data includes first distance data between the target pedestrian and the first base station and second distance data between the target pedestrian and the second base station, and the distance between the first base station and the second base station is a fixed distance. When step S101 is performed to determine the second position of the target pedestrian relative to the unmanned vehicle according to the distance data, read from the ultra-wideband sensor, between the target pedestrian and the first and second base stations, the following steps S1016-S1017 may be performed:
S1016: After the first distance data and the second distance data are read from the ultra-wideband sensor, perform Kalman filtering on the first distance data to obtain filtered third distance data, and perform Kalman filtering on the second distance data to obtain filtered fourth distance data;
S1017: Determine the second position of the target pedestrian relative to the unmanned vehicle according to the third distance data, the fourth distance data and the fixed distance.
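With two anchors a fixed distance apart, step S1017 reduces to intersecting two circles. A hedged sketch, assuming the base stations sit symmetrically about the vehicle center on the x-axis and taking the forward (positive-y) solution because the target pedestrian is in front of the vehicle; the Kalman filtering of step S1016 is assumed to have been applied to the distances already:

```python
import math

def uwb_position(d1, d2, baseline):
    """S1017: locate the tag from its distances d1, d2 to two anchors assumed
    to sit at (-baseline/2, 0) and (+baseline/2, 0) in the vehicle frame.
    Subtracting the two circle equations gives x = (d1^2 - d2^2) / (2 * baseline);
    the positive-y root resolves the front/back ambiguity, since the target
    pedestrian is ahead of the vehicle."""
    x = (d1 ** 2 - d2 ** 2) / (2.0 * baseline)
    y_squared = d1 ** 2 - (x + baseline / 2.0) ** 2
    if y_squared < 0.0:
        raise ValueError("distances inconsistent with the fixed baseline")
    return (x, math.sqrt(y_squared))

# A tag 3 m straight ahead of the midpoint of a 0.8 m baseline.
d = math.hypot(0.4, 3.0)  # true distance from either anchor
print(uwb_position(d, d, 0.8))  # approximately (0.0, 3.0)
```

Two anchors cannot distinguish the mirror solution behind the baseline, which is why the forward-only assumption (the pedestrian is in the designated area in front of the vehicle) matters here.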
In one possible embodiment, after step S103 is performed, the following steps S1051-S1056 may be performed:
S1051: When a travel suspension instruction is received, control the unmanned vehicle to temporarily stop traveling.
In this embodiment, the travel suspension instruction may be sent to the control system of the unmanned vehicle by the designated terminal.
S1052: While the unmanned vehicle is temporarily stopped, continue reading the point cloud data for the target pedestrian from the lidar sensor and the distance data between the target pedestrian and the first and second base stations from the ultra-wideband sensor.
In this embodiment, while the unmanned vehicle is temporarily stopped, the unmanned vehicle only reads the point cloud data and the distance data and is not controlled to travel.
S1053: When a travel resumption instruction is received, detect whether the point cloud data and the distance data have been read.
S1054: When the point cloud data and the distance data have been read, determine a first position of the target pedestrian relative to the unmanned vehicle according to the point cloud data, and determine a second position of the target pedestrian relative to the unmanned vehicle according to the distance data.
S1055: When the first position and the second position are the same, control the unmanned vehicle to travel following the target pedestrian.
S1056: When the point cloud data and/or the distance data have not been read, report second prompt information indicating a data reading failure to the designated terminal.
In one possible embodiment, a maximum distance between the unmanned vehicle and the target pedestrian is preset. Within the maximum distance, the control system outputs an angle and a speed to drive the chassis of the unmanned vehicle to turn and follow the target pedestrian; beyond the maximum distance, the control system outputs only the angle information, and the unmanned vehicle merely turns without following.
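This distance-gated behaviour can be sketched as a small controller. The gains, the following distance and the maximum distance below are invented illustrative values, not parameters from this application:

```python
import math

def follow_command(px, py, max_dist=5.0, follow_dist=1.5, k_speed=0.5):
    """Return a (steering_angle, speed) command toward the pedestrian at (px, py)
    in the vehicle frame (y forward, x lateral). Within max_dist the vehicle both
    turns and drives; beyond max_dist only the angle is acted on, so the vehicle
    turns in place without following."""
    angle = math.atan2(px, py)  # heading error toward the target
    dist = math.hypot(px, py)
    if dist > max_dist:
        return (angle, 0.0)
    # Proportional speed that reaches zero at the desired following distance.
    speed = k_speed * max(0.0, dist - follow_dist)
    return (angle, speed)

print(follow_command(0.0, 3.0))   # -> (0.0, 0.75)
print(follow_command(2.0, 10.0))  # beyond max_dist: steering angle only, speed 0.0
```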
Embodiment Two:
Fig. 3 is a schematic structural diagram of an apparatus for driving an unmanned vehicle following a pedestrian according to an embodiment of the present application. A lidar sensor and the first and second base stations of an ultra-wideband sensor are installed on the unmanned vehicle, and the positioning tag of the ultra-wideband sensor is arranged on a target pedestrian. As shown in Fig. 3, the apparatus includes:
a first determining module 301, configured to determine, when the target pedestrian is located in a designated area in front of the unmanned vehicle, a first position of the target pedestrian relative to the unmanned vehicle according to the point cloud data for the target pedestrian read from the lidar sensor, and to determine a second position of the target pedestrian relative to the unmanned vehicle according to the distance data between the target pedestrian and the first and second base stations read from the ultra-wideband sensor;
a first judging module 302, configured to judge whether the first position and the second position are the same; and
a first control module 303, configured to control the unmanned vehicle to travel following the target pedestrian when the first position and the second position are the same.
Optionally, the apparatus further includes:
a second control module, configured to control the unmanned vehicle to stop traveling when the first position and the second position are different, and to report warning information indicating a following failure to the designated terminal.
Optionally, the apparatus further includes:
a conversion module, configured to convert the coordinate system corresponding to the lidar sensor and the coordinate system corresponding to the ultra-wideband sensor into the same specified coordinate system.
Optionally, the apparatus further includes:
a first reading module, configured to read point cloud data for the target pedestrian from the lidar sensor and to read, from the ultra-wideband sensor, distance data between the target pedestrian and the first and second base stations;
a second judging module, configured to judge whether the point cloud data and the distance data have been read; and
a first reporting module, configured to report first prompt information indicating a data reading failure to the designated terminal when the point cloud data and/or the distance data have not been read.
Optionally, when determining the first position of the target pedestrian relative to the unmanned vehicle according to the point cloud data for the target pedestrian read from the lidar sensor, the first determining module 301 is specifically configured to:
after the point cloud data for the target pedestrian is read from the lidar sensor, filter out the point cloud data outside the designated area to obtain first point cloud data within the designated area;
cluster the first point cloud data so that first point cloud data with the same characteristics are grouped into one class, obtaining at least one point cloud cluster;
determine, from the point cloud clusters, the target point cloud cluster corresponding to the target pedestrian according to preset characteristics of the target pedestrian;
perform Kalman filtering on the first point cloud data in the target point cloud cluster to obtain second point cloud data; and
determine the first position of the target pedestrian relative to the unmanned vehicle according to the second point cloud data.
Optionally, the distance data includes first distance data between the target pedestrian and the first base station and second distance data between the target pedestrian and the second base station, and the distance between the first base station and the second base station is a fixed distance. When determining the second position of the target pedestrian relative to the unmanned vehicle according to the distance data read from the ultra-wideband sensor, the first determining module 301 is specifically configured to:
after the first distance data and the second distance data are read from the ultra-wideband sensor, perform Kalman filtering on the first distance data to obtain filtered third distance data, and perform Kalman filtering on the second distance data to obtain filtered fourth distance data; and
determine the second position of the target pedestrian relative to the unmanned vehicle according to the third distance data, the fourth distance data and the fixed distance.
Optionally, the apparatus further includes:
a third control module, configured to control the unmanned vehicle to temporarily stop traveling when a travel suspension instruction is received;
a second reading module, configured to, while the unmanned vehicle is temporarily stopped, continue reading the point cloud data for the target pedestrian from the lidar sensor and reading, from the ultra-wideband sensor, the distance data between the target pedestrian and the first and second base stations;
a detection module, configured to detect whether the point cloud data and the distance data have been read when a travel resumption instruction is received;
a second determining module, configured to determine, when the point cloud data and the distance data have been read, a first position of the target pedestrian relative to the unmanned vehicle according to the point cloud data and a second position of the target pedestrian relative to the unmanned vehicle according to the distance data;
a fourth control module, configured to control the unmanned vehicle to travel following the target pedestrian when the first position and the second position are the same; and
a second reporting module, configured to report second prompt information indicating a data reading failure to the designated terminal when the point cloud data and/or the distance data have not been read.
Embodiment Three:
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device includes a processor 401, a memory 402 and a bus 403. The memory 402 stores machine-readable instructions executable by the processor 401. When the electronic device runs, the processor 401 communicates with the memory 402 through the bus 403, and the processor 401 executes the machine-readable instructions to perform the method steps described in the first embodiment.
Embodiment Four:
The fourth embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, the method steps described in the first embodiment are performed.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working procedures of the apparatus, the electronic device and the computer-readable storage medium described above may refer to the corresponding procedures in the foregoing method embodiment, and are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative. The division into modules is merely a logical functional division, and there may be other divisions in actual implementation; for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such an understanding, the technical solution of the present application, or the part contributing to the prior art, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
Finally, it should be noted that the foregoing embodiments are merely specific implementations of the present application, intended to illustrate rather than limit its technical solutions, and the protection scope of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, any person skilled in the art may still modify the technical solutions described in the foregoing embodiments, easily conceive of changes, or make equivalent substitutions for some of the technical features within the technical scope disclosed in the present application. Such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application and are intended to be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method for driving an unmanned vehicle following a pedestrian, characterized in that a laser radar sensor and a first base station and a second base station of an ultra-wideband sensor are installed on the unmanned vehicle, and a positioning tag of the ultra-wideband sensor is arranged on a target pedestrian; the method comprises the following steps:
determining a first position of the target pedestrian relative to the unmanned vehicle according to the point cloud data for the target pedestrian read from the laser radar sensor when the target pedestrian is positioned in a designated area in front of the unmanned vehicle, and determining a second position of the target pedestrian relative to the unmanned vehicle according to the distance data between the target pedestrian and the first base station and the second base station read from the ultra-wideband sensor;
judging whether the first position and the second position are the same;
and when the first position and the second position are the same, controlling the unmanned vehicle to follow the target pedestrian to run.
2. The method according to claim 1, wherein the method further comprises:
and when the first position and the second position are different, controlling the unmanned vehicle to stop running, and reporting warning information for representing the following failure to a designated terminal.
3. The method according to claim 1, wherein the method further comprises:
and converting the coordinate system corresponding to the laser radar sensor and the coordinate system corresponding to the ultra-wideband sensor into the same appointed coordinate system.
4. The method according to claim 2, wherein the method further comprises:
reading point cloud data for the target pedestrian from the laser radar sensor, and reading distance data between the target pedestrian and the first base station and the second base station from the ultra-wideband sensor;
judging whether the point cloud data and the distance data are read or not;
and when the point cloud data and/or the distance data are not read, reporting first prompt information for indicating data reading failure to the appointed terminal.
5. The method according to claim 1, wherein the determining a first position of the target pedestrian relative to the unmanned vehicle according to the point cloud data for the target pedestrian read from the lidar sensor comprises:
after the point cloud data aiming at the target pedestrian are read from the laser radar sensor, filtering out the point cloud data outside the appointed area in the point cloud data to obtain first point cloud data in the appointed area;
clustering the first point cloud data to gather the first point cloud data with the same characteristics into one type to obtain at least one point cloud cluster;
determining a target point cloud cluster corresponding to the target pedestrian from all the point cloud clusters according to the preset characteristics of the target pedestrian;
carrying out Kalman filtering on the first point cloud data in the target point cloud cluster to obtain second point cloud data;
and determining a first position of the target pedestrian relative to the unmanned vehicle according to the second point cloud data.
6. The method of claim 1, wherein the distance data comprises first distance data between the target pedestrian and the first base station and second distance data between the target pedestrian and the second base station; the distance between the first base station and the second base station is a fixed distance; the determining a second position of the target pedestrian relative to the unmanned vehicle according to the distance data of the target pedestrian, the first base station and the second base station, which are read from the ultra-wideband sensor, includes:
after the first distance data and the second distance data are read from the ultra-wideband sensor, performing Kalman filtering on the first distance data to obtain filtered third distance data, and performing Kalman filtering on the second distance data to obtain filtered fourth distance data;
and determining a second position of the target pedestrian relative to the unmanned vehicle according to the third distance data, the fourth distance data and the fixed distance.
7. The method according to claim 2, wherein the method further comprises:
when a travel suspension instruction is received, controlling the unmanned vehicle to temporarily stop traveling;
continuously reading point cloud data aiming at the target pedestrian from the laser radar sensor and distance data of the target pedestrian, the first base station and the second base station from the ultra-wideband sensor in the process that the unmanned vehicle temporarily stops running;
when a continuous driving instruction is received, detecting whether the point cloud data and the distance data are read;
when the point cloud data and the distance data are read, determining a first position of the target pedestrian relative to the unmanned vehicle according to the point cloud data, and determining a second position of the target pedestrian relative to the unmanned vehicle according to the distance data;
when the first position and the second position are the same, controlling the unmanned vehicle to follow the target pedestrian to run;
and when the point cloud data and/or the distance data are not read, reporting second prompt information for indicating data reading failure to the appointed terminal.
8. An apparatus for driving an unmanned vehicle following a pedestrian, characterized in that a laser radar sensor and a first base station and a second base station of an ultra-wideband sensor are installed on the unmanned vehicle, and a positioning tag of the ultra-wideband sensor is arranged on a target pedestrian; the apparatus comprises:
a first determining module configured to determine, when the target pedestrian is located in a specified area in front of the unmanned vehicle, a first position of the target pedestrian with respect to the unmanned vehicle according to the point cloud data for the target pedestrian read from the lidar sensor, and determine a second position of the target pedestrian with respect to the unmanned vehicle according to the distance data between the target pedestrian and the first and second base stations read from the ultra-wideband sensor;
the first judging module is used for judging whether the first position and the second position are the same or not;
and the first control module is used for controlling the unmanned vehicle to follow the target pedestrian to run when the first position and the second position are the same.
9. An electronic device, comprising: a processor, a memory and a bus, said memory storing machine-readable instructions executable by said processor, said processor and said memory communicating over the bus when the electronic device is running, said machine-readable instructions when executed by said processor performing the steps of the method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that it has stored thereon a computer program which, when executed by a processor, performs the steps of the method according to any of claims 1 to 7.
CN202310287652.7A 2023-03-22 2023-03-22 Unmanned vehicle heel pedestrian driving method and device, electronic equipment and readable storage medium Pending CN116301061A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310287652.7A CN116301061A (en) 2023-03-22 2023-03-22 Unmanned vehicle heel pedestrian driving method and device, electronic equipment and readable storage medium


Publications (1)

Publication Number Publication Date
CN116301061A (en) 2023-06-23

Family

ID=86814767

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310287652.7A Pending CN116301061A (en) 2023-03-22 2023-03-22 Unmanned vehicle heel pedestrian driving method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN116301061A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117152197A (en) * 2023-10-30 2023-12-01 成都睿芯行科技有限公司 Method and system for determining tracking object and method and system for tracking


Similar Documents

Publication Publication Date Title
US11188092B2 (en) Modifying behavior of autonomous vehicles based on sensor blind spots and limitations
US11731629B2 (en) Robust method for detecting traffic signals and their associated states
US11327501B1 (en) Detecting sensor degradation by actively controlling an autonomous vehicle
US9600768B1 (en) Using behavior of objects to infer changes in a driving environment
US11052944B2 (en) Auto docking method for application in heavy trucks
US9395192B1 (en) Methods and systems for road and lane boundary tracing
JP2023536407A (en) Drivable surface identification technology
US8948958B1 (en) Estimating road lane geometry using lane marker observations
US11671564B2 (en) Adjusting vehicle sensor field of view volume
KR20140138762A (en) Detecting lane markings
EP3864438A1 (en) Detecting spurious objects for autonomous vehicles
JP2015125760A (en) Mine work machine
CN112977411A (en) Intelligent chassis control method and device
CN116301061A (en) Unmanned vehicle heel pedestrian driving method and device, electronic equipment and readable storage medium
CN112734811B (en) Obstacle tracking method, obstacle tracking device and chip
CN115438430B (en) Mining area vehicle driving stability prediction method and device
CN112183157A (en) Road geometry identification method and device
JP2024037166A (en) Methods, equipment, storage media and vehicle control methods for modeling objects
CN117991287A (en) Road edge detection method and device, electronic equipment and mobile equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination