CN109765886B - Target track identification method followed by vehicle - Google Patents


Info

Publication number
CN109765886B
Authority
CN
China
Prior art keywords: information, following target, vehicle, target, image information
Prior art date
Legal status: Active
Application number
CN201811572884.2A
Other languages
Chinese (zh)
Other versions
CN109765886A (en
Inventor
张德兆
王肖
张放
李晓飞
霍舒豪
Current Assignee: Beijing Idriverplus Technologies Co Ltd
Original Assignee: Beijing Idriverplus Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Idriverplus Technologies Co Ltd filed Critical Beijing Idriverplus Technologies Co Ltd
Priority to CN201811572884.2A
Publication of CN109765886A
Application granted
Publication of CN109765886B

Abstract

The invention provides a method for identifying the track of a target followed by a vehicle, which comprises the following steps: acquiring position information of the vehicle; obtaining map data according to the position information of the vehicle; acquiring multiple frames of image information of the following target within a preset time period, each frame of image information including the time at which it was acquired; fitting the multi-frame image information to the map data according to the time information to obtain position information of the following target corresponding to each piece of time information; and splicing the plurality of pieces of position information of the following target to obtain the target track of the following target. The time needed to acquire the track is thereby saved, and track-acquisition efficiency is improved.

Description

Target track identification method followed by vehicle
Technical Field
The invention relates to the field of image and data processing, in particular to a target track identification method followed by a vehicle.
Background
With the development of the economy and the rise of artificial intelligence, autonomous driving technology is increasingly favored by the market. Applying autonomous driving technology to the cleaning field has produced automatic cleaning devices.
In the prior art, an automatic cleaning device acquires the track of a moving object in front of it through a series of complex algorithms combining data from its multiple sensors. This approach suffers from a heavy computational load, low computational efficiency, and similar defects.
Disclosure of Invention
The embodiment of the invention aims to provide a target track identification method followed by a vehicle, so as to solve the problems in the prior art.
In order to solve the above problem, the present invention provides a target trajectory identification method followed by a vehicle, including:
acquiring position information of a vehicle;
obtaining map data according to the position information of the vehicle;
acquiring multi-frame image information of a following target followed by a vehicle within a preset time length; each frame of image information comprises time information when the image information is acquired;
according to the time information, fitting the multi-frame image information and the map data to obtain position information of the following target corresponding to each time information;
and splicing the plurality of position information of the following target to obtain a target track of the following target.
Preferably, before acquiring the multi-frame image information of the following target followed by the vehicle within the preset time period, the method further includes:
acquiring video information of the following target through a first acquisition device;
and processing the video information to obtain the image information of the following target.
Preferably, the method further comprises:
collecting environment perception data of the following target through a second collecting device;
processing the environmental perception data to generate laser point cloud data;
and correcting the image information according to the laser point cloud data.
Preferably, the fitting processing, performed on the multiple frames of image information and the map data according to the time information, to obtain the position information of the following target corresponding to each piece of time information specifically includes:
processing each frame of image information to acquire environmental data in each frame of image information;
fitting the environmental data and the map data, and determining the position of the environmental data in the map data when the environmental data and the map data are overlapped;
and determining the position information of the following target corresponding to each frame of image according to the position.
Preferably, the splicing the plurality of position information of the following target to obtain the target track of the following target specifically includes:
sequencing the position information of each following target according to the time information;
and splicing according to the sequencing result to obtain the target track of the following target.
Preferably, the method further comprises:
acquiring the distance between the vehicle and the following target;
when the distance is not smaller than a preset distance threshold, generating first warning information;
and playing the first warning information through an audio playing unit of the vehicle.
Preferably, the method further comprises the following steps:
when the distance is not smaller than a preset distance threshold value, generating second warning information; the second warning information comprises an expected waiting time of a following target;
and sending the second warning information to the following target so that the following target waits according to the expected waiting time.
Preferably, image information of the following target is acquired at the moment the distance first becomes not smaller than the preset distance threshold and at the moment immediately before;
the two frames of image information are processed to determine speed information of the following target;
and the expected waiting time is calculated according to the speed information of the following target, the speed information of the vehicle, and the safe distance.
Preferably, when the first acquisition device is a binocular camera, the distance between the vehicle and the following target is calculated according to parameters of the binocular camera and the image information of each frame.
Preferably, the distance between the vehicle and the following target is calculated from the ultrasonic data acquired by the third acquisition device.
By applying the target track identification method followed by the vehicle provided by the invention, the position information of the vehicle is obtained; obtaining map data according to the position information of the vehicle; acquiring multi-frame image information of a following target followed by a vehicle within a preset time length; each frame of image information comprises time information when the image information is acquired; according to the time information, fitting the multi-frame image information and the map data to obtain position information of the following target corresponding to each time information; and splicing the plurality of position information of the following target to obtain a target track of the following target. Therefore, the time for acquiring the track is saved, and the track acquisition efficiency is improved.
Drawings
Fig. 1 is a schematic flow chart of a target trajectory identification method followed by a vehicle according to an embodiment of the present invention.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be further noted that, for the convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Before the method provided by the invention is applied, the following target is determined. The following target may be determined by image feature comparison. For example, when the method is applied to suspect tracking, the suspect's license plate number, facial features, and image information are stored in a memory of the vehicle. The vehicle matches the acquired image information against the prestored license plate number or facial features; when the matching degree exceeds a certain value, a second round of matching is performed through a server for confirmation, and once the object is confirmed as the suspect, it is followed. When the method is applied to the cleaning field, the features of a cleaner may be stored in the memory of the vehicle, or the ID of a terminal carried by the cleaner may be bound to the ID of the vehicle in advance, and the following target is determined after interaction through a server.
Fig. 1 is a schematic flow chart of a target trajectory identification method followed by a vehicle according to an embodiment of the present invention. The method is applied to an automatic driving vehicle, in particular to an automatic driving cleaning vehicle, the cleaning vehicle can obtain a target track of a following target, and corresponding processing, such as following and the like, is carried out according to the target track. The execution subject of the method may be a control unit of an autonomous vehicle. A vehicle control unit may be understood as a control module for controlling the travel of a vehicle. As shown in fig. 1, the method comprises the steps of:
step 101, obtaining position information of a vehicle.
Specifically, the position information of the vehicle may be acquired by a positioning module on the vehicle, such as a Global Positioning System (GPS) receiver. Alternatively, the position information of the vehicle may be obtained by sending a query message to a server and parsing the response message, sent by the server, that carries the position information.
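The two position sources just described (an onboard GPS fix, or a server query/response) can be sketched as a simple fallback chain. The function and parameter names below are illustrative assumptions, not from the patent:

```python
from typing import Callable, Optional, Tuple

LatLon = Tuple[float, float]

def get_vehicle_position(
    gps_fix: Optional[LatLon],
    query_server: Optional[Callable[[], LatLon]] = None,
) -> LatLon:
    """Return the vehicle's (latitude, longitude).

    Prefer the onboard positioning module (e.g. a GPS fix); if no fix is
    available, fall back to querying a server, modelled here as a callable
    that returns the position parsed from the response message.
    """
    if gps_fix is not None:
        return gps_fix
    if query_server is not None:
        return query_server()
    raise RuntimeError("no position source available")
```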
And 102, acquiring map data according to the position information of the vehicle.
Specifically, when the vehicle is at a certain position, the map of that position may be loaded according to the position information. For example, if the vehicle is located on street A and the latitude and longitude of the vehicle are known, the map of city A (the administrative unit above street A) may be loaded; alternatively, the map data for a certain range around the vehicle's latitude and longitude may be loaded. The map may be downloaded from a server or preloaded on the vehicle, which is not limited in the present application.
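Loading map data for a latitude/longitude range around the vehicle might look like the following sketch. The bounding-box tile scheme and field names are assumptions for illustration; the patent does not specify a map format:

```python
def load_map_data(tiles, lat, lon, radius_deg=0.01):
    """Select the map tiles whose bounding boxes intersect a square window
    of +/- radius_deg degrees around the vehicle's (lat, lon).

    Each tile is a dict with 'lat_min', 'lat_max', 'lon_min', 'lon_max'
    and a 'data' payload (illustrative schema, not from the patent).
    """
    selected = []
    for tile in tiles:
        if (tile["lat_min"] <= lat + radius_deg
                and tile["lat_max"] >= lat - radius_deg
                and tile["lon_min"] <= lon + radius_deg
                and tile["lon_max"] >= lon - radius_deg):
            selected.append(tile)
    return selected
```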
And 103, acquiring multi-frame image information of the following target followed by the vehicle within a preset time length.
The following target differs according to the following scene. When the method is applied in the cleaning field, the following target may be a cleaning worker or a cleaning vehicle; when the method is applied in suspect tracking, the following target may be a suspect or a suspect vehicle.
The preset time duration may be 10 minutes, and within 10 minutes, the multi-frame image information is obtained. Each frame of image information includes time information when the image information is acquired.
Specifically, the vehicle is provided with a first acquisition device for acquiring video information of the following target. The first acquisition device may be a camera; because the distance between the vehicle and the following target must later be calculated, the camera may be a binocular camera.
The binocular camera can process the collected video data and extract video frames, from which the image information is obtained.
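Extracting timestamped frames from the video can be sketched as computing, over the preset duration, which frame indices to keep and the acquisition time of each, so that every retained frame of image information carries its time information. The fps and sampling period values below are illustrative assumptions:

```python
def sample_frames(start_time_s, duration_s, fps, sample_period_s):
    """Return (frame_index, timestamp) pairs sampled every sample_period_s
    seconds from a video that starts at start_time_s and runs at fps.

    Each retained frame thus carries the time at which it was acquired,
    as required for the later fitting step.
    """
    samples = []
    t = 0.0
    while t <= duration_s:
        frame_index = int(round(t * fps))  # nearest frame at elapsed time t
        samples.append((frame_index, start_time_s + t))
        t += sample_period_s
    return samples
```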
Further, in addition to the first acquisition device, the vehicle carries other acquisition devices, such as a second acquisition device, which may be a lidar. The image information can be corrected with the lidar data.
Specifically, processing environment perception data to generate laser point cloud data;
and correcting the image information according to the laser point cloud data.
Here, the correction may be regarded as a fusion process. The fusion process may be, for example, a detail enhancement algorithm to perform detail enhancement.
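One hedged reading of this correction step is a weighted fusion of camera depth estimates with sparse but accurate lidar returns. The patent says only that the image information is "corrected" (fused) with the laser point cloud; the per-pixel weighting below is purely an illustrative assumption:

```python
def correct_depths(camera_depths, lidar_depths, lidar_weight=0.8):
    """Correct per-pixel camera depth estimates with sparse lidar returns.

    camera_depths: {pixel: depth_m} dense estimates from image processing.
    lidar_depths:  {pixel: depth_m} sparse lidar point-cloud returns.
    Where a lidar return exists, blend it with the camera estimate;
    otherwise keep the camera estimate (weighting scheme is assumed).
    """
    corrected = dict(camera_depths)
    for px, lidar_d in lidar_depths.items():
        if px in corrected:
            corrected[px] = lidar_weight * lidar_d + (1 - lidar_weight) * corrected[px]
        else:
            corrected[px] = lidar_d
    return corrected
```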
And step 104, fitting the multi-frame image information and the map data according to the time information to obtain the position information of the following target corresponding to each time information.
Specifically, when the following target cannot interact with the vehicle for some reason, the position information of the following target can be acquired by processing the acquired image information. The position information may be acquired by the following steps.
Firstly, each frame of image information is processed, and environmental data in each frame of image information is acquired.
Then, the environment data and the map data are fitted, and when the environment data and the map data coincide, the position of the environment data in the map data is determined.
And finally, determining the position information of the following target corresponding to each frame of image according to the position.
The image information includes environment data, such as building identification, traffic identification, road identification, and the like.
After the environment data and the map data are fitted, features common to both, such as obstacles, can be processed jointly, and the position information of the following target can be calculated.
The obstacle information here may be fixed obstacles, such as a building on the map, a fixed traffic sign (e.g., a pole supporting a traffic light), or a fixed object (e.g., a stationary vehicle, a pedestrian, a road edge). This obstacle information can be obtained directly from the image information and the map data.
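Under a simplifying pure-translation model, the fitting step can be sketched as matching landmark features between a frame and the map, averaging the per-landmark offsets, and applying the resulting offset to the target's observed position. The landmark ids and the translation-only model are assumptions for illustration; the patent does not specify the fitting algorithm:

```python
def fit_offset(observed, map_landmarks):
    """Estimate the vehicle-frame -> map-frame translation.

    observed:      {landmark_id: (x, y)} landmark positions seen in a frame.
    map_landmarks: {landmark_id: (x, y)} the same landmarks in the map.
    The offset is the mean displacement over matched landmarks.
    """
    common = [lid for lid in observed if lid in map_landmarks]
    if not common:
        raise ValueError("no common landmarks to fit")
    dx = sum(map_landmarks[l][0] - observed[l][0] for l in common) / len(common)
    dy = sum(map_landmarks[l][1] - observed[l][1] for l in common) / len(common)
    return (dx, dy)

def locate_target(target_obs, offset):
    """Map the following target's observed position into map coordinates."""
    return (target_obs[0] + offset[0], target_obs[1] + offset[1])
```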
And 105, splicing the position information of the following target to obtain a target track of the following target.
Specifically, the position information of each following target may be sorted according to the time information; and splicing according to the sequencing result to obtain a target track of the following target.
For example, suppose the acquired position information comprises points 1, 2, 3, 4, and 5, with corresponding times 10:51, 10:52, 10:54, 10:53, and 10:55. Sorting by time orders the position information as 1, 2, 4, 3, 5, and splicing the sorted positions yields the target track.
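The sorting-and-splicing step of this example can be reproduced directly; only the "HH:MM" time format is assumed:

```python
def splice_trajectory(samples):
    """Sort (time 'HH:MM', position) samples by time and return the
    ordered list of positions, i.e. the spliced target track."""
    def minutes(hhmm):
        h, m = hhmm.split(":")
        return int(h) * 60 + int(m)
    return [pos for _, pos in sorted(samples, key=lambda s: minutes(s[0]))]
```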
Furthermore, before the target track is generated, in order to improve the accuracy of the position information, the determined position information of the following target can be corrected by using the environmental perception data of the second acquisition device.
Specifically, the sensing module may be a lidar installed on the vehicle. During driving, surrounding obstacle information is perceived, such as lane lines, moving obstacles, and changing traffic lights; this perceived information is combined with the obstacle information described above, and after fusion processing the final obstacle information, called the target obstacle information, is obtained.
And correcting the track overlapped with the target obstacle in the spliced tracks according to the information of the target obstacle, thereby generating the target track.
Further, after step 105, the method further includes: acquiring the distance between a vehicle and a following target;
when the distance is not smaller than a preset distance threshold, generating first warning information;
the first warning information is played through an audio playing unit of the vehicle.
Specifically, when the distance between the vehicle and the following target is too large, exceeding the distance threshold, a control signal is generated after the distance is calculated; this control signal controls the audio playing unit to play the first warning information. The first warning information may be a voice broadcast or a specific sound.
In one example, when the first acquisition device is a binocular camera, the distance between the vehicle and the following target is calculated through parameters of the binocular camera and each frame of image information. The distance between the vehicle and the following target can be calculated by using a binocular distance measuring principle of a binocular camera.
In another example, the distance between the vehicle and the following target is calculated from the ultrasonic data acquired by the third acquisition means. The third acquisition device is an ultrasonic radar, and can calculate the distance between the vehicle and the following target by utilizing ultrasonic ranging.
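Both ranging methods reduce to short formulas: binocular depth Z = f·B/d from the pixel disparity, and ultrasonic distance as half the round-trip echo time times the speed of sound. The sketch below assumes these textbook formulas; the patent does not give the exact computation:

```python
def stereo_distance_m(focal_px, baseline_m, disparity_px):
    """Binocular ranging: Z = f * B / d, where f is the focal length in
    pixels, B the baseline between the two cameras in metres, and d the
    disparity of the target between left and right images in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def ultrasonic_distance_m(echo_time_s, speed_of_sound_mps=343.0):
    """Ultrasonic ranging: the pulse travels to the target and back, so
    the distance is half the round-trip time times the speed of sound."""
    return speed_of_sound_mps * echo_time_s / 2.0
```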
Further, the method further comprises the following steps:
when the distance is not smaller than a preset distance threshold, generating second warning information; the second warning information includes an expected waiting time period for the following target;
and sending the second warning information to the following target so that the following target waits according to the expected waiting time.
Specifically, when the following target is a cleaning vehicle or a cleaning worker, if the distance between the following target and the vehicle exceeds the distance threshold, the vehicle may generate second warning information, and the second warning information may include the expected waiting time.
The vehicle can calculate the position information and speed information of the following target from the image information acquired at the moment the distance exceeded the threshold and at the previous moment, and can calculate the expected waiting time of the following target according to that speed information, the speed information of the vehicle, and the safe distance during driving.
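A minimal sketch of this estimate, assuming the target's speed is taken from its displacement between the two frames and the gap closes at the difference of the two speeds. The closing-speed model is an assumption; the patent states only that the expected waiting time is computed from the two speeds and the safe distance:

```python
def estimate_speed_mps(pos_prev, pos_now, dt_s):
    """Estimate the target's speed from its positions in the frame taken
    just before the threshold was exceeded and the frame taken when it
    was exceeded, separated by dt_s seconds."""
    dx = pos_now[0] - pos_prev[0]
    dy = pos_now[1] - pos_prev[1]
    return (dx * dx + dy * dy) ** 0.5 / dt_s

def expected_wait_s(gap_m, safe_gap_m, vehicle_speed_mps, target_speed_mps=0.0):
    """Time for the gap to shrink back to the safe distance, assuming the
    gap closes at the vehicle speed minus the target speed (the target
    waits, or at least moves slower than the vehicle)."""
    closing = vehicle_speed_mps - target_speed_mps
    if closing <= 0:
        raise ValueError("vehicle is not closing the gap")
    return max(gap_m - safe_gap_m, 0.0) / closing
```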
By applying the target track identification method for vehicle following provided by the embodiment of the invention, the track of the following target can be obtained directly through image information, the time for obtaining the track is saved, and the track obtaining efficiency is improved.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, a software module executed by a processor, or a combination of the two. A software module may reside in random access memory (RAM), flash memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above embodiments are provided to further explain the objects, technical solutions and advantages of the present invention in detail, it should be understood that the above embodiments are merely exemplary embodiments of the present invention and are not intended to limit the scope of the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A target track identification method followed by a vehicle is applied to an automatic driving vehicle, and comprises the following steps:
acquiring position information of a vehicle;
obtaining map data according to the position information of the vehicle;
acquiring multi-frame image information of a following target followed by a vehicle within a preset time length; each frame of image information comprises time information when the image information is acquired;
according to the time information, fitting the multi-frame image information and the map data to obtain position information of the following target corresponding to each time information;
and splicing the plurality of position information of the following target to obtain a target track of the following target.
2. The method according to claim 1, wherein before acquiring the plurality of frames of image information of the following target followed by the vehicle within the preset time period, the method further comprises:
acquiring video information of the following target through a first acquisition device;
and processing the video information to obtain the image information of the following target.
3. The method of claim 2, further comprising:
collecting environment perception data of the following target through a second collecting device;
processing the environmental perception data to generate laser point cloud data;
and correcting the image information according to the laser point cloud data.
4. The method according to claim 1, wherein the fitting processing is performed on the plurality of frames of image information and the map data according to the time information to obtain the position information of the following target corresponding to each piece of time information, specifically including:
processing each frame of image information to acquire environmental data in each frame of image information;
fitting the environmental data and the map data, and determining the position of the environmental data in the map data when the environmental data and the map data are overlapped;
and determining the position information of the following target corresponding to each frame of image according to the position.
5. The method according to claim 1, wherein the splicing the plurality of position information of the following target to obtain the target track of the following target specifically comprises:
sequencing the position information of each following target according to the time information;
and splicing according to the sequencing result to obtain the target track of the following target.
6. The method of claim 1, further comprising:
acquiring the distance between the vehicle and the following target;
when the distance is not smaller than a preset distance threshold, generating first warning information;
and playing the first warning information through an audio playing unit of the vehicle.
7. The method of claim 6, further comprising:
when the distance is not smaller than a preset distance threshold value, second warning information is generated; the second warning information comprises an expected waiting time of a following target;
and sending the second warning information to the following target so that the following target waits according to the expected waiting time.
8. The method according to claim 7, wherein image information of the following target is acquired at the moment the distance first becomes not smaller than the preset distance threshold and at the moment immediately before;
the two frames of image information are processed to determine speed information of the following target;
and calculating the predicted waiting time according to the speed information of the following target, the speed information of the vehicle and the safe distance.
9. The method according to any one of claims 6 to 8, wherein when the first acquisition device is a binocular camera, the distance between the vehicle and the following target is calculated by parameters of the binocular camera and the image information of each frame.
10. The method according to any one of claims 6 to 8, wherein the distance between the vehicle and the following target is calculated from the ultrasonic data acquired by the third acquisition means.
CN201811572884.2A 2018-12-21 2018-12-21 Target track identification method followed by vehicle Active CN109765886B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811572884.2A CN109765886B (en) 2018-12-21 2018-12-21 Target track identification method followed by vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811572884.2A CN109765886B (en) 2018-12-21 2018-12-21 Target track identification method followed by vehicle

Publications (2)

Publication Number Publication Date
CN109765886A CN109765886A (en) 2019-05-17
CN109765886B true CN109765886B (en) 2022-05-24

Family

ID=66450857

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811572884.2A Active CN109765886B (en) 2018-12-21 2018-12-21 Target track identification method followed by vehicle

Country Status (1)

Country Link
CN (1) CN109765886B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111368260B (en) * 2020-03-18 2023-09-05 东软睿驰汽车技术(上海)有限公司 Vehicle following method and device
CN112810625B (en) * 2021-04-19 2021-07-30 北京三快在线科技有限公司 Method and device for correcting track

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101769745A (en) * 2009-01-07 2010-07-07 宏达国际电子股份有限公司 Mobile target tracking and navigation method, device thereof and computer program product used
CN105015547A (en) * 2014-04-28 2015-11-04 丰田自动车株式会社 Driving assistance apparatus
CN106289295A (en) * 2016-08-30 2017-01-04 深圳市轱辘车联数据技术有限公司 The vehicle follower method of a kind of self-driving travel and device
CN106384540A (en) * 2016-10-20 2017-02-08 深圳市元征科技股份有限公司 Vehicle real-time track prediction method and prediction system
CN108362293A (en) * 2018-02-24 2018-08-03 中电福富信息科技有限公司 A kind of track of vehicle matching process based on key point technology

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7268700B1 (en) * 1998-01-27 2007-09-11 Hoffberg Steven M Mobile communication device
WO2016178294A1 (en) * 2015-05-07 2016-11-10 ヤンマー株式会社 Induction control system for autonomous-traveling vehicle

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101769745A (en) * 2009-01-07 2010-07-07 宏达国际电子股份有限公司 Mobile target tracking and navigation method, device thereof and computer program product used
CN105015547A (en) * 2014-04-28 2015-11-04 丰田自动车株式会社 Driving assistance apparatus
CN106289295A (en) * 2016-08-30 2017-01-04 深圳市轱辘车联数据技术有限公司 The vehicle follower method of a kind of self-driving travel and device
CN106384540A (en) * 2016-10-20 2017-02-08 深圳市元征科技股份有限公司 Vehicle real-time track prediction method and prediction system
CN108362293A (en) * 2018-02-24 2018-08-03 中电福富信息科技有限公司 A kind of track of vehicle matching process based on key point technology

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Vector Field Based Sliding Mode Control of Curved Path Following for Miniature Unmanned Aerial Vehicles in Winds; WANG Yajing et al.; Journal of Systems Science & Complexity; 2018-02-15 (No. 01); 306-328 *
Research on Multi-source Traffic Information Collection, Processing and Publishing System; Cao Yongjun et al.; Automation & Information Engineering; 2011-02-15 (No. 01); 28-32 *

Also Published As

Publication number Publication date
CN109765886A (en) 2019-05-17

Similar Documents

Publication Publication Date Title
US9937922B2 (en) Collision avoidance using auditory data augmented with map data
CN107527092B (en) Training algorithms for collision avoidance using auditory data
CN110045729B (en) Automatic vehicle driving method and device
CN111695546B (en) Traffic signal lamp identification method and device for unmanned vehicle
CN108513059B (en) Image processing method, device and automatic driving vehicle
US11487988B2 (en) Augmenting real sensor recordings with simulated sensor data
CN109682388B (en) Method for determining following path
WO2021063006A1 (en) Driving early warning method and apparatus, electronic device, and computer storage medium
CN112069643B (en) Automatic driving simulation scene generation method and device
CN113646772A (en) Predicting three-dimensional features for autonomous driving
EP3700198B1 (en) Imaging device, image processing apparatus, and image processing method
GB2495807A (en) Activating image processing in a vehicle safety system based on location information
CN109740461B (en) Object and subsequent processing method
CN112753038B (en) Method and device for identifying lane change trend of vehicle
CN110696826B (en) Method and device for controlling a vehicle
CN109765886B (en) Target track identification method followed by vehicle
CN113112524B (en) Track prediction method and device for moving object in automatic driving and computing equipment
CN111539268A (en) Road condition early warning method and device during vehicle running and electronic equipment
CN115792945B (en) Floating obstacle detection method and device, electronic equipment and storage medium
CN109344776B (en) Data processing method
CN112526477B (en) Method and device for processing information
CN112698315B (en) Mobile equipment positioning system, method and equipment
KR102448164B1 (en) Apparatus and method for controlling a vehicle radar
JP6933069B2 (en) Pathfinding device
US20240013560A1 (en) Annotation of objects in image frames

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP01 Change in the name or title of a patent holder
CP01 Change in the name or title of a patent holder

Address after: B4-006, maker Plaza, 338 East Street, Huilongguan town, Changping District, Beijing 100096

Patentee after: Beijing Idriverplus Technology Co.,Ltd.

Address before: B4-006, maker Plaza, 338 East Street, Huilongguan town, Changping District, Beijing 100096

Patentee before: Beijing Idriverplus Technology Co.,Ltd.