CN109188390A - High-precision moving-target detection and tracking method - Google Patents

High-precision moving-target detection and tracking method

Info

Publication number
CN109188390A
CN109188390A (application CN201810925487.2A; granted as CN109188390B)
Authority
CN
China
Prior art keywords
point
vehicle
background frames
target frame
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810925487.2A
Other languages
Chinese (zh)
Other versions
CN109188390B (en)
Inventor
郑建颖
张桢瑶
王翔
陶砚蕴
范学良
徐浩
俄文娟
陈蓉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou University
Zhangjiagang Institute of Industrial Technologies Soochow University
Original Assignee
Suzhou University
Zhangjiagang Institute of Industrial Technologies Soochow University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou University, Zhangjiagang Institute of Industrial Technologies Soochow University filed Critical Suzhou University
Priority to CN201810925487.2A priority Critical patent/CN109188390B/en
Publication of CN109188390A publication Critical patent/CN109188390A/en
Application granted granted Critical
Publication of CN109188390B publication Critical patent/CN109188390B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 - Details of systems according to group G01S17/00
    • G01S7/4802 - using analysis of echo signal for target characterisation; target signature; target cross-section
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/66 - Tracking systems using electromagnetic waves other than radio waves
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Abstract

The invention discloses a high-precision moving-target detection and tracking method, comprising: A0: extracting vehicle points and pedestrian points from the raw data by a background filtering algorithm, and labelling each point with the frame it belongs to; A1: fusing the vehicle-point and pedestrian-point data of consecutive frames, processing the fused data with a clustering algorithm, and labelling each point with its cluster; A2: using the per-point frame labels to separate the data of each frame again, and identifying and associating vehicles and pedestrians according to the cluster labels. The advantage of the invention is that no complex vehicle model or vehicle-trajectory model needs to be constructed: a simple clustering algorithm is used, the accuracy of vehicle detection is improved on top of that simple algorithm, and vehicle trajectories can be extracted directly.

Description

High-precision moving-target detection and tracking method
Technical field
The invention belongs to the field of intelligent transportation technology, and in particular relates to a high-precision moving-target detection and tracking method.
Background technique
Studying traffic behaviour can effectively reduce traffic accidents and shorten travel times. At present, traffic behaviour is mainly studied through modelling and through the acquisition of macroscopic data such as geomagnetic and image data. Although these traditional methods study traffic behaviour well to a certain extent and have promoted the development of transportation, the advent of lidar technology makes it possible to obtain precise microscopic data from lidar sensors and thereby understand traffic behaviour more fully. For example, data may be acquired with the 16-beam lidar VLP-16: the lidar is placed horizontally at the roadside and scans the entire space, producing a huge point cloud. The collected data contain the information of the whole scene; to study traffic behaviour, the vehicle and pedestrian data must be extracted from this mass of data.
The 3D lidar VLP-16 is very small, low in cost and suitable for mass production. At the same time it retains the breakthrough key features of Velodyne's lidars: real-time, 360°, 3D data acquisition and measurement. The measurement radius of the VLP-16 reaches 100 metres. The Velodyne VLP-16 supports 16 channels, a 360° horizontal field of view and a 30° vertical field of view (±15° up and down). The VLP-16 has no visible external rotating parts (the rotating mechanism is internal), which gives it high adaptability in challenging environments. After the background points are filtered out, the remaining vehicle and pedestrian points are clustered so that points belonging to the same object are grouped together, and targets such as vehicles are thereby identified. The same target is then associated across frames, achieving vehicle tracking.
The main current approach to vehicle detection and tracking with lidar is to identify vehicles first and then track them. Vehicle identification mainly relies on two methods: clustering algorithms and vehicle-model construction. In the latter, a vehicle model is established mainly by extracting the direction vectors, inflection point, centre point and sizes of the two perpendicular edges of the vehicle's scan points. Vehicle tracking is mainly based on the fact that the spatial positions of the same target in consecutive frames are relatively close.
Although a clustering algorithm alone can identify vehicles and pedestrians fairly well, when multiple targets are very close together it can wrongly cluster several targets into one, and when the spacing between the points making up a single target is large it can wrongly split one target into several. A clustering algorithm alone therefore cannot adapt to complex and changeable traffic environments. Combining a constructed vehicle model with a clustering algorithm can solve these problems to a certain extent, but when the lidar deployment or the equipment used differs, the vehicle-model approach loses generality because of occlusion and the complex, variable shapes of vehicles. The effectiveness of a detection-and-tracking algorithm depends on its vehicle-recognition accuracy, but because of occlusion the lidar does not observe the entire vehicle, and detecting vehicles by feature extraction is therefore bound to produce errors.
Vehicle-tracking algorithms are likewise mainly based on the fact that the spatial positions of the same target in consecutive frames are relatively close. When the spacing between vehicles is small, association errors easily occur. Building a complex vehicle-trajectory model can mitigate this to some extent, but vehicles with complex behaviour remain difficult to track accurately.
Summary of the invention
To solve the above problems, the present invention proposes the following technical scheme.
According to one aspect of the invention, a high-precision moving-target detection and tracking method is provided, comprising:
A0: extracting vehicle points and pedestrian points from the raw data by a background filtering algorithm, and labelling each point with the frame it belongs to;
A1: fusing the vehicle-point and pedestrian-point data of consecutive frames, processing the fused data with a clustering algorithm, and labelling each point with its cluster;
A2: using the per-point frame labels to separate the data of each frame again, and identifying and associating vehicles and pedestrians according to the cluster labels.
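The flow of A0 to A2 can be sketched as follows. This is a minimal illustration under assumed data shapes: a simple distance-threshold grouping stands in for the clustering algorithm, and all helper names are hypothetical. The key idea it demonstrates is that each point keeps its frame label through fusion, so the fused cluster label associates the same target across frames.

```python
# Sketch of steps A0-A2 (illustrative only; helper names are hypothetical).

def fuse_frames(frames):
    """A1 (fusion): merge per-frame 2D points, tagging each with its frame id."""
    fused = []
    for frame_id, pts in enumerate(frames):
        fused.extend((x, y, frame_id) for (x, y) in pts)
    return fused

def cluster_by_distance(points, radius):
    """Stand-in for the clustering step: single-linkage grouping by distance."""
    labels = [-1] * len(points)
    cluster = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        labels[i] = cluster
        stack = [i]
        while stack:
            j = stack.pop()
            for k in range(len(points)):
                dx, dy = points[j][0] - points[k][0], points[j][1] - points[k][1]
                if labels[k] == -1 and dx * dx + dy * dy <= radius * radius:
                    labels[k] = cluster
                    stack.append(k)
        cluster += 1
    return labels

def separate_by_frame(points, labels):
    """A2: split the fused set back into frames; the cluster label links the
    same physical target across frames."""
    per_frame = {}
    for (x, y, frame_id), lab in zip(points, labels):
        per_frame.setdefault(frame_id, []).append(((x, y), lab))
    return per_frame

# Toy data: one vehicle seen in two consecutive frames, slightly displaced.
frames = [[(0.0, 0.0), (0.3, 0.0)], [(0.5, 0.0), (0.8, 0.0)]]
fused = fuse_frames(frames)
labels = cluster_by_distance(fused, radius=0.6)
by_frame = separate_by_frame(fused, labels)
```

Because all four points fall into one fused cluster, the detections of frame 0 and frame 1 carry the same label after separation, which is exactly the association that step A2 relies on.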
Preferably, the clustering algorithm is the DBSCAN algorithm.
Preferably, the background filtering algorithm includes the following steps:
S0: choosing a background frame from the data frames acquired by the radar, the background frame containing no vehicles, or no vehicles within the region of interest;
S1: fusing the data of the background frame and the target frame, first sorting by the laser-beam serial number laser_id and then sorting the points of each laser beam by horizontal angle, whereupon the background-frame points and target-frame points are interleaved;
S2: associating background-frame points with target-frame points,
where Case(n) denotes the case in which the number of target-frame points contained between two background-frame points is n;
0_f denotes the former background-frame point; 0_l the latter background-frame point; 1_f the former target-frame point; 1_l the latter target-frame point;
when the two background-frame points satisfy the condition:
the former background-frame point is associated with the former target-frame point and, correspondingly, the latter background-frame point with the latter target-frame point; θ0 denotes the horizontal angular resolution of a single laser beam; the further quantities in the condition are the horizontal angle values of the former and latter background-frame points and their Euclidean distances to the lidar; ρ(id) denotes the point-to-point distance resolution of the laser beam with serial number id;
S3: vehicle-point judgement; when the two background-frame points associated in S2 and the target-frame points between them satisfy the condition:
the n target-frame points between the two background-frame points are judged to be vehicle points and labelled as vehicle points p1; the remaining quantities in the condition are the Euclidean distances of the former and latter target-frame points to the lidar;
S4: extracting missed points; all unlabelled target-frame points in the target frame are traversed, p0 denoting an unlabelled target-frame point, i.e. a non-vehicle point; when p0 satisfies the condition:
p0 is labelled as a vehicle point p1;
S5: while traversing all unlabelled target points, judging whether n satisfies n > n0; when this condition is met, the target-frame points between the two background-frame points are directly judged to be vehicle points; n0 is a set threshold; this completes the background filtering.
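The precise association and judgement conditions of S2 to S4 are given as formulas that are not reproduced in this text, so the sketch below only illustrates the structure of S1 to S3 under a simplified stand-in rule (a target point lying well in front of the background is taken as a vehicle point). The point layout, field order and threshold are assumptions.

```python
# Hedged sketch of S1-S3. A point is (laser_id, azimuth_deg, range_m).

def interleave(background, target):
    """S1: merge background-frame and target-frame points, sorting first by
    laser beam id (laser_id) and then by horizontal angle, so that the two
    frames' points end up interleaved."""
    merged = [(lid, az, r, True) for (lid, az, r) in background]
    merged += [(lid, az, r, False) for (lid, az, r) in target]
    merged.sort(key=lambda p: (p[0], p[1]))
    return merged

def mark_foreground(merged, min_range_gap=1.0):
    """S2/S3 (simplified stand-in): a target point whose range is clearly
    shorter than the background range is marked as a vehicle point."""
    bg_ranges = [p[2] for p in merged if p[3]]
    bg_mean = sum(bg_ranges) / len(bg_ranges)
    return [(lid, az, r) for (lid, az, r, is_bg) in merged
            if not is_bg and bg_mean - r > min_range_gap]

background = [(0, 0.0, 30.0), (0, 0.2, 30.1), (0, 0.4, 29.9)]
target = [(0, 0.1, 30.0), (0, 0.3, 12.0)]  # second point: a vehicle in front
vehicle_pts = mark_foreground(interleave(background, target))
```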
Preferably, the radar is the 16-beam lidar VLP-16.
Preferably, the selection of the background frame in step S0 is performed with the Veloview application.
Preferably, the region of interest in step S0 is a region designated within the lidar's coverage area in which data analysis is to be performed.
Preferably, the threshold n0 in S5 is taken as the minimum number of points that a single laser beam casts on a vehicle.
Preferably, the lidar height h is 2.8 metres, the lidar detection range is 100 metres, the distance between two adjacent points of a single laser beam is 0.35 metres, and n0 takes the value 4.
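These preferred values are mutually consistent, which a quick check confirms: at the 0.2° horizontal resolution, the arc spacing between two adjacent points of one beam grows linearly with range and reaches roughly 0.35 metres at the stated 100-metre detection limit.

```python
import math

# Adjacent-point spacing of one beam at range r is s = r * theta0 (theta0 in radians).
theta0 = math.radians(0.2)
spacing_at_100m = 100.0 * theta0   # ~0.349 m, matching the stated 0.35 m
r_for_035 = 0.35 / theta0          # ~100 m, matching the stated detection range
```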
Preferably, the method further comprises:
S6: noise-point removal; removing the points in the lidar data set whose return-mode value is non-zero.
Preferably, the method further comprises:
S7: accuracy improvement; selecting several background frames and target frames, performing the operations of S2-S6 on each, and taking the intersection as the filtering result.
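Step S7 amounts to a set intersection of the per-background-frame filtering results, so a point survives only if every run flags it. In the sketch below the point keys are hypothetical (laser_id, azimuth) pairs:

```python
# S7 sketch: keep only the points flagged as vehicle points by every run
# of the filter (one run per chosen background frame).
runs = [
    {(0, 0.3), (0, 0.5), (0, 0.7)},   # result against background frame 1
    {(0, 0.3), (0, 0.5)},             # result against background frame 2
    {(0, 0.3), (0, 0.5), (1, 0.2)},   # result against background frame 3
]
vehicle_points = set.intersection(*runs)
```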
The advantage of the invention is that no complex vehicle model or vehicle-trajectory model needs to be constructed: a simple clustering algorithm is used, the accuracy of vehicle detection is improved on top of that simple algorithm, and vehicle trajectories can be extracted directly.
Brief description of the drawings
Various other advantages and benefits will become clear to those of ordinary skill in the art by reading the following detailed description of the preferred embodiments. The drawings are provided only for the purpose of illustrating the preferred embodiments and are not to be considered a limitation of the invention. Throughout the drawings, the same reference numerals denote the same parts. In the drawings:
Fig. 1 is a schematic diagram of vehicle points in a 3D point cloud according to the invention;
Fig. 2 is a schematic diagram of raw 3D point-cloud data according to the invention;
Fig. 3 is a schematic layout of the field test of the invention;
Fig. 4-1 is a schematic diagram of the result of clustering the data of frame 1562;
Fig. 4-2 is a schematic diagram of the result of clustering the fused data of frames 1562-1567;
Fig. 4-3 is a schematic diagram of the vehicle clustering result for frame 2098, separated out from the fused multi-frame data;
Fig. 5 is a schematic diagram of the vehicle-tracking result of the invention;
Fig. 6 is a schematic diagram of vehicle trajectories according to the invention.
Specific embodiments
Illustrative embodiments of the disclosure are described more fully below with reference to the accompanying drawings. Although the drawings show illustrative embodiments of the disclosure, it should be understood that the disclosure may be implemented in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the disclosure will be thorough and so that its scope will be fully conveyed to those skilled in the art.
Essential terms of the invention are explained as follows:
Target frame: a frame in which a moving vehicle or pedestrian is present.
Number of consecutive frames: the number of frames adjacent to the target frame used for extracting vehicle trajectories.
Vehicle point: a point forming part of a vehicle.
Veloview: open-source point-cloud visualisation software jointly released by Kitware and Velodyne.
To obtain high-precision microscopic traffic data, the invention performs background filtering on the raw data collected by the lidar, yielding vehicle points and pedestrian points. Considering that the points making up a vehicle have complex and variable shapes, and that the number of vehicles and pedestrians in a frame of data cannot be known in advance (as shown in Fig. 1), the invention combines the working characteristics of the lidar's 3D point cloud itself and proposes a high-precision moving-target detection and tracking method.
First, in the invention the lidar is placed horizontally at the roadside at a height h. The vehicle-point data provided by the lidar have the following characteristics (as shown in Fig. 2):
1. At a rotation rate of 10 Hz, the horizontal angular resolution of each of the 16 laser beams is θ0 = 0.2°. The closer a vehicle is to the lidar, the more laser beams strike it and the more points make up that vehicle.
2. The total number of vehicles and pedestrians in any frame of data cannot be known in advance.
3. The points constituting the same vehicle are not necessarily contiguous in space. On the one hand, because the lidar consists of laser beams at fixed vertical angles, it has blind zones when detecting a vehicle. On the other hand, materials of the vehicle itself, such as window glass, may prevent the laser points from returning. A simple clustering algorithm may identify such a point set as multiple targets.
4. When the distance between vehicles is too small, for example while waiting at traffic lights, a clustering algorithm alone easily identifies multiple targets as one target.
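Characteristic 1 above can be quantified. At the 0.2° resolution each beam returns 360/0.2 = 1800 azimuth steps per revolution, and the number of hits one beam lands across a target of width w at range r is roughly w/(r·θ0), which is why nearer vehicles are sampled much more densely. The 4.5-metre vehicle length below is an assumed example value:

```python
import math

points_per_rev_per_beam = int(360 / 0.2)   # 1800 azimuth steps per revolution
theta0 = math.radians(0.2)

def points_on_width(w, r):
    """Approximate hits of one beam across a target of width w at range r."""
    return int(w / (r * theta0))

near = points_on_width(4.5, 10.0)   # a 4.5 m vehicle at 10 m range
far = points_on_width(4.5, 80.0)    # the same vehicle at 80 m range
```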
Based on the above characteristics of vehicle points, the invention selects the DBSCAN algorithm. DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is a representative density-based clustering algorithm. Unlike partitioning and hierarchical clustering methods, it defines a cluster as the maximal set of density-connected points, can divide regions of sufficiently high density into clusters, and can find clusters of arbitrary shape in a spatial database containing noise.
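For reference, below is a minimal pure-Python DBSCAN over 2D points. It is an illustrative implementation of the algorithm as just described (clusters as maximal density-connected sets, sparse points labelled as noise), not the code of the invention:

```python
# Minimal DBSCAN over 2D points (illustrative only).
def dbscan(points, eps, min_pts):
    def neighbors(i):
        return [j for j in range(len(points))
                if (points[i][0] - points[j][0]) ** 2
                 + (points[i][1] - points[j][1]) ** 2 <= eps * eps]
    UNSEEN, NOISE = None, -1
    labels = [UNSEEN] * len(points)
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not UNSEEN:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:            # not a core point
            labels[i] = NOISE
            continue
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == NOISE:
                labels[j] = cluster        # border point reclaimed from noise
            if labels[j] is not UNSEEN:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:         # expand only through core points
                seeds.extend(jn)
        cluster += 1
    return labels

# Two dense groups and one isolated point:
pts = [(0, 0), (0.1, 0), (0, 0.1), (0.1, 0.1),
       (5, 5), (5.1, 5), (5, 5.1), (5.1, 5.1),
       (20, 20)]
labels = dbscan(pts, eps=0.5, min_pts=3)
```

With eps = 0.5 and min_pts = 3, the two dense groups form two clusters and the isolated point is labelled -1 (noise).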
Preferably, since the spatial positions of the same target in consecutive frames are close, the invention fuses the vehicle points of consecutive frames in accordance with the principle of the DBSCAN algorithm. The number of points of the same vehicle then increases sharply and the point spacing shrinks, increasing the accuracy of vehicle identification. The cluster radius in the clustering algorithm can then be set smaller, so that adjacent vehicles are not wrongly merged into one cluster.
Preferably, by processing the vehicle-point data of consecutive frames with the clustering algorithm, the points of the same target are grouped into one class and are thereby directly associated across frames.
Embodiment 1
A0: extracting vehicle points and pedestrian points from the raw data by a background filtering algorithm, and labelling each point with the frame it belongs to.
A1: fusing the vehicle-point and pedestrian-point data of consecutive frames, processing the fused data with a clustering algorithm, and labelling each point with its cluster.
A2: using the per-point frame labels to separate the data of each frame again, achieving accurate recognition of the traffic in each frame. At this point the same target in consecutive frames has been grouped into one class in A1, so vehicles and pedestrians are identified and associated according to the cluster labels.
A3: repeating steps A0-A2.
Preferably, in step A0 the background filtering algorithm includes the following steps:
S0: choosing a background frame from the data frames acquired by the radar, the background frame containing no vehicles, or no vehicles within the region of interest;
S1: fusing the data of the background frame and the target frame, first sorting by the laser-beam serial number laser_id and then sorting the points of each laser beam by horizontal angle, whereupon the background-frame points and target-frame points are interleaved;
S2: associating background-frame points with target-frame points,
where Case(n) denotes the case in which the number of target-frame points contained between two background-frame points is n;
0_f denotes the former background-frame point; 0_l the latter background-frame point; 1_f the former target-frame point; 1_l the latter target-frame point;
when the two background-frame points satisfy the condition:
the former background-frame point is associated with the former target-frame point and, correspondingly, the latter background-frame point with the latter target-frame point; θ0 denotes the horizontal angular resolution of a single laser beam; the further quantities in the condition are the horizontal angle values of the former and latter background-frame points and their Euclidean distances to the lidar; ρ(id) denotes the point-to-point distance resolution of the laser beam with serial number id;
S3: vehicle-point judgement; when the two background-frame points associated in S2 and the target-frame points between them satisfy the condition:
the n target-frame points between the two background-frame points are judged to be vehicle points and labelled as vehicle points p1; the remaining quantities in the condition are the Euclidean distances of the former and latter target-frame points to the lidar;
S4: extracting missed points; all unlabelled target-frame points in the target frame are traversed, p0 denoting an unlabelled target-frame point, i.e. a non-vehicle point; when p0 satisfies the condition:
p0 is labelled as a vehicle point p1;
S5: while traversing all unlabelled target points, judging whether n satisfies n > n0; when this condition is met, the target-frame points between the two background-frame points are directly judged to be vehicle points; n0 is a set threshold; this completes the background filtering.
Preferably, the radar is the 16-beam lidar VLP-16.
Preferably, the selection of the background frame in step S0 is performed with the Veloview application.
Preferably, the region of interest in step S0 is a region designated within the lidar's coverage area in which data analysis is to be performed.
Preferably, the threshold n0 in S5 is taken as the minimum number of points that a single laser beam casts on a vehicle.
Preferably, the lidar height h is 2.8 metres, the lidar detection range is 100 metres, the distance between two adjacent points of a single laser beam is 0.35 metres, and n0 takes the value 4.
Preferably, the method further comprises:
S6: noise-point removal; removing the points in the lidar data set whose return-mode value is non-zero.
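Step S6 might be sketched as a simple per-point filter. The field name `return_mode` and the zero-means-valid convention are assumptions, since the text does not define the return-mode values:

```python
# S6 sketch: drop lidar returns whose return-mode value is non-zero
# (field name and semantics assumed for illustration).
points = [
    {"x": 1.0, "y": 2.0, "return_mode": 0},
    {"x": 1.1, "y": 2.1, "return_mode": 2},   # treated as noise
    {"x": 1.2, "y": 2.2, "return_mode": 0},
]
clean = [p for p in points if p["return_mode"] == 0]
```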
Preferably, the method further comprises:
S7: accuracy improvement; selecting several background frames and target frames, performing the operations of S2-S6 on each, and taking the intersection as the filtering result.
Embodiment 2
The lidar-based adaptive traffic-vehicle detection method was tested and verified at an intersection; the results are described as follows:
The lidar deployment scene is shown in Fig. 3.
(1) Vehicle-detection results and analysis:
Vehicle identification was performed on target frame 1562, with the results shown in Fig. 4:
Fig. 4-1 shows the result of clustering the data of frame 1562 alone. The cluster radius was set to 0.9 metres and the minimum number of cluster points to 4. Each colour represents one cluster. The vehicle at coordinates (20, 20) is discontinuous in space, so it is wrongly split into several clusters.
Fig. 4-2 shows the result of clustering the fused data of frames 1562-1567. After the vehicle points of consecutive frames are fused, the number of points of the same vehicle increases sharply and the point spacing shrinks, increasing the accuracy of vehicle identification. The cluster radius can then be set smaller, so that adjacent vehicles are not wrongly merged. By processing the vehicle-point data of consecutive frames with the clustering algorithm, the points of the same target are grouped into one class and are thereby directly associated across frames.
Fig. 4-3 shows the vehicle clustering result for frame 2098, separated out from the fused multi-frame data. All vehicles are now detected accurately.
(2) Vehicle-tracking results and analysis:
By processing the vehicle-point data of consecutive frames with the clustering algorithm, the points of the same target are grouped into one class and thereby directly associated across frames. The vehicle-tracking information is extracted using the frame labels and cluster labels. The invention represents each target by the vehicle point nearest to the lidar; the blue circles mark the initial positions of the vehicles. The result is shown in Fig. 5, from which the tracking information of each vehicle can be clearly extracted.
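The representative-point rule can be sketched as follows: each cluster is reduced to its point nearest the lidar (assumed at the origin), and the per-frame representatives of one cluster, ordered by frame, form that vehicle's track. Cluster labels and coordinates are toy values:

```python
import math

def representative(cluster_points):
    """Pick the cluster point nearest the lidar at the origin."""
    return min(cluster_points, key=lambda p: math.hypot(p[0], p[1]))

# (frame_id, cluster_label) -> points of that cluster in that frame;
# here one vehicle (label 7) observed in two consecutive frames.
clusters = {
    (0, 7): [(10.0, 2.0), (10.5, 2.2), (9.8, 2.1)],
    (1, 7): [(9.0, 2.0), (9.4, 2.3)],
}
track = [representative(pts) for key, pts in sorted(clusters.items())]
```

The resulting track, [(9.8, 2.1), (9.0, 2.0)], shows the vehicle approaching the sensor between the two frames.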
(3) The experimental vehicle-trajectory results are shown in Fig. 6; the travel trajectories of multiple vehicles can be clearly distinguished from the figure.
In this way, the invention needs no complex vehicle model or vehicle-trajectory model: using a simple clustering algorithm, it improves the accuracy of vehicle detection on top of that simple algorithm, and vehicle trajectories can be extracted directly.
It should be understood that:
The algorithms and displays provided here are not inherently related to any particular computer, virtual device or other equipment. Various general-purpose devices may also be used with the teachings herein, and the structure required to construct such devices is apparent from the description above. Moreover, the invention is not directed to any particular programming language; it should be understood that the content of the invention described herein may be realised in various programming languages, and the description above in terms of a specific language is given to disclose the best mode of the invention.
Numerous specific details are set forth in the description provided here. It should be understood, however, that embodiments of the invention may be practised without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure the understanding of this description.
Similarly, it should be understood that, in order to streamline the disclosure and aid the understanding of one or more of the various inventive aspects, features of the invention are sometimes grouped together in a single embodiment, figure or description thereof in the description of exemplary embodiments above. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single embodiment disclosed above. The claims following the detailed description are hereby expressly incorporated into that detailed description, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art will understand that the modules in the devices of an embodiment may be adaptively changed and arranged in one or more devices different from that embodiment. The modules, units or components of an embodiment may be combined into one module, unit or component, and may furthermore be divided into multiple sub-modules, sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, an equivalent or a similar purpose.
Furthermore, those skilled in the art will appreciate that, although some embodiments described herein include certain features included in other embodiments but not others, combinations of features of different embodiments are within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that a microprocessor or a digital signal processor (DSP) may be used in practice to realise some or all of the functions of some or all of the components of a device according to embodiments of the invention. The invention may also be implemented as device or apparatus programs (for example, computer programs and computer program products) for performing part or all of the methods described herein. Such programs implementing the invention may be stored on computer-readable media or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the invention, and that those skilled in the art may design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claims. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second and third does not indicate any ordering; these words may be interpreted as names.
The foregoing is only a preferred embodiment of the invention, but the scope of protection of the invention is not limited thereto. Any changes or substitutions that can readily be conceived by anyone skilled in the art within the technical scope disclosed by the invention shall be covered by the scope of protection of the invention. Therefore, the scope of protection of the invention shall be determined by the scope of the claims.

Claims (10)

1. A high-precision moving-target detection and tracking method, characterised by comprising:
A0: extracting vehicle points and pedestrian points from raw data by a background filtering algorithm, and labelling each point with the frame it belongs to;
A1: fusing the vehicle-point and pedestrian-point data of consecutive frames, processing the fused data with a clustering algorithm, and labelling each point with its cluster;
A2: using the per-point frame labels to separate the data of each frame again, and identifying and associating vehicles and pedestrians according to the cluster labels.
2. The method according to claim 1, characterized in that the clustering algorithm is the DBSCAN algorithm.
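Claims 1 and 2 describe fusing the vehicle/pedestrian points of consecutive frames, clustering the fused set with DBSCAN, and then splitting the result back per frame using each point's frame label. The sketch below is a minimal, self-contained illustration under assumed conventions (plain 2-D `(x, y)` point tuples, a from-scratch DBSCAN, and illustrative `eps`/`min_pts` values); it is not the patent's implementation.

```python
from math import hypot

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN over 2-D points; returns one label per point (-1 = noise)."""
    n = len(points)
    labels = [None] * n
    def neighbors(i):
        xi, yi = points[i]
        return [j for j in range(n)
                if hypot(points[j][0] - xi, points[j][1] - yi) <= eps]
    cluster = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        nb = neighbors(i)
        if len(nb) < min_pts:
            labels[i] = -1                # provisionally noise
            continue
        cluster += 1                      # start a new cluster from this core point
        labels[i] = cluster
        seeds = list(nb)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster       # border point reclaimed from noise
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb_j = neighbors(j)
            if len(nb_j) >= min_pts:      # expand only through core points
                seeds.extend(nb_j)
    return labels

def cluster_fused_frames(frames, eps=1.0, min_pts=3):
    """A1 + A2 sketch: fuse the (x, y) points of consecutive frames, cluster the
    fused set, then separate the labeled points back by frame index."""
    fused, frame_ids = [], []
    for fid, pts in enumerate(frames):
        fused.extend(pts)
        frame_ids.extend([fid] * len(pts))
    labels = dbscan(fused, eps, min_pts)
    per_frame = {}
    for pt, fid, lab in zip(fused, frame_ids, labels):
        per_frame.setdefault(fid, []).append((pt, lab))
    return per_frame
```

Clustering the fused frames (rather than each frame alone) lets the same cluster id serve as the association of one object across consecutive frames, which is what step A2 exploits.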
3. The method according to claim 1, characterized in that the background filtering algorithm comprises the following steps:
S0: selecting a background frame from the data frames acquired by the radar, the background frame containing no vehicle, or containing no vehicle within the region of interest;
S1: fusing the data of the background frame with the data of the target frame, first sorting by the serial number laser_id of the laser beam, and then sorting the points of each laser beam laser_id by horizontal angle, whereupon the background-frame points and target-frame points are interleaved;
S2: associating background-frame points with target-frame points;
Case(n) denotes the case in which the number of target-frame points contained between two background-frame points is n;
0f denotes the former background-frame point; 0l denotes the latter background-frame point; 1f denotes the former target-frame point; 1l denotes the latter target-frame point;
when the two background-frame points satisfy the condition:
the former background-frame point is associated with the former target-frame point and, correspondingly, the latter background-frame point with the latter target-frame point; θ0 denotes the horizontal angular resolution of a single laser beam; the condition involves the horizontal angle values of the former and latter background-frame points, as well as the Euclidean distance values between the former background-frame point and the laser radar and between the latter background-frame point and the laser radar; ρ(id) denotes the point-to-point distance resolution of the laser beam with serial number id;
S3: vehicle-point judgment; when the two background-frame points associated in S2 and the target-frame points satisfy the condition:
the n target-frame points between the two background-frame points are judged to be vehicle points, and these n target-frame points are labeled as vehicle points p1; the condition involves the Euclidean distance value between the former target-frame point and the laser radar and the Euclidean distance value between the latter target-frame point and the laser radar;
S4: missed-point extraction; traversing all unlabeled target-frame points in the target frame, with p0 an unlabeled target-frame point, i.e., a non-vehicle point; when p0 satisfies the condition:
p0 is labeled as a vehicle point p1;
S5: while traversing all unlabeled target points, judging whether n satisfies n > n0; when this condition is satisfied, the target-frame points between the two background-frame points are directly judged to be vehicle points; n0 is a set threshold; the background-data filtering is thereby completed.
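The interleaving of step S1 and the counting threshold of step S5 can be sketched as follows. This is a hedged illustration, not the claimed implementation: the point format `(laser_id, azimuth, distance)` and the function name `filter_background` are assumptions, and the distance-based association tests of S2–S4 (whose formulas appear as images in the original document) are omitted; only the "more than n0 target points between two background points implies a vehicle" shortcut of S5 is shown.

```python
def filter_background(bg_points, tgt_points, n0=4):
    """Merge background and target points, sort by (laser_id, azimuth) as in S1,
    and mark the target points lying between two consecutive background points
    of the same beam as vehicle points when their count exceeds n0 (S5)."""
    merged = [(p, True) for p in bg_points] + [(p, False) for p in tgt_points]
    merged.sort(key=lambda e: (e[0][0], e[0][1]))  # S1: by laser_id, then azimuth
    vehicle = []
    run = []          # target points seen since the last background point
    last_id = None
    for p, is_bg in merged:
        if p[0] != last_id:               # new laser beam: reset the running gap
            run, last_id = [], p[0]
        if is_bg:
            if len(run) > n0:             # S5: long gap in the background => vehicle
                vehicle.extend(run)
            run = []
        else:
            run.append(p)
    return vehicle
```

With claim 8's numbers (adjacent points of one beam 0.35 m apart, n0 = 4), a run of more than four consecutive target points corresponds to an object wider than roughly 1.4 m along the beam sweep, which matches the "minimum points a single beam strikes on a vehicle" interpretation of n0 in claim 7.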
4. The method according to claim 3, characterized in that the radar is a 16-beam laser radar VLP-16.
5. The method according to claim 3, characterized in that the selection of the background frame in step S0 is realized through the veloview application.
6. The method according to claim 3, characterized in that the region of interest in step S0 refers to a region specified within the detection area of the laser radar in which data analysis is to be carried out.
7. The method according to claim 3, characterized in that the value of the threshold n0 in S5 is the minimum number of points with which a single laser beam strikes a vehicle.
8. The method according to claim 3, characterized in that the laser radar height h is 2.8 meters, the detection range of the laser radar is 100 meters, the distance between two adjacent points of a single laser beam is 0.35 meters, and n0 takes the value 4.
9. The method according to any one of claims 3 to 8, characterized in that the method further comprises:
S6: noise-point removal; removing the points in the laser radar data set whose return mode value is non-zero.
10. The method according to claim 9, characterized in that the method further comprises:
S7: accuracy improvement; selecting multiple background frames and target frames, carrying out the operations of S2-S6, and taking the intersection as the filtering result.
CN201810925487.2A 2018-08-14 2018-08-14 High-precision detection and tracking method for moving target Active CN109188390B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810925487.2A CN109188390B (en) 2018-08-14 2018-08-14 High-precision detection and tracking method for moving target

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810925487.2A CN109188390B (en) 2018-08-14 2018-08-14 High-precision detection and tracking method for moving target

Publications (2)

Publication Number Publication Date
CN109188390A true CN109188390A (en) 2019-01-11
CN109188390B CN109188390B (en) 2023-05-23

Family

ID=64921774

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810925487.2A Active CN109188390B (en) 2018-08-14 2018-08-14 High-precision detection and tracking method for moving target

Country Status (1)

Country Link
CN (1) CN109188390B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110414459A (en) * 2019-08-02 2019-11-05 中星智能系统技术有限公司 Establish the associated method and device of people's vehicle

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101017573A (en) * 2007-02-09 2007-08-15 南京大学 Method for detecting and identifying moving target based on video monitoring
CN101393264A (en) * 2008-10-12 2009-03-25 北京大学 Moving target tracking method and system based on multi-laser scanner
US20130242285A1 (en) * 2012-03-15 2013-09-19 GM Global Technology Operations LLC METHOD FOR REGISTRATION OF RANGE IMAGES FROM MULTIPLE LiDARS
CN104517275A (en) * 2013-09-27 2015-04-15 株式会社理光 Object detection method and system
CN105866782A (en) * 2016-04-04 2016-08-17 上海大学 Moving target detection system based on laser radar and moving target detection method thereof
CN106203274A (en) * 2016-06-29 2016-12-07 长沙慧联智能科技有限公司 Pedestrian's real-time detecting system and method in a kind of video monitoring
CN106910203A (en) * 2016-11-28 2017-06-30 江苏东大金智信息系统有限公司 The method for quick of moving target in a kind of video surveillance
CN108009473A (en) * 2017-10-31 2018-05-08 深圳大学 Based on goal behavior attribute video structural processing method, system and storage device


Also Published As

Publication number Publication date
CN109188390B (en) 2023-05-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant