CN104751119A - Rapid detecting and tracking method for pedestrians based on information fusion
- Publication number: CN104751119A (application CN201510071310.7A)
- Authority: CN (China)
- Legal status: Pending
Abstract
The invention discloses a rapid pedestrian detecting and tracking method based on information fusion, comprising the following steps. Step S1: scan the surrounding environment at a fixed frequency with a laser scanner to obtain laser data. Step S2: filter invalid data out of the laser data to obtain candidate target data. Step S3: calibrate the coordinate parameters between the laser scanner and a camera to obtain the coordinate transformation parameters between the two coordinate systems. Step S4: confirm candidate targets based on the candidate target data. Step S5: build a real-time tracking model and track the candidate targets confirmed in step S4 with it. The method combines the advantages of a laser sensor and a vision sensor, and effectively improves the accuracy and timeliness of single-sensor pedestrian detection and tracking.
Description
Technical field
The invention belongs to the technical fields of multi-sensor and multi-source information fusion, and involves digital image processing, pattern recognition, and data association. It relates in particular to a rapid pedestrian detecting and tracking method based on information fusion, intended for the vehicle safety-assistance loop that forms an important component of intelligent vehicle technology: it perceives, detects, tracks, analyses, and gives early warning of the least controllable factor in the driving environment, the pedestrian, and thereby supports driving safety.
Background art
By function, intelligent vehicle technology divides into autonomous navigation and safety assurance. The application of autonomous navigation depends on the establishment and maturation of a complete intelligent transportation system (ITS) and is difficult to bring into practical use in the short term, whereas safety-assurance technology can be applied independently in driver assistance systems (DAS): by detecting and tracking the surrounding driving environment it judges threats that may endanger the driver, and can therefore provide technical support against traffic accidents caused by the driver's subjective factors.
Intelligent vehicle safety assurance divides into safety monitoring with early warning and active safety assurance. Safety monitoring and early warning mainly means monitoring the driver's condition, vehicle hazards, special environments, and so on through sensors and a warning system, helping the driver to drive safely; the detection and tracking of human targets in front of the vehicle is devoted to improving driving safety by non-contact sensing of the surroundings.
Fusing laser data with visual images for forward-obstacle detection and tracking not only overcomes the shortcomings of a vision sensor used alone, which is affected by changing weather and illumination conditions and cannot obtain the depth of the tracked object, but also overcomes the shortcomings of laser ranging used alone, which cannot judge the class of an obstacle, cannot be processed visually, and produces redundant warnings.
Summary of the invention
The present invention fuses depth data obtained from a laser scanner with image information obtained from a CCD camera to achieve accurate and rapid detection and tracking of pedestrian targets, together with analysis and early warning, thereby assisting the driver in driving safely. By using two different types of sensor simultaneously and exploiting their respective advantages, the method detects and tracks pedestrians on the road quickly, achieves complementary strengths, and can provide the driver with correct driving references quickly even under complex conditions such as night, poor illumination, and hazy weather, realising safety-assisted driving.
To realise the object of the invention, the rapid pedestrian detecting and tracking method based on information fusion provided by the invention makes full use of the advantages of laser and image data for rapid, real-time pedestrian detection and tracking in a traffic driving environment, and comprises the following steps:
Step S1: scan the surrounding environment at a fixed frequency with a laser scanner to obtain laser data;
Step S2: filter invalid data out of the laser data to obtain candidate target data;
Step S3: calibrate the coordinate parameters between the laser scanner and the camera to obtain the coordinate transformation parameters between the two coordinate systems;
Step S4: confirm candidate targets based on the candidate target data;
Step S5: establish a real-time tracking model and track the candidate targets confirmed in step S4 with it.
The beneficial effects of the invention are as follows: the invention makes full use of the combined advantages of a laser sensor and a vision sensor for pedestrian detection and tracking in traffic scenes, replacing the specific theories and methods of conventional vision-only approaches, and at the same time offers a reliable solution to complex traffic and complex occlusion problems. Compared with traditional single-sensor detecting and tracking methods, the invention is qualitatively better in both real-time performance and accuracy.
Brief description of the drawings
Fig. 1 is a flow diagram of the rapid pedestrian detecting and tracking method based on information fusion of the present invention;
Fig. 2a shows the laser data set before invalid data are filtered out, and Fig. 2b the candidate target data set obtained after the filtering;
Fig. 3 is a schematic diagram of the installation of the laser scanner and the camera for calibrating the transformation parameters according to an embodiment of the invention;
Fig. 4 is a schematic diagram of candidate-region generation results according to an embodiment of the invention;
Fig. 5a is a schematic diagram of HOG features and Fig. 5b a schematic diagram of human-body detection, according to an embodiment of the invention;
Fig. 6 is a schematic diagram of the division of the observation space according to an embodiment of the invention;
Fig. 7 shows detecting and tracking results obtained with the method of the invention.
Detailed description of the embodiments
To make the object, technical solution, and advantages of the present invention clearer, the invention is described in more detail below in conjunction with specific embodiments and with reference to the accompanying drawings.
Fig. 1 is the flow diagram of the rapid pedestrian detecting and tracking method based on information fusion of the present invention. As shown in Fig. 1, the method comprises the following steps:
Step S1: scan the surrounding environment at a fixed frequency with a laser scanner to obtain laser data.
In an embodiment of the invention, the laser scanner is a SICK LMS291 two-dimensional laser scanner with a scanning range of 100°, an angular resolution of 0.25°, and a distance range within 80 metres.
The data returned by the two-dimensional laser scanner are a group of discrete data points, limited in number, lying in its two-dimensional scanning plane. These data reflect the geometric position and shape of the surrounding objects; each data point also gives the distance to the nearest target in the corresponding direction. The number of discrete data points is related to the angular resolution of the laser scanner; in an embodiment of the invention, 400 discrete data points are returned per scan. These points are given in polar form, that is,

l_z = (d_z, φ_z)^T, z = 1...400,

and can be expressed in the Cartesian coordinate system as

u_z = (x_z, y_z)^T, z = 1...400,

where x_z = d_z cos φ_z and y_z = d_z sin φ_z.
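The polar-to-Cartesian conversion can be illustrated with a short Python sketch (not part of the patent; the 100° field of view and 0.25° resolution follow the embodiment above, while placing the angular origin at the centre of the scan is an assumption):

```python
import numpy as np

def scan_to_cartesian(distances, fov_deg=100.0, resolution_deg=0.25):
    """Convert one scan of range readings d_z into Cartesian points u_z."""
    z = np.arange(len(distances))                         # z = 1...Z (0-based here)
    phi = np.deg2rad(z * resolution_deg - fov_deg / 2.0)  # assumed angular origin
    x = distances * np.cos(phi)                           # x_z = d_z cos(phi_z)
    y = distances * np.sin(phi)                           # y_z = d_z sin(phi_z)
    return np.stack([x, y], axis=1)                       # shape (Z, 2)
```

For the SICK LMS291 embodiment, `distances` would be the 400 range readings of one scan.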
Step S2: filter invalid data out of the laser data to obtain candidate target data.
Because the laser scanner has measurement errors on objects of different materials and colours, the returned laser data may contain invalid data. Invalid data mostly show up as null values or values far beyond the maximum measuring range; in addition, sunlight, vehicle jolting, and reflective materials cause noise points. This part of the invalid data is removed by filtering.
Step S2 further comprises the following steps:
Let the set L = {l_z}, z = 1, ..., Z, denote a group of laser data, where Z is the number of laser data points.

First, the set L is convolved with the kernel template [-1, 1], and the data points whose spacing lies within a certain range are retained, giving the coarse denoising result set C = {c_n}, n = 1, ..., N, where N is the number of laser data points in the coarse denoising result set.

Then a clustering operation is performed on the coarse denoising result set C, giving a group of candidate target data sets S.
The clustering operation is specifically as follows:
At the start of clustering, each data point is taken as a class, and the distance between the new class and the single sample data point is calculated. If the spacing between two adjacent classes c_{n-1} and c_n falls within a certain predetermined threshold range, they are considered to belong to the same class; otherwise they are considered to belong to different classes, and the current single sample data point becomes the starting point of a new class.

In the clustering process, a large number of invalid laser data points are excluded, finally giving a group of candidate target data sets S = {s_m}, m = 1, ..., M, representing obstacle-like classes, where M is the number of candidate targets. These candidate targets are similar in lateral length to the targets to be detected, and their features include depth, length, and position information.
Fig. 2a shows the laser data set before invalid data are filtered out, and Fig. 2b the candidate target data set obtained after the filtering.
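The filtering and clustering of step S2 can be sketched in Python as follows (a minimal illustration, not the patent's implementation: the [-1, 1] convolution is read here as a neighbour-to-neighbour range difference, and the thresholds `max_gap` and `cluster_thresh` are assumed values, as the patent only states that they are predetermined):

```python
import numpy as np

def filter_and_cluster(points, max_range=80.0, max_gap=0.3, cluster_thresh=0.5):
    """points: (Z, 2) Cartesian laser points; returns the list of clusters S."""
    d = np.linalg.norm(points, axis=1)
    valid = (d > 0) & (d < max_range)            # drop null and out-of-range returns
    gaps = np.abs(np.convolve(d, [-1, 1], mode="same"))  # kernel template [-1, 1]
    kept = points[valid & (gaps < max_gap)]      # coarse denoising result set C
    if len(kept) == 0:
        return []
    # Adjacent-point clustering: start a new class whenever the spacing
    # between neighbouring points exceeds the predetermined threshold.
    clusters, current = [], [kept[0]]
    for p in kept[1:]:
        if np.linalg.norm(p - current[-1]) < cluster_thresh:
            current.append(p)
        else:
            clusters.append(np.array(current))
            current = [p]
    clusters.append(np.array(current))
    return clusters                              # candidate target data sets S
```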
Step S3: calibrate the coordinate parameters between the laser scanner and the camera to obtain the coordinate transformation parameters (φ, Δ) between the two coordinate systems, where φ is the rotation matrix and Δ is the translation vector.
This step builds on the moving-plane-template calibration method used in laser-scanner and camera calibration, adding the laser coordinate system (the two-dimensional scanning plane) of the laser scanner itself. Its core is to capture a calibration board simultaneously with the laser scanner and the camera in order to obtain the rotation and translation between the laser coordinate system and the camera coordinate system.
Fig. 3 shows the installation positions of the laser scanner and the camera during parameter calibration; in the figure, X_f Y_f Z_f denotes the three-dimensional coordinate frame of the laser scanner and X_c Y_c Z_c that of the camera. The transformation from a point p in the camera coordinate system to the point p_f in the laser coordinate system is given by formula (1), where φ is the rotation matrix from the camera coordinate system to the laser coordinate system, representing the relative orientation between the laser scanner and the camera, and Δ is the translation vector from the camera coordinate system to the laser coordinate system, representing their relative position:

p_f = φp + Δ. (1)

In the camera coordinate system, the calibration board is parameterised by a three-dimensional vector N whose direction is parallel to the normal of the board and whose norm ||N|| equals the distance from the camera to the board. Take any point p on the calibration board in the camera coordinate system; since p lies on the board parameterised by N,

N · p = ||N||². (2)

N can be obtained from the known extrinsic parameters [R, t] of the camera (R is the orthogonal rotation matrix of the camera extrinsics, t the translation vector):

N = -r_3 (r_3^T · t), (3)

where r_3 is the third column vector of the orthogonal rotation matrix R.

By changing the pose of the calibration board, a set of different vectors N and the position coordinates of the corresponding laser points p_f are obtained, i.e. a set of constraints:

N · φ^{-1}(p_f - Δ) = ||N||². (4)

Solving this equation yields the rotation φ and translation Δ between the camera coordinate system and the laser coordinate system, which completes the joint calibration of the laser scanner and the camera. The projection result is shown in Fig. 4: Fig. 4a is the image of the objects in the camera coordinate system, with the numerals 1 to 6 marking six objects; Fig. 4b is the image of the objects in the laser coordinate system; Fig. 4c gives the position coordinates of the objects in the three-dimensional coordinate frame.
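The patent does not specify how constraint (4) is solved; one possible approach, sketched below under that caveat, fits φ (parameterised as a Rodrigues rotation vector) and Δ by nonlinear least squares over the collected (N, p_f) pairs. All function and variable names are illustrative:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def calibrate(board_normals, laser_points):
    """board_normals: list of 3-vectors N (camera frame, ||N|| = board distance);
    laser_points: matching list of (M_i, 3) arrays of laser points p_f on the
    board (z = 0 in the laser scanning plane)."""
    def residuals(params):
        rot = Rotation.from_rotvec(params[:3])    # rotation phi
        delta = params[3:]                        # translation Delta
        res = []
        for N, pts in zip(board_normals, laser_points):
            N, pts = np.asarray(N), np.asarray(pts)
            p_cam = rot.inv().apply(pts - delta)  # phi^{-1}(p_f - Delta)
            res.extend(p_cam @ N - N @ N)         # N.p - ||N||^2 for each point
        return np.asarray(res)

    sol = least_squares(residuals, x0=np.zeros(6))
    phi = Rotation.from_rotvec(sol.x[:3]).as_matrix()
    return phi, sol.x[3:]
```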
Step S4: confirm candidate targets based on the candidate target data.
After the candidate target data are obtained, the region containing them must be examined with a vision-based detection method to confirm the candidate targets. In the vision-based detection part, different classifiers (e.g. for human bodies, vehicles, etc.) need to be trained for different targets.
Step S4 further comprises the following steps:
Step S41: divide the training sample image (for example a 64 × 128-pixel human-body training sample) into elementary cells of a predetermined size, for example 8 × 8-pixel blocks;
Step S42: group every m adjacent cells (for example 4 cells adjacent in a 2 × 2 grid) into a region block; divide the gradient directions over (-90°, 90°) evenly, taking every 20° as a basic orientation bin, giving 9 basic orientation bins over the 180° direction range;
Step S43: for each elementary cell, project all its pixels onto the basic orientation bins to build the cell's gradient orientation histogram;
Step S44: concatenate the gradient orientation histograms of the 4 elementary cells contained in each block to obtain a 36-dimensional vector;
Step S45: normalise the 36-dimensional vectors of all region blocks and concatenate them to obtain the HOG feature vector of each training sample.
Fig. 5a is a schematic diagram of the extracted HOG features, reflecting the human contour under two different postures.
Step S46: after the HOG feature vectors of the training sample images have been extracted, train an SVM classification model for target detection on them;
Step S47: for a candidate region of the camera image, judge whether a target is present in the region by scanning a multi-scale image pyramid of the pixel region.
Multi-scale scanning of the pixel region: as shown in Fig. 5b, within a region of 192 × 96 pixels, a basic template of 128 × 64 pixels is moved line by line over the region, and the SVM classifier judges at each template position whether a target is present.
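Steps S41 to S47 can be sketched with off-the-shelf HOG and SVM implementations (scikit-image and scikit-learn here, as an assumption; the patent names no libraries): 8 × 8-pixel cells, 2 × 2-cell blocks, 9 orientation bins, and a 128 × 64 template slid over a candidate region.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_vector(window):
    """128x64 grayscale window -> normalised, concatenated HOG descriptor."""
    return hog(window, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

def train_classifier(pos_windows, neg_windows):
    """Train the SVM classification model of step S46 on HOG vectors."""
    X = np.array([hog_vector(w) for w in pos_windows + neg_windows])
    y = np.array([1] * len(pos_windows) + [0] * len(neg_windows))
    return LinearSVC().fit(X, y)

def scan_region(clf, region, step=8):
    """Slide the 128x64 basic template over `region` (e.g. 192x96 pixels)
    line by line and collect positions the classifier judges to be targets."""
    hits = []
    for top in range(0, region.shape[0] - 128 + 1, step):
        for left in range(0, region.shape[1] - 64 + 1, step):
            window = region[top:top + 128, left:left + 64]
            if clf.predict([hog_vector(window)])[0] == 1:
                hits.append((top, left))
    return hits
```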
Step S5: establish a real-time tracking model and track the candidate targets confirmed in step S4 with it.
The model proposed by the invention starts from a maximum a posteriori probability problem. Suppose the set of observation vectors of all targets in the scene is T = (T_1, T_2, ..., T_n), and the observation vector of each target is T_i = {p_i, o_i, v_i, f_i}, where p_i is the position coordinate of the target, o_i its direction of motion, v_i its velocity, and f_i its external appearance (e.g. colour histogram, oriented gradients, etc.). The estimated trajectory of one target is represented by a set of track fragments S = {s_k}, where s_k denotes a track fragment.
For the given observation vector T, the goal of data association is to maximise the posterior probability P(S|T), that is:

S* = argmax_S P(S|T), (5)

where T is the given observation vector and S is the set of candidate target track fragments.
Because the space of candidate S is very large, formula (5) is difficult to optimise directly. Since one observation can belong to only one track fragment, this property can be used to reduce the search space effectively; the equivalent constraint, that one observation cannot carry several track labels at the same time, can be expressed as:

∀ T_k ∈ T: T_k ∈ s_i and T_k ∈ s_j ⟹ s_i = s_j, (6)

where T_k denotes the observation vector of the k-th target in the scene and T denotes the set of observation vectors of all targets in the scene.
According to the relation between targets and observations, the set of candidate target track fragments S can be divided into two parts: as shown in Fig. 6a, the observed targets are divided into isolated targets (marked with dashed boxes) and joint targets (marked with dash-dotted boxes); Fig. 6b shows the position coordinates of the observed targets under the laser scanner.
Formula (5) can then be rewritten further as formula (8), in which the track-fragment set S_{α+β} is divided into the region S_α of joint-target tracks and the region S_β of isolated-target tracks.
Under the further assumption that the motions of different targets are mutually independent, formula (8) can be rewritten as formula (9), with parameters expressed as follows:
t_{j,i} and t_i are binary 0/1 indicator variables: t_{j,i} = 1 indicates that observation tracks T_j and T_i are connected, and t_{j,i} = 0 that they are not; t_i = 1 indicates that the observation track lies in the joint-track region, and vice versa. C_{j,i} denotes the transfer similarity cost between observation tracks T_j and T_i, where P_sim(·) is the similarity of the two track segments, obtained from the Euclidean distance of the targets' appearance vectors f_i. C_i denotes the depth cost of the observation track, where p_i is the actual depth value.
Formula (9) is the objective equation of the real-time tracking model, and the invention solves it with an improved bipartite matching method; as improved bipartite matching is a conventional solving algorithm in this field, it is not detailed here.
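As the improved bipartite matching itself is not detailed in the patent, the sketch below approximates the association step with the standard assignment solver from SciPy, using the transfer cost C_{j,i}; taking P_sim as exp(-||f_j - f_i||), so that the cost reduces to the Euclidean distance of the appearance vectors, is an illustrative assumption:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_feats, det_feats, max_cost=5.0):
    """track_feats / det_feats: lists of appearance vectors f_i.
    Returns (track, detection) index pairs with acceptable transfer cost."""
    # C_{j,i} = -log P_sim = ||f_j - f_i|| under the assumption above.
    cost = np.array([[np.linalg.norm(t - d) for d in det_feats]
                     for t in track_feats])
    rows, cols = linear_sum_assignment(cost)  # minimum-cost bipartite matching
    # Unmatched detections start new track fragments; unmatched tracks wait.
    return [(j, i) for j, i in zip(rows, cols) if cost[j, i] < max_cost]
```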
Experiments show that the method can track each target accurately across consecutive video frames. Fig. 7 shows detecting and tracking results obtained with the invention; as can be seen, in different traffic scenes the invention still identifies each pedestrian even when pedestrians are occluded repeatedly.
The invention combines the advantages of the laser sensor and the vision sensor well, effectively improving the accuracy and timeliness of single-sensor pedestrian detection and tracking. Laser-point filtering, clustering, and laser-point pattern analysis complete the preliminary localisation of pedestrians; the data registration between the laser scanner and the camera then projects the potential pedestrian positions into the image layer; rapid pedestrian detection then fixes the pedestrian positions and extracts features; finally, data association across consecutive frames achieves real-time pedestrian detection and tracking. The algorithm analyses the surrounding road environment, judges threats to safe driving, raises danger warnings, and realises the safety-assisted driving function.
The specific embodiments described above further explain the object, technical solution, and beneficial effects of the invention. It should be understood that the above are only specific embodiments of the invention and do not limit it; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the invention shall be included within the scope of protection of the invention.
Claims (9)
1. A rapid pedestrian detecting and tracking method based on information fusion, characterised in that the method comprises the following steps:
Step S1: scan the surrounding environment at a fixed frequency with a laser scanner to obtain laser data;
Step S2: filter invalid data out of the laser data to obtain candidate target data;
Step S3: calibrate the coordinate parameters between the laser scanner and the camera to obtain the coordinate transformation parameters between the two coordinate systems;
Step S4: confirm candidate targets based on the candidate target data;
Step S5: establish a real-time tracking model and track the candidate targets confirmed in step S4 with it.
2. The method according to claim 1, characterised in that the laser scanner is a two-dimensional laser scanner.
3. The method according to claim 1, characterised in that, letting the set L = {l_z}, z = 1, ..., Z, denote a group of laser data, where Z is the number of laser data points, step S2 further comprises:
first, convolving the set L with the kernel template [-1, 1] and retaining the data points whose spacing lies within a certain range, to obtain the coarse denoising result set C = {c_n}, n = 1, ..., N, where N is the number of laser data points in the coarse denoising result set;
then, performing a clustering operation on the coarse denoising result set C to obtain a group of candidate target data sets S.
4. The method according to claim 3, characterised in that the clustering operation is specifically: at the start of clustering, each data point is taken as a class and the distance between the new class and the single sample data point is calculated; if the spacing between two adjacent classes c_{n-1} and c_n falls within a certain predetermined threshold range, they are considered to belong to the same class; otherwise they are considered to belong to different classes, and the current single sample data point becomes the starting point of a new class.
5. The method according to claim 1, characterised in that, in step S3, a calibration board is captured simultaneously by the laser scanner and the camera to obtain the rotation and translation matrices between the laser coordinate system and the camera coordinate system.
6. The method according to claim 1, characterised in that step S4 further comprises:
step S41: dividing the training sample image into elementary cells of a predetermined pixel-block size;
step S42: grouping every m adjacent cells into a region block, and dividing the gradient directions evenly to obtain multiple basic orientation bins over the 180° direction range;
step S43: for each elementary cell, projecting all its pixels onto the basic orientations to build the cell's gradient orientation histogram;
step S44: concatenating the gradient orientation histograms of the elementary cells contained in each region block to obtain a vector;
step S45: normalising the vectors of all region blocks and concatenating them to obtain the HOG feature vector of each training sample;
step S46: training an SVM classification model on the extracted HOG feature vectors;
step S47: for a candidate region of the camera image, judging whether a target is present in the region by scanning a multi-scale image pyramid of the pixel region.
7. The method according to claim 1, characterised in that the real-time tracking model is expressed as S* = argmax_S P(S|T), where T is the given observation vector and S is the candidate target set.
8. The method according to claim 7, characterised in that the real-time tracking model is further subject to the constraint that each observation belongs to at most one track fragment, i.e. ∀ T_k ∈ T: T_k ∈ s_i and T_k ∈ s_j ⟹ s_i = s_j, where T_k denotes the observation vector of the k-th target in the scene and T denotes the set of observation vectors of all targets in the scene.
9. The method according to claim 1, characterised in that, in the objective equation of the real-time tracking model, S_α denotes the region of joint-target tracks; S_β denotes the region of isolated-target tracks; t_{j,i} and t_i denote binary indicator variables; C_{j,i} = -log P_sim(T_j^{k-1} | T_i^k) denotes the transfer similarity cost between observation tracks T_j and T_i; and C_i denotes the depth cost of the observation track.
Priority application: CN201510071310.7A, filed 2015-02-11, priority date 2015-02-11.
Publication: CN104751119A, published 2015-07-01. Family ID: 53590776. Status: Pending.
Legal events: publication (C06/PB01); entry into substantive examination (C10/SE01); invention patent application deemed withdrawn after publication (WD01), application publication date 2015-07-01.