CN101975951A - Field environment barrier detection method fusing distance and image information - Google Patents
- Publication number: CN101975951A (application CN201010195586)
- Authority
- CN
- China
- Prior art keywords
- laser radar
- coordinate system
- point
- zone
- axle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention belongs to the technical field of obstacle detection, and particularly relates to a field environment obstacle detection method fusing distance and image information. Its aim is to provide a method for detecting the obstacles commonly encountered when an unmanned vehicle drives in the field, so as to facilitate planning of the vehicle's driving route and enhance the unmanned vehicle's autonomous field-driving capability. The method comprises the following steps: establishing a mathematical model; detecting obstacles in the laser radar distance data; processing the camera image; and fusing the results. With this method, obstacles commonly encountered under field driving conditions, such as grassland, roads, trees and bushes, can be detected and recognized, and the traversable region is color-modeled so that anomalous parts of it can be further detected. The environment around the vehicle can be partitioned into non-traversable regions, traversable regions, unknown regions and the like, which facilitates planning of the vehicle's driving route and enhances the unmanned vehicle's autonomous field-driving capability.
Description
Technical field
The invention belongs to the technical field of obstacle detection, and specifically relates to a field environment obstacle detection method fusing distance and image information.
Background technology
Obstacle detection for an unmanned vehicle on its route is an important component of research on environment perception. Sensors commonly used for obstacle detection include laser radar, cameras, millimeter-wave radar and ultrasonic sensors. Laser radar measures distance from the time difference between the emitted light and the light reflected from an object's surface. It obtains range data directly, providing the vehicle with convenient and intuitive information for describing the environment. Laser radar has many concrete applications, for example object localization and tracking, environment modeling and obstacle avoidance, simultaneous localization and mapping (SLAM), and classification of terrain and morphological features; the echo intensity information of the laser radar can also be used for obstacle detection and tracking. Cameras obtain rich physical information about the surrounding environment and have good covertness, so they have been widely used; there is much prior work on both single-camera and multi-camera systems. However, camera images are easily affected by illumination, smoke and the environment, and although the stereo vision formed by multiple cameras can obtain actual distances, the matching between images is complex and time-consuming, which limits the practical effect. Millimeter-wave radar is small and light, can measure distance, and penetrates fog, smoke and dust well, but its resolution and accuracy are low; it is mainly used in automotive anti-collision and brake control systems, and in distance-measurement-based vehicle-following systems for mitigating rear-end collisions. Ultrasonic sensors measure range from the time taken for an emitted ultrasonic wave to reach an obstacle and be reflected back to the receiver. They are small and cheap and therefore quite widely used, but their low accuracy restricts them to occasions with modest requirements on environment perception accuracy.
Laser radar scan data and optical images describe the environment in strongly complementary ways: laser scan data can rapidly and accurately provide dense three-dimensional coordinates of object surfaces, while optical images contain rich color information. Fusing laser scan data with optical images therefore yields more comprehensive information about the unmanned vehicle's driving environment, improving the speed of obstacle detection and the adaptability to complex environments. Many scholars have studied the fusion of distance and image information. Peter fused monocular camera images with sonar ranging data indoors and built an indoor environment model, but the approach is unsuitable for complex outdoor field environments. Tsai-Hong Hong of the University of Maryland, testing the field-driving requirements of unmanned vehicles in the U.S. Demo III program, proposed an obstacle detection algorithm fusing three-dimensional laser radar data with camera images, but it only recognizes three typical road conditions, namely road signs, ponds and roads, and does not address other common obstacles in the environment such as grassland and trees. Monteiro used fused laser radar and camera information to detect pedestrians and vehicles moving on the road, without recognizing and understanding the surrounding environment. Xiang Zhiyu proposed an obstacle detection method in tall grass that fuses laser radar and camera information: the laser radar data are first grouped to determine obstacle points, which are then mapped into the camera image to segment the image into non-traversable and traversable regions, reducing the misjudgments that occur with laser-radar-only discrimination and improving the completeness of the final obstacle contours. However, when the color of an obstacle is very close to that of the grass, the fusion detection result degrades; moreover, the method only divides the scene into non-traversable and traversable regions, lacking a more specific understanding of the environment. In summary, the prior art lacks an obstacle detection method that can detect the obstacles commonly encountered by an unmanned vehicle under field driving conditions, so as to facilitate planning of the vehicle's driving route and improve the unmanned vehicle's autonomous field-driving capability.
Summary of the invention
The purpose of the invention is to provide a field environment obstacle detection method fusing distance and image information that can detect the obstacles commonly encountered by an unmanned vehicle under field driving conditions, so as to facilitate planning of the vehicle's driving route and improve the unmanned vehicle's autonomous field-driving capability.
The invention is achieved as follows:
A field environment obstacle detection method fusing distance and image information comprises the following steps:
The first step: establish the mathematical model;
The second step: laser radar range data obstacle detection;
The third step: camera image processing;
The fourth step: result fusion.
In the aforesaid step of establishing the mathematical model, coordinate systems are established for the vehicle body, the laser radar, the camera and the inertial navigation system, and these coordinate systems are related to one another. Specifically, the step comprises:
(1) Establish the vehicle-body coordinate system O_bX_bY_bZ_b;
The origin O_b is located at a fixed point of the vehicle body, namely the projection of the laser radar light source onto the ground plane; the X_b axis points to the right of the vehicle, the Y_b axis points straight ahead, and the Z_b axis points upward. The height of the coordinate origin above the ground is h_0.
(2) Establish the laser radar coordinate system O_lX_lY_lZ_l;
The origin O_l is defined at the center of the laser radar light source; the X_l axis points vertically upward, the Y_l axis points horizontally to the right, and the Z_l axis points toward the front of the vehicle;
(3) Establish the camera coordinate system O_cX_cY_cZ_c;
The origin O_c is located at the focal point of the camera lens; the X_c axis is parallel to the horizontal direction of the image plane and points to the right; the Y_c axis is parallel to the vertical direction of the image plane and points downward; the Z_c axis is perpendicular to the image plane and points straight ahead;
(4) Establish the inertial navigation coordinate system O_IX_IY_IZ_I;
The inertial navigation coordinate system has the same orientation as the vehicle-body coordinate system;
(5) Determine the transformations between coordinate systems, specifically comprising the following steps:
(a) Transform from the laser radar coordinate system O_lX_lY_lZ_l to the vehicle-body coordinate system O_bX_bY_bZ_b;
First rotate 90° about the Y_l axis, then rotate 90° about the Z_l axis, and then translate downward by the distance h_0 along the Z_l axis; the parameters of the corresponding transformation formula have the meanings given above.
(b) Transform from the camera coordinate system to the image coordinate system;
According to the standard camera pinhole model, three-dimensional coordinates in the camera coordinate system are converted into pixel locations on the image, where A is the camera intrinsic-parameter matrix:

        | α_x  γ   u_0 |
    A = |  0  α_y  v_0 |
        |  0   0    1  |

α_x and α_y are respectively the scale factors of the u and v axes of the image coordinate system, (u_0, v_0) is the optical center, and γ is the non-perpendicularity factor of the u and v axes; in many cases γ = 0 is used. The other parameters are as defined above.
(c) Transform from the laser radar coordinate system O_lX_lY_lZ_l to the camera coordinate system O_cX_cY_cZ_c;
Here R and t are respectively the rotation matrix and the translation vector from the laser radar coordinate system to the camera coordinate system; the other parameters are as defined above.
(d) Correct the range data;
Let the vehicle pitch angle output by the inertial navigation equipment be α and the roll angle be γ; the corrected range data are obtained from the corresponding formula, whose parameters have the meanings given above.
The aforesaid laser radar range data obstacle detection step comprises the following steps:
(1) Range data preprocessing;
The laser radar range data are filtered;
(2) Region segmentation;
The three-dimensional point cloud obtained by the laser radar scanning the environment is segmented into a plurality of regions;
(3) Region recognition;
Grassland, roads, trees and bushes are analyzed and recognized.
The aforesaid region segmentation step comprises the following steps:
(a) Retrieve the k nearest points in the neighborhood of a three-dimensional point p_i, and compute the normal of the point set Q_i formed by these k+1 points as the normal vector of the point;
(b) The coordinates of the three-dimensional point p_i and its normal n_i form a feature vector. The Cartesian distance between two three-dimensional points is ρ_e(p_i, p_j) = ‖p_i − p_j‖, where p_j is another point whose coordinates and normal n_j likewise form a feature vector. The angular distance is ρ_a(n_i, n_j) = 0.5 − 0.5 × ⟨n_i, n_j⟩ / (‖n_i‖ ‖n_j‖). Different thresholds ρ_e,max and ρ_a,max are set for the spatial distance and the angular distance; when either the spatial distance or the angular distance between two points is greater than its threshold, the two points are considered not to belong to the same region;
(c) Compute the spatial distance ρ_e and angular distance ρ_a between all pairs of neighboring points; if ρ_e ≤ ρ_e,max and ρ_a ≤ ρ_a,max, merge the two points into one cluster;
(d) If a point cannot be clustered with any other, create a new region for it;
(e) After all points have been clustered, check the number of points in each class; if the number is below a threshold n_c,min and the average height of the points in the class is much greater than the height h_max of obstacles the vehicle can cross, the class is noise and is deleted.
In the aforesaid region segmentation step, k takes a value in the range 10 to 30.
The aforesaid region recognition step specifically comprises the following steps:
(a) Compute region features;
Compute the average height, the singular values and the region surface normal vector. The computation is as follows:
Singular values [σ_1 σ_2 σ_3]: form a matrix from the coordinates of all points in the region and compute its singular value decomposition UΣV^T; the elements of Σ are the singular values, sorted in descending order σ_1, σ_2, σ_3.
Region surface normal vector γ: the unit vector in V^T corresponding to the smallest singular value σ_3.
(b) Recognition, comprising the following steps:
1. Grass recognition;
A region is recognized as grass when the following conditions are satisfied:
1) the average height of the points in the region is below the vehicle's obstacle-crossing capability h_max;
2) the penetrability of the points in the region is high, with singular values distributed as σ_1 ≈ σ_2 ≈ σ_3;
3) the region surface normal vector γ points upward.
After grass is recognized, the points are merged into the same region and the region is marked as traversable.
2. Road recognition;
A region is recognized as road when the following conditions are satisfied:
1) the height of the points in the region is below the vehicle's obstacle-crossing capability;
2) the penetrability of the points in the region is low, with σ_1 > σ_2 >> σ_3;
3) the region surface normal vector γ points upward.
3. Tree trunk recognition;
When the singular values are distributed as σ_1 >> σ_2 ≈ σ_3 and the region surface normal vector γ lies in the horizontal direction, the region is recognized as a tree trunk.
4. Bush recognition;
When the singular values satisfy σ_1 > σ_2 > σ_3 and the region surface normal vector γ also lies in the horizontal direction, the region is recognized as bushes.
In the aforesaid recognition step, when the ratio σ_1/σ_2 of the singular values σ_1 and σ_2 lies in [1, 4], σ_1 ≈ σ_2 is considered to hold; when the ratio σ_1/σ_2 is greater than 50, σ_1 >> σ_2 is considered to hold.
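As an illustration, the singular-value criteria above can be sketched as follows. This is a sketch under stated assumptions: only the ratio bounds [1, 4] and 50 come from the text; the height threshold `h_max`, the normal-direction test on the vertical component, the function names and the branch order are illustrative choices.

```python
import numpy as np

def region_features(points):
    """Singular values (descending) and surface normal of an (m, 3) point region."""
    centered = points - points.mean(axis=0)
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    return s, vt[2]          # vt[2]: direction of the smallest singular value

def classify_region(points, h_max=0.3, up_thresh=0.8):
    """Rule-based labels from the patent's singular-value criteria.

    h_max (obstacle-crossing height) and up_thresh (|n_z| above which the
    normal counts as 'upward') are illustrative values, not from the patent.
    """
    s, n = region_features(points)
    s = np.maximum(s, 1e-9)                  # guard against exact-zero singular values
    mean_h = points[:, 2].mean()
    up = abs(n[2]) > up_thresh               # normal roughly vertical
    horizontal = abs(n[2]) < (1 - up_thresh) # normal roughly horizontal
    r12, r23 = s[0] / s[1], s[1] / s[2]
    if mean_h < h_max and up and r12 <= 4 and r23 <= 4:
        return "grass"       # high penetrability: sigma1 ~ sigma2 ~ sigma3
    if mean_h < h_max and up and r23 > 50:
        return "road"        # planar: sigma3 << sigma2
    if horizontal and r12 > 50 and r23 <= 4:
        return "trunk"       # linear: sigma1 >> sigma2 ~ sigma3
    if horizontal and r12 > 1 and r23 > 1:
        return "bush"        # catch-all for remaining horizontal-normal regions
    return "unknown"
```

For example, a flat low patch of points satisfies the road rule (σ_2/σ_3 large, normal up), while a vertical line of points satisfies the trunk rule (σ_1/σ_2 large, normal horizontal).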
The aforesaid camera image processing step specifically comprises the following steps:
(1) Transform the laser radar data of the segmented regions into the image coordinate system of the camera;
(2) Further examine the preliminarily judged traversable region;
Specifically, this comprises the following steps:
(a) Establish a Gaussian mixture model;
Suppose the sequence of pixels in the image is {x_1, x_2, …, x_i}, where the value of the i-th pixel is x_i = [R_i, G_i, B_i]. The probability of this pixel is

P(x_i) = Σ_{k=1}^{K} ω_{k,i} · η(x_i, μ_{k,i}, Σ_{k,i})

where K is the number of Gaussian distributions, chosen between 3 and 5; ω_{k,i} is the weight estimate; μ_{k,i} is the mean of the k-th Gaussian distribution; and Σ_{k,i} is its covariance matrix, assumed to take the form Σ_{k,i} = σ_k²·I. The probability density η of pixel i in the k-th Gaussian distribution is computed from the multivariate Gaussian density

η(x, μ, Σ) = (2π)^(−3/2) · |Σ|^(−1/2) · exp(−½ (x − μ)^T Σ^(−1) (x − μ))

where μ_i is the mean of the i-th Gaussian distribution and Σ is the covariance matrix of the Gaussian distribution;
(b) Region identification;
Compare the image of the current region to be judged against the Gaussian model of traversable-region colors by absolute value: if |x_i − f| ≥ T, where f is the model value and T is a threshold, the point is a non-traversable point.
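The color-model test can be sketched as follows. This is a simplified stand-in, not the patent's exact procedure: a crude K-means fit replaces a full EM fit of the mixture, the test compares against the nearest model mean rather than evaluating the mixture probability, and the function names and the threshold `T = 30` are illustrative.

```python
import numpy as np

def fit_color_model(pixels, K=3, iters=20, seed=0):
    """Crude K-means stand-in for the patent's Gaussian-mixture color model.

    pixels: (n, 3) float RGB samples from the traversable region.
    Returns (means, weights, sigmas); a full EM fit would refine these.
    """
    rng = np.random.default_rng(seed)
    means = pixels[rng.choice(len(pixels), K, replace=False)].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(pixels[:, None, :] - means[None], axis=2)
        labels = d.argmin(axis=1)
        for k in range(K):
            if np.any(labels == k):
                means[k] = pixels[labels == k].mean(axis=0)
    weights = np.bincount(labels, minlength=K) / len(pixels)
    # Scalar spread per component, matching the Sigma_k = sigma_k^2 * I assumption.
    sigmas = np.array([pixels[labels == k].std() + 1e-6 if np.any(labels == k)
                       else 1.0 for k in range(K)])
    return means, weights, sigmas

def is_non_traversable(x, means, T=30.0):
    """|x - f| >= T against the closest model mean f, per the patent's test."""
    return np.linalg.norm(means - np.asarray(x, float), axis=1).min() >= T
```

A pixel close to the modeled traversable colors passes; a pixel far from all components is flagged as a non-traversable point.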
The aforesaid result fusion step specifically comprises the following steps:
The discrimination results of the laser radar and the camera are fused to judge traversable and non-traversable regions and to construct the environment map. Combining the discrimination results of the laser radar and the camera, the grid map is divided into traversable and non-traversable regions according to the following principles:
(1) if the laser radar result is non-traversable, the grid cell is marked non-traversable;
(2) if the laser radar result is traversable, the camera result must be consulted:
(a) if the camera result is traversable, the cell is traversable;
(b) if the camera result is non-traversable, the cell is non-traversable;
(3) a region detected by neither the laser radar nor the camera is an unknown region.
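The three principles can be written directly as a per-cell rule. In this sketch, the `Cell` enumeration and the fallback when only the camera reports a result are assumptions not fixed by the text; the three listed rules are implemented as stated.

```python
from enum import Enum

class Cell(Enum):
    UNKNOWN = 0
    TRAVERSABLE = 1
    NON_TRAVERSABLE = 2

def fuse(lidar, camera):
    """Combine the lidar and camera verdicts for one grid cell.

    UNKNOWN stands for 'no detection'. The final fallback (trust the camera
    when only it detects something) is an assumption beyond the three rules.
    """
    if lidar is Cell.NON_TRAVERSABLE:
        return Cell.NON_TRAVERSABLE          # rule (1): lidar veto
    if lidar is Cell.TRAVERSABLE:
        if camera is Cell.NON_TRAVERSABLE:   # rule (2b): camera veto
            return Cell.NON_TRAVERSABLE
        return Cell.TRAVERSABLE              # rule (2a)
    if camera is Cell.UNKNOWN:
        return Cell.UNKNOWN                  # rule (3): neither sensor detected
    return camera
```

Note the asymmetry of the rules: either sensor can veto traversability, but both must agree for a cell to be marked traversable.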
The beneficial effects of the invention are as follows:
The invention can detect and recognize the obstacles commonly encountered by an unmanned vehicle under field driving conditions, such as grassland, roads, trees and bushes, and builds a color model of the traversable region so that anomalous parts of the traversable region can be further detected. The environment around the vehicle can be partitioned into "non-traversable regions", "traversable regions", "unknown regions" and the like, which helps the vehicle plan its driving route and improves the unmanned vehicle's autonomous field-driving capability.
Description of drawings
Fig. 1 is the flow chart of the field environment obstacle detection method fusing distance and image information according to the invention;
Fig. 2 is a schematic diagram of the relative positions of the vehicle-body coordinate system, the laser radar coordinate system and the camera coordinate system of the invention;
Fig. 3 is a schematic diagram of the coordinate conversion from the laser radar coordinate system to the camera coordinate system of the invention.
Embodiment
The field environment obstacle detection method fusing distance and image information of the invention is introduced below with reference to the drawings and embodiments:
As shown in Fig. 1, the field environment obstacle detection method fusing distance and image information comprises the following steps:
The first step: establish the mathematical model;
When building the vehicle sensor model, coordinate systems must be established for the vehicle body, the laser radar, the camera and the inertial navigation system, and these coordinate systems must be related to one another. Specifically, this comprises the following steps:
(1) Establish the vehicle-body coordinate system O_bX_bY_bZ_b;
Establish a vehicle-body coordinate system fixed to the vehicle body, moving with the vehicle. It is defined as follows: the origin O_b is located at a fixed point of the vehicle body, specified as the projection of the laser radar light source onto the ground plane; the X_b axis points to the right of the vehicle, the Y_b axis points straight ahead, and the Z_b axis points upward. The height of the coordinate origin above the ground is h_0.
(2) Establish the laser radar coordinate system O_lX_lY_lZ_l;
The laser radar is mounted on a pan-tilt platform; the laser it emits can only scan two-dimensional data, and three-dimensional scanning is achieved by swinging the platform. The laser radar coordinate system is therefore not fixed to the laser radar alone but treats the radar and the platform together as a whole: the origin O_l is defined at the center of the laser radar light source, the X_l axis points vertically upward, the Y_l axis points horizontally to the right, and the Z_l axis points toward the front of the vehicle.
(3) Establish the camera coordinate system O_cX_cY_cZ_c;
Establish a camera coordinate system fixed to the camera. The origin O_c is located at the focal point of the camera lens; the X_c axis is parallel to the horizontal direction of the image plane and points to the right; the Y_c axis is parallel to the vertical direction of the image plane and points downward; the Z_c axis is perpendicular to the image plane and points straight ahead.
(4) Establish the inertial navigation coordinate system O_IX_IY_IZ_I;
The inertial navigation equipment is fixedly mounted on the vehicle body, and the inertial navigation coordinate system is established with the same orientation as the vehicle-body coordinate system. The installation error of the inertial navigation equipment can be resolved through driving experiments at installation time; after the installation error has been corrected, the attitude angles output by the inertial navigation equipment are exactly the attitude angles of the vehicle body.
When laser radar data are used for obstacle recognition, the vehicle pitch and roll angles sensed by the inertial navigation equipment are needed; by constructing a rotation matrix, the range data are corrected into a coordinate system whose origin lies on the horizontal plane.
The relative positions of the vehicle-body coordinate system, the laser radar coordinate system and the camera coordinate system are shown in Fig. 2.
(5) Determine the transformations between coordinate systems, specifically comprising the following steps:
(a) Transform from the laser radar coordinate system O_lX_lY_lZ_l to the vehicle-body coordinate system O_bX_bY_bZ_b;
First rotate 90° about the Y_l axis, then rotate 90° about the Z_l axis, and then translate downward by the distance h_0 along the Z_l axis to obtain the transformation.
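Under the axis definitions given above (X_l up, Y_l right, Z_l forward; X_b right, Y_b forward, Z_b up), the composite of the two 90° rotations amounts to an axis permutation, which can be sketched as follows. The sign conventions and the sample height `h0 = 1.5` are assumptions for illustration, not values from the text.

```python
import numpy as np

# Axis mapping implied by the definitions: X_l (up) -> Z_b, Y_l (right) -> X_b,
# Z_l (forward) -> Y_b.  This permutation realizes the two 90-degree rotations
# of step (a); the sign conventions are an assumption here.
R_LB = np.array([[0.0, 1.0, 0.0],    # X_b = Y_l
                 [0.0, 0.0, 1.0],    # Y_b = Z_l
                 [1.0, 0.0, 0.0]])   # Z_b = X_l

def lidar_to_body(p_l, h0=1.5):
    """Map a lidar-frame point into the body frame (h0 = sensor height, example value)."""
    p_b = R_LB @ np.asarray(p_l, float)
    p_b[2] += h0   # the lidar origin sits h0 above the body origin on the ground
    return p_b
```

For example, a point 1 m straight ahead of the sensor maps to 1 m ahead of the body origin at the sensor height.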
(b) Transform from the camera coordinate system to the image coordinate system;
According to the standard camera pinhole model, three-dimensional coordinates in the camera coordinate system are converted into pixel locations on the image, where A is the camera intrinsic-parameter matrix:

        | α_x  γ   u_0 |
    A = |  0  α_y  v_0 |
        |  0   0    1  |

α_x and α_y are respectively the scale factors of the u and v axes of the image coordinate system, (u_0, v_0) is the optical center, and γ is the non-perpendicularity factor of the u and v axes; in many cases γ = 0 is used. A can be obtained by the planar-target calibration method proposed by Zhang Zhengyou et al.
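The pinhole conversion can be sketched as follows, with γ = 0 and illustrative intrinsic values for A (these numbers are assumptions, not calibration results from the text):

```python
import numpy as np

def project(point_c, A):
    """Pinhole projection: camera-frame 3-D point -> pixel (u, v).

    s * [u, v, 1]^T = A @ [X_c, Y_c, Z_c]^T, with scale s = Z_c (the depth).
    """
    uvw = A @ np.asarray(point_c, float)
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

# Example intrinsics (illustrative values, gamma = 0 as the text allows).
A = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
```

A point on the optical axis projects to the optical center (u_0, v_0); off-axis points are scaled by focal length over depth.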
According to the planar-target camera calibration method proposed by Zhang Zhengyou et al., the camera photographs the same planar target from more than two different orientations; both the camera and the target may move freely, and the motion parameters need not be known. During calibration the internal parameters of the camera are assumed constant, and only the external parameters change as the target is photographed from different angles. The camera takes images of a checkerboard target at different positions, the corner points of the checkerboard grid are extracted as feature points, and the relations between target points and corresponding image points are established, from which the camera intrinsic parameters and the external parameters of the target relative to the camera coordinate system are computed.
Concretely, using a planar checkerboard as the target, the camera photographs the target at different positions; the grid corner points of the checkerboard are extracted as feature points of the target plane; the relations between feature points on the target plane and the corresponding points on the image plane are established; and the camera intrinsic matrix A and the target external parameters R_c, t_c relative to the camera coordinate system are computed. The calibration process is as follows:
First, solve for the homography matrix H:
Let a feature point on the target plane be M̃ = [X, Y, 1]^T in homogeneous coordinates, and the corresponding point on the image plane be m̃ = [u, v, 1]^T. The mapping between the two is given by formula (4):

s·m̃ = A·[r_c1 r_c2 t_c]·M̃ (4)

In formula (4), s is a scale factor; r_c1 and r_c2 are the first two columns of the rotation matrix r_c that transforms target-plane feature points from the target plane coordinate system to the camera coordinate system, and t_c is the corresponding translation vector; A is the camera intrinsic-parameter matrix

        | α_x  γ   u_0 |
    A = |  0  α_y  v_0 |
        |  0   0    1  |

where α_x and α_y are respectively the scale factors of the u and v axes of the image coordinate system, (u_0, v_0) is the optical center, and γ is the non-perpendicularity factor of the u and v axes; in many cases γ = 0 is used.
Let H = A·[r_c1 r_c2 t_c]; then formula (4) can be abbreviated as formula (5):

s·m̃ = H·M̃ (5)

Let h̄_1, h̄_2, h̄_3 denote the transposes of the three row vectors of H. The mapping between points on the image plane and feature points on the target plane can then be written in the form of formula (6):

    | M̃^T   0^T   −u·M̃^T |   | h̄_1 |
    |                     | · | h̄_2 | = 0 (6)
    | 0^T   M̃^T   −v·M̃^T |   | h̄_3 |

From the n feature points of each captured image, the matrix H_i corresponding to each image, i = 1, 2, …, can be solved by the least squares method.
Second, from the H matrices of all the images obtained, solve for the intermediate vector b:
Let B = A^(−T)·A^(−1); B is symmetric, so it can be written in vector form as formula (8):

b = [B_11 B_12 B_22 B_13 B_23 B_33]^T (8)
Then the relation shown in formula (9) holds:

h_i^T·B·h_j = v_ij^T·b (9)

where h_i = [h_i1 h_i2 h_i3]^T and h_j = [h_j1 h_j2 h_j3]^T are the i-th and j-th column vectors of the homography H of an image, i, j ∈ {1, 2}, and

v_ij = [h_i1·h_j1, h_i1·h_j2 + h_i2·h_j1, h_i2·h_j2, h_i3·h_j1 + h_i1·h_j3, h_i3·h_j2 + h_i2·h_j3, h_i3·h_j3]^T (10)

Each image contributes the two constraints v_12^T·b = 0 and (v_11 − v_22)^T·b = 0; stacking these rows for all images forms the matrix V, a 2n × 6 matrix, where n is the number of images, and the relation shown in formula (11) holds:

Vb = 0 (11)

b can be solved from formula (11).
Finally, the camera intrinsic matrix A can be decomposed from b, and R_c, t_c solved using A^(−1):
Decomposing the camera intrinsic parameters from b is done as follows: reconstruct B from b and use the Cholesky matrix decomposition to obtain A^(−1) as a triangular factor, then invert to obtain A.
The calibrated external parameters of each image are:

r_c1 = λ·A^(−1)·h_1 (12)
r_c2 = λ·A^(−1)·h_2 (13)
r_c3 = r_c1 × r_c2 (14)
t_c = λ·A^(−1)·h_3 (15)

where λ = 1/‖A^(−1)·h_1‖ = 1/‖A^(−1)·h_2‖, so the rotation matrix of the calibrated external parameters is:

R_c = [r_c1 r_c2 r_c3] (16)
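The intrinsic-parameter part of this pipeline, formulas (8) to (11) plus the Cholesky step, can be sketched with synthetic homographies. This is a sketch under the stated formulas; the helper names and the small "normalized-unit" magnitudes of the example are assumptions chosen to keep the numerics well conditioned (real implementations normalize image coordinates for the same reason).

```python
import numpy as np

def v_ij(H, i, j):
    """Row vector v_ij of formula (10), from columns i, j of homography H (0-based)."""
    hi, hj = H[:, i], H[:, j]
    return np.array([hi[0]*hj[0],
                     hi[0]*hj[1] + hi[1]*hj[0],
                     hi[1]*hj[1],
                     hi[2]*hj[0] + hi[0]*hj[2],
                     hi[2]*hj[1] + hi[1]*hj[2],
                     hi[2]*hj[2]])

def intrinsics_from_homographies(Hs):
    """Recover A from >= 3 homographies via Vb = 0 and a Cholesky factorization."""
    V = np.vstack([np.stack([v_ij(H, 0, 1), v_ij(H, 0, 0) - v_ij(H, 1, 1)])
                   for H in Hs])
    _, _, vt = np.linalg.svd(V)
    b = vt[-1]                      # null vector of V (formula 11)
    B = np.array([[b[0], b[1], b[3]],
                  [b[1], b[2], b[4]],
                  [b[3], b[4], b[5]]])
    if B[0, 0] < 0:                 # b is defined up to sign; make B positive definite
        B = -B
    L = np.linalg.cholesky(B)       # B = A^{-T} A^{-1}, so L = A^{-T} (up to scale)
    A = np.linalg.inv(L.T)
    return A / A[2, 2]              # normalize so A[2, 2] = 1
```

Feeding it homographies H = s·A·[r_c1 r_c2 t_c] built from a known A recovers that A, since the scale s cancels in the homogeneous system.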
(c) Transform from the laser radar coordinate system O_lX_lY_lZ_l to the camera coordinate system O_cX_cY_cZ_c;
Here R and t are respectively the rotation matrix and the translation vector from the laser radar coordinate system to the camera coordinate system, computed as follows:
A planar checkerboard target with alternating black and white squares is placed at different positions in front of the camera and the laser radar; the camera and the laser radar are started simultaneously, and visible images and range data are collected and recorded. A series of image-distance pairs is thus obtained. The target serves both for calibrating the camera intrinsic matrix A and for calibrating the external parameters R and t between the camera and the laser radar.
As shown in Fig. 3, taking the i-th target plane as an example: the distance from the origin O_c of the camera coordinate system to the i-th target plane is λ_c,i, with normal direction γ_c,i; the distance from the origin O_l of the three-dimensional laser radar coordinate system to the i-th target plane is λ_l,i, with normal direction γ_l,i. They are computed as γ_c,i = R_3,c,i and λ_c,i = γ_c,i^T·t_c,i, where R_3,c,i denotes the third column of R_c,i, the calibrated rotation of the i-th image, and t_c,i its translation. The normal directions and distances from the camera coordinate origin to all target planes can be written in the following form:

Γ_c = [γ_c,1 γ_c,2 … γ_c,n] (20)
Λ_c = [λ_c,1 λ_c,2 … λ_c,n]^T (21)

where Γ_c is a 3 × n matrix, Λ_c is an n × 1 column vector, and n is the number of distance-image pairs.
The target plane is fitted to the laser radar's scanned target points by least squares, giving the normal direction γ_l,i and the distance λ_l,i from the laser radar coordinate origin to the i-th target plane. Let x_l,i be the m_i × 3 point set of the i-th target plane, where m_i is the number of points on the target plane, and let x̄_l,i be the mean of the point set; applying the singular value decomposition U_l,i·Σ_l,i·V_l,i^T to the transpose of the centered point set (a 3 × m_i matrix) then gives

γ_l,i = U_3,l,i (24)
λ_l,i = γ_l,i^T·x̄_l,i (25)

where U_3,l,i denotes the third column of U_l,i. The normal directions and distances from the laser radar coordinate origin to all target planes can be written in the following form:

Γ_l = [γ_l,1 γ_l,2 … γ_l,n] (26)
Λ_l = [λ_l,1 λ_l,2 … λ_l,n]^T (27)

where Γ_l is a 3 × n matrix, Λ_l is an n × 1 column vector, and n is the number of distance-image pairs. With UΣV^T the singular value decomposition of Γ_l·Γ_c^T, the rotation matrix is

R = VU^T (28)
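Formula (28) is the orthogonal-Procrustes alignment of the two sets of plane normals, which can be sketched as follows (the reflection guard is an added safeguard for noisy data, not something stated in the text):

```python
import numpy as np

def rotation_from_normals(gam_l, gam_c):
    """Solve Gamma_c ~ R @ Gamma_l per formula (28): R = V U^T with
    U S V^T the SVD of Gamma_l @ Gamma_c^T.  gam_l, gam_c: 3 x n normal matrices."""
    u, _, vt = np.linalg.svd(gam_l @ gam_c.T)
    R = vt.T @ u.T
    if np.linalg.det(R) < 0:        # guard against a reflection on degenerate input
        vt[-1] *= -1
        R = vt.T @ u.T
    return R
```

With exact, non-degenerate normal pairs the recovered R equals the true rotation between the two sensor frames.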
(d) Correct the range data;
Let the vehicle pitch angle output by the inertial navigation equipment be α and the roll angle be γ; the corrected range data are then obtained from the corresponding rotation formula.
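A minimal sketch of the correction follows. The rotation order (pitch about the lateral axis, then roll about the longitudinal axis) and the sign conventions are assumptions not fixed by the patent text.

```python
import numpy as np

def correct_points(points, pitch, roll):
    """Rotate lidar points by the vehicle pitch/roll so heights refer to the
    horizontal plane.  Rotation order and sign conventions are assumptions.
    points: (n, 3) array with axes (right, forward, up); angles in radians."""
    ca, sa = np.cos(pitch), np.sin(pitch)
    cg, sg = np.cos(roll), np.sin(roll)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])   # pitch about X (right)
    Ry = np.array([[cg, 0, sg], [0, 1, 0], [-sg, 0, cg]])   # roll about Y (forward)
    return points @ (Rx @ Ry).T
```

With zero angles the data are unchanged; a pitched vehicle has its forward-lying points tilted back onto the horizontal plane.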
The second step: laser radar range data obstacle detection, comprising the following steps:
(1) Range data preprocessing;
Because of laser radar drift, object surface characteristics, vehicle motion and other factors, the ranging data contain a certain amount of noise, so the raw range data should first be preprocessed. The measurement data contain noise points and erroneous measurements caused by disturbances; filtering these points improves both the computation speed and the accuracy of feature detection. In addition, because of the mixed-pixel phenomenon of the laser, the raw data contain isolated data points whose spacing from the preceding and following data points is very large; these are meaningless for environment understanding. Therefore, these scattered and isolated points can be handled by median filtering with a 3 × 3 template.
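The 3 × 3 median filtering can be sketched as follows; this is a plain-numpy version in which border samples are left unchanged, an implementation choice not specified by the text.

```python
import numpy as np

def median_filter_3x3(range_image):
    """3 x 3 median filter over a 2-D lidar range image.

    Replaces each interior sample with the median of its 3 x 3 neighborhood,
    suppressing isolated spikes such as mixed-pixel returns."""
    out = range_image.astype(float).copy()
    h, w = range_image.shape
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            out[r, c] = np.median(range_image[r-1:r+2, c-1:c+2])
    return out
```

An isolated spurious return is replaced by the median of its neighborhood, while smooth surfaces pass through unchanged.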
(2) Region segmentation;
The three-dimensional point cloud obtained by the laser radar scanning the environment is segmented into a plurality of regions, which are then recognized separately.
Several methods for segmenting point cloud data exist in the prior art. One places the three-dimensional coordinates and color of each point together in a six-dimensional vector, defines a criterion merging three-dimensional coordinate distance and color distance, and uses region growing to split out planes of similar color in the scene. Another fits planes to object surfaces and grows plane seeds continuously to segment the point cloud. A third, the radially bounded nearest neighbor (RBNN) clustering method, segments point clouds well; its steps are as follows:
(a) Retrieve the k nearest points in the neighborhood of a three-dimensional point p_i, and compute the normal of the point set Q_i formed by these k+1 points as the normal vector of the point. In this embodiment, k takes a value in the range 10 to 30, for example 10, 20 or 30.
The concrete computation is as follows: apply the singular value decomposition UΣV^T to the points in Q_i, and take the unit vector in V^T corresponding to the smallest value in Σ as the normal n_i of p_i. The singular value decomposition itself is prior art.
(b) The Cartesian distance between two three-dimensional points is defined as ρ_e(p_i, p_j) = ‖p_i − p_j‖, where p_j is another point whose coordinates and normal n_j likewise form a feature vector; the angular distance is ρ_a(n_i, n_j) = 0.5 − 0.5 × ⟨n_i, n_j⟩ / (‖n_i‖ ‖n_j‖). Different thresholds ρ_e,max and ρ_a,max are set for the spatial distance and the angular distance; when either the spatial distance or the angular distance between two points is greater than its threshold, the two points are considered not to belong to the same region.
(c) Compute the spatial distance ρ_e and angular distance ρ_a between all pairs of neighboring points. If ρ_e ≤ ρ_e,max and ρ_a ≤ ρ_a,max, the two points can be merged into one cluster; all neighboring points are merged into regions as far as possible.
(d) If a point cannot be clustered with any other, a new region is created for it.
(e) After all points have been clustered, check the number of points in each class; if the number is below a threshold n_c,min and the average height of the points in the class is much greater than the height h_max of obstacles the vehicle can cross, the class is considered noise and can be deleted. The threshold n_c,min and the obstacle-crossing height h_max are chosen according to actual conditions.
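Steps (a) to (d) can be sketched as follows. This is a brute-force sketch, not the patent's exact procedure: the nearest-neighbor search and the pairwise merge are quadratic, the threshold values are illustrative, and the absolute value on the normal inner product (which removes the sign ambiguity of SVD normals) is an added practical choice.

```python
import numpy as np

def point_normals(points, k=10):
    """Normal of each point from the SVD of its k nearest neighbors (step a)."""
    n_pts = len(points)
    normals = np.empty((n_pts, 3))
    for i in range(n_pts):
        idx = np.argsort(np.linalg.norm(points - points[i], axis=1))[:k + 1]
        q = points[idx] - points[idx].mean(axis=0)
        _, _, vt = np.linalg.svd(q, full_matrices=False)
        normals[i] = vt[2]                  # direction of least variance
    return normals

def cluster(points, normals, rho_e_max=0.5, rho_a_max=0.2):
    """Greedy union-find merge using the two distances of steps (b)-(d).
    Threshold values here are illustrative, not from the patent."""
    parent = list(range(len(points)))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            rho_e = np.linalg.norm(points[i] - points[j])
            dot = abs(normals[i] @ normals[j])   # unsigned: SVD normals have sign ambiguity
            rho_a = 0.5 - 0.5 * dot
            if rho_e <= rho_e_max and rho_a <= rho_a_max:
                parent[find(i)] = find(j)
    return np.array([find(i) for i in range(len(points))])
```

Two flat patches separated by more than ρ_e,max end up in two clusters; points within a patch share one label.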
(3) Region recognition;
An unmanned vehicle driving in a field environment encounters varied terrain such as grassland, roads, trees and bushes, which must be analyzed and recognized. This specifically comprises the following steps:
(a) zoning feature;
Before the identification, regional several characteristic be need calculate, average height, singular value, regional Surface Method vector etc. comprised.Computing method are as follows:
Singular values: [σ_1 σ_2 σ_3]. Form a matrix from the coordinates of all points in the region and perform a singular value decomposition UΣV^T; the elements of Σ are the singular values, sorted in descending order as σ_1, σ_2, σ_3.
Region surface normal: γ, the unit vector in V^T corresponding to the smallest singular value σ_3.
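A sketch of the region-feature computation, assuming the region is given as an (n, 3) array of points with z as height; centering the coordinates before the decomposition is an assumption of this sketch, not stated in the text:

```python
import numpy as np

def region_features(points: np.ndarray):
    """Return (average height, sorted singular values, surface normal)
    for an (n, 3) array of region points; the z column is the height."""
    avg_height = points[:, 2].mean()
    centered = points - points.mean(axis=0)
    _, s, vt = np.linalg.svd(centered)   # s is already in descending order
    normal = vt[-1]                      # unit vector for the smallest sigma
    return avg_height, s, normal
```

For a flat horizontal patch this yields σ_3 ≈ 0 and a normal pointing (up to sign) along z, matching the "surface normal points upward" test used below.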
(b) Identification, comprising the following steps:
1. Grassland identification.
When the laser strikes grassland, the scattered blades of grass make the returned data points very disordered, and ordinary edge-detection methods cannot detect them accurately. The chief difference between grassland and an ordinary road is the density of scan points, i.e. the penetration rate. Sparse grass with few branches and leaves presents more gaps to the laser scan, so the penetration rate is higher; dense bushes with profuse shoots and leaves yield a lower penetration rate; if the scanned surface is hard, the distance values of the scan points are evenly distributed and the penetration rate is approximately zero. When an unmanned vehicle travels in a field environment, grass covering the ground can be driven over, so when grass with a high penetration rate is detected it should be marked as a drivable region.
A region is identified as grassland when the following conditions are satisfied:
1) the average height of points in the region is lower than the vehicle's obstacle-clearing capability h_max;
2) the penetration rate of points in the region is high, and the singular values are distributed as σ_1 ≈ σ_2 ≈ σ_3;
3) the surface normal γ points upward.
Because region segmentation may split points on the grassland into different regions, these points must be merged into the same region once grassland is identified, and the region marked as drivable. Merging here means placing the points of several regions into one region.
2. Road identification.
The chief difference between a road region and a grassland region is the laser beam's penetration capacity. A road surface is comparatively smooth, so the beam strikes the surface and the scan points are evenly distributed. A road is a drivable region.
A region is identified as road when the following conditions are satisfied:
2) the penetration rate of points in the region is low, and σ_1 > σ_2 >> σ_3;
3) the surface normal γ points upward.
3. Tree-trunk identification.
The scan points on a tree trunk are generally distributed over part of a cylindrical surface. A region is identified as a trunk when the singular values follow σ_1 >> σ_2 ≈ σ_3 and the surface normal γ lies in the horizontal direction.
4. Bush identification.
The scan points on bushes are all distributed in a plane. A region is identified as bushes when σ_1 > σ_2 > σ_3 and the surface normal γ also lies in the horizontal direction.
In the present invention, when the ratio σ_1/σ_2 of the singular values σ_1 and σ_2 lies in [1, 4], σ_1 ≈ σ_2 is deemed to hold; when σ_1/σ_2 exceeds 50, σ_1 >> σ_2 is deemed to hold.
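The singular-value rules above can be sketched as a toy classifier. The ratio thresholds ([1, 4] for ≈, above 50 for >>) follow the text; the penetration-rate and surface-normal tests are abstracted into boolean inputs, and all names are illustrative:

```python
def classify_region(s1, s2, s3, normal_up, high_penetration):
    """Classify a region from sorted singular values s1 >= s2 >= s3.

    normal_up / high_penetration stand in for the surface-normal and
    penetration-rate tests described in the text.
    """
    approx = lambda a, b: b > 0 and a / b <= 4         # sigma_a ~ sigma_b
    much_greater = lambda a, b: b == 0 or a / b > 50   # sigma_a >> sigma_b

    if normal_up and high_penetration and approx(s1, s2) and approx(s2, s3):
        return "grassland"
    if normal_up and not high_penetration and much_greater(s2, s3):
        return "road"
    if not normal_up and much_greater(s1, s2) and approx(s2, s3):
        return "trunk"
    if not normal_up:
        return "bushes"
    return "unknown"
```

Note that the trunk test must precede the bush test, since σ_1 >> σ_2 ≈ σ_3 with a horizontal normal also satisfies the looser bush condition.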
The third step: camera image processing, specifically comprising the following steps:
(1) Transform the laser radar data of the segmented regions into the camera's image coordinate system, using the coordinate transformation relations from the first step.
(2) Further examine the preliminarily judged drivable region.
In a field environment, besides drivable roads and grass, there may be puddles, stones, and the like. When the laser radar scans a puddle, part of the laser is not reflected, leaving holes in the scan data; part of the laser may be reflected back, with those points all lying on the horizontal plane. Obstacle identification from laser radar data alone therefore easily mistakes a puddle for a drivable region. In fact, current technology cannot assess a puddle's depth or softness, so once an unmanned vehicle drives in, danger may result. The environment may also contain stones which, though small in volume, jolt the vehicle and reduce its speed. Because the laser radar's resolution is relatively low, its distance data alone cannot depict a stone's shape accurately, so the camera image is used to judge these smaller obstacles.
To separate drivable from non-drivable regions, Gaussian mixture models (GMM) are used to classify color. The color of a drivable region obeys a certain statistical law, and each pixel's color value can be represented by a mixture of several Gaussian distributions. Whenever an image of a new drivable region is obtained, the Gaussian mixture model is updated; if a pixel of the current image matches the model, the point is judged to lie in the drivable region; otherwise it is judged to be a point of some other character.
This specifically comprises the following steps:
(a) Establish the Gaussian mixture model.
Suppose the sequence of pixels in the image is {x_1, x_2, …, x_i}, where the value of the i-th pixel is x_i = [R_i, G_i, B_i]. The probability of this pixel is

P(x_i) = Σ_{k=1}^{K} ω_{k,i} · η(x_i; μ_{k,i}, Σ_{k,i}),

where K is the number of Gaussian distributions, generally chosen as 3 to 5; ω_{k,i} is the weight estimate; μ_{k,i} is the mean of the k-th Gaussian distribution; and Σ_{k,i} is the covariance matrix of the k-th Gaussian distribution, generally assumed to follow Σ_{k,i} = σ_k²·I. The probability density η of pixel i under the k-th Gaussian function is computed by

η(x_i; μ_i, Σ) = (2π)^{−n/2} |Σ|^{−1/2} exp(−½ (x_i − μ_i)^T Σ^{−1} (x_i − μ_i)),

where μ_i is the mean of the i-th Gaussian distribution and Σ is the covariance matrix of the distribution.
The process of building a Gaussian mixture model of drivable-region color features consists of the following steps:
1. Convert the image region to be processed into the chosen color space.
2. Initialize the parameters, including the global drivable-region threshold T, the learning rate α, the number of Gaussian models K, and the initial weights ω_{k,i}.
3. Read the color values of the initially designated drivable road and grassland in the image as the means of the Gaussian mixture model, set the variances to predetermined empirical values, and establish the Gaussian mixture model.
4. Read the image data of the initially designated drivable region and compare each pixel's color value with the existing Gaussian models, judging whether the match condition holds: a pixel x_i matches the k-th model when it lies within a small multiple (commonly 2.5) of that model's standard deviation σ_k of the mean, where x_i is the pixel's color value and σ_k² is the variance of the k-th Gaussian distribution model.
Match: update the parameters and weight of the k-th Gaussian model; the parameters comprise the mean, the variance, and the learning factor:

μ_i = (1 − ρ)μ_{i−1} + ρ·x_i    (34)

σ_i² = (1 − ρ)σ_{i−1}² + ρ·(x_i − μ_i)^T(x_i − μ_i)    (35)

ρ = α·η(x_i | μ_k, σ_k)    (36)

where μ_k and σ_k are respectively the current mean vector and variance of the k-th Gaussian distribution model, α is the learning rate, and ρ is the model's adaptive learning factor, similar in effect to α.
No match: if k < K, add a Gaussian model, taking the new pixel's color value as the mean of the new Gaussian distribution, with empirical values for its variance and weight. If k = K, replace the model of smallest weight with a new Gaussian distribution model, with mean and variance set in the same way.
The weight ω_k is updated by

ω_{k,i} = (1 − α)ω_{k,i−1} + α·M_{k,i}    (37)

where ω_{k,i} is the current weight, α is the learning rate, ω_{k,i−1} is the previous weight, and M_{k,i} is the match indicator: M_{k,i} = 1 on a match and M_{k,i} = 0 otherwise.
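A compact sketch of this online update for a single pixel stream, assuming scalar-variance (isotropic) Gaussians as in the text; the 2.5σ match test, the new-model variance, and all names are illustrative assumptions:

```python
import numpy as np

def gaussian_density(x, mu, var):
    """Isotropic Gaussian density eta(x; mu, var*I) for an RGB vector."""
    d = x - mu
    return np.exp(-0.5 * d @ d / var) / np.sqrt((2 * np.pi * var) ** 3)

def update_gmm(models, x, alpha=0.2, match_sigmas=2.5):
    """One Stauffer-Grimson style update step.

    models: list of dicts with keys 'mu', 'var', 'w'.
    Returns True if x matched an existing model, False otherwise.
    """
    matched = None
    for m in models:
        if np.linalg.norm(x - m['mu']) <= match_sigmas * np.sqrt(m['var']):
            matched = m
            break
    for m in models:                                        # eq. (37)
        M = 1.0 if m is matched else 0.0
        m['w'] = (1 - alpha) * m['w'] + alpha * M
    if matched is not None:
        rho = alpha * gaussian_density(x, matched['mu'], matched['var'])
        matched['mu'] = (1 - rho) * matched['mu'] + rho * x     # eq. (34)
        d = x - matched['mu']
        matched['var'] = (1 - rho) * matched['var'] + rho * (d @ d)  # eq. (35)
        return True
    # no match: replace the lowest-weight model with a new one centered on x
    weakest = min(models, key=lambda m: m['w'])
    weakest['mu'], weakest['var'], weakest['w'] = x.copy(), 100.0, alpha
    return False
```

In use, every pixel of a newly designated drivable region is fed through `update_gmm`; pixels that return False are candidates for "other character" points.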
(b) Region identification.
Compare the image of the current region under examination with the established Gaussian model of drivable-region color by absolute value: if |x_i − f| ≥ T, where T is a threshold chosen according to actual needs, the point is judged to be a non-drivable region point. A corresponding Gaussian mixture model is also established for the points of such a region, by the same process as for the drivable region, and added to the knowledge base for later analysis.
The fourth step: result fusion, specifically comprising the following steps:
The discrimination results of the laser radar and the camera are fused to judge drivable and non-drivable regions and construct an environment map.
Vivid, easy-to-use grid maps are commonly used to model the ground environment. The grid map was first proposed by Elfes and Moravec and has developed continuously since. Existing grid maps fall broadly into two classes: two-dimensional and three-dimensional. Each cell of a two-dimensional grid map can only represent an occupied/empty state, but is convenient to use; a cell of a three-dimensional grid map can also represent other attributes of the cell, such as height, and thus expresses the terrain more finely. When constructing a grid map, the terrain resolution is closely tied to the cell size: requiring high resolution means shrinking the cells, which increases their number and the computational complexity, while reducing the number of cells means enlarging them and lowering the resolution. The construction of a grid map must therefore weigh factors such as the unmanned vehicle's driving capability, the sensor resolution, the terrain characteristics of the environment, and the system's computing power.
Combining the discrimination results of the laser radar and the camera, the grid map is divided into drivable and non-drivable regions, which makes path planning more convenient for the unmanned vehicle; a two-dimensional grid map is therefore built. The principles of the division are:
(1) if the laser radar's result is non-drivable, the grid cell is marked non-drivable;
(2) if the laser radar's result is drivable, the camera's result is consulted:
(a) if the camera's result is drivable, the cell is drivable;
(b) if the camera's result is non-drivable, the cell is non-drivable;
(3) if a region is detected by neither the laser radar nor the camera, it is an unknown region.
The detailed process of building the two-dimensional grid map is as follows:
(1) divide the grid cells in the vehicle-body coordinate system;
(2) mark the drivable, non-drivable, and unknown regions identified after fusing the laser radar data and the camera.
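The division principles (1)–(3) amount to a small per-cell truth table; a sketch (labels and function name are illustrative, and the behavior when the lidar is drivable but the camera gave no verdict is an assumption):

```python
from enum import Enum

class Cell(Enum):
    UNKNOWN = 0
    DRIVABLE = 1
    NON_DRIVABLE = 2

def fuse_cell(lidar, camera):
    """Fuse the per-cell lidar and camera verdicts (None = not observed)."""
    if lidar == Cell.NON_DRIVABLE:
        return Cell.NON_DRIVABLE                 # principle (1)
    if lidar == Cell.DRIVABLE:
        # principle (2): defer to the camera's verdict
        if camera == Cell.NON_DRIVABLE:
            return Cell.NON_DRIVABLE
        return Cell.DRIVABLE
    return Cell.UNKNOWN                          # principle (3)
```

Applying `fuse_cell` over every cell of the two-dimensional grid yields the final map used for path planning.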
A field environment obstacle detection method fusing distance and image information according to the present invention is described below with a specific embodiment.
Start the three-dimensional laser radar and scan the surrounding scene, obtaining a set of distance data describing the surrounding environment; simultaneously collect the attitude information of the inertial navigation equipment at that moment.
The first step: establish the mathematical model.
Establish the coordinate systems according to the steps above; according to the pitch and attitude angles output by the inertial navigation equipment, correct this set of distance data into a coordinate system whose origin lies on the horizontal plane, and then transform it into the vehicle-body coordinate system.
The second step: laser radar distance data obstacle detection.
(1) Data pre-processing.
Traverse each point in the scanned distance data in turn, applying a 3 × 3 template to median-filter the data.
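The pre-processing step (median filtering with a 3 × 3 template) might look like the following, assuming the scan is arranged as a 2-D range image (rows = scan lines, columns = angular samples); edge padding is an assumption of this sketch:

```python
import numpy as np

def median_filter3(ranges: np.ndarray) -> np.ndarray:
    """Apply a 3x3 median filter to a 2-D array of lidar range values."""
    padded = np.pad(ranges, 1, mode='edge')
    out = np.empty_like(ranges, dtype=float)
    for i in range(ranges.shape[0]):
        for j in range(ranges.shape[1]):
            out[i, j] = np.median(padded[i:i + 3, j:j + 3])
    return out
```

The filter suppresses isolated range spikes (single-point dropouts or glints) while preserving genuine range discontinuities better than a mean filter would.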
(2) Region segmentation.
Segment the distance data using the region segmentation method, with parameters k = 20, n_c,min = 150, h_max = 0.1 m, ρ_e,max = 0.04 m, and ρ_a,max = 0.011. Several regions are obtained after segmentation.
(3) Region identification.
Compute the region features, including the average height, the singular values [σ_1 σ_2 σ_3], and the region surface normal γ.
According to the feature quantities of each region, identify which kind of terrain it is: grassland, trees, road, bushes, etc. Grassland and road can be considered drivable regions, while trees and bushes are obstacles. The parameter is set to h_max = 0.1.
Start the camera and capture an image. Transform the distance data of the drivable region into the camera coordinate system, and obtain the region's corresponding color information from the camera image.
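Projecting a lidar point into the image combines the extrinsic transform (R, t) from the laser radar frame to the camera frame with the pinhole intrinsic matrix A described in claim 2; a sketch, with the calibration values in the test being illustrative only:

```python
import numpy as np

def lidar_to_pixel(p_lidar, R, t, A):
    """Project a 3-D lidar point into pixel coordinates (u, v).

    p_lidar: point in the lidar frame; R, t: rotation matrix / translation
    vector into the camera frame; A: 3x3 intrinsic matrix
    [[ax, g, u0], [0, ay, v0], [0, 0, 1]].
    """
    p_cam = R @ p_lidar + t          # lidar frame -> camera frame
    uvw = A @ p_cam                  # pinhole projection
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```

Points with non-positive depth (uvw[2] ≤ 0) lie behind the camera and should be discarded before lookup of the pixel's color.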
The third step: camera image processing.
Analyze the camera image and establish the Gaussian mixture model of the drivable region, with parameters K = 5, initial weight ω = 0.05, global drivable-region threshold T = 0.7, and learning rate α = 0.2.
The fourth step: result fusion.
Fuse the recognition results of the distance data and the camera image. According to the Gaussian mixture model, further analyze the image, identify obstacles in the drivable region, and add new obstacles to the knowledge base.
Combining the drivable, non-drivable, and unknown regions identified after fusing the laser radar data and the camera, assign the corresponding attribute to each grid cell and build the grid map.
Claims (9)
1. A field environment obstacle detection method fusing distance and image information, comprising the steps of:
the first step: establishing a mathematical model;
the second step: laser radar distance data obstacle detection;
the third step: camera image processing;
the fourth step: result fusion.
2. The field environment obstacle detection method fusing distance and image information according to claim 1, characterized in that: in the step of establishing the mathematical model, vehicle-body, laser radar, camera, and inertial navigation coordinate systems are established and interrelated, specifically comprising the steps of:
(1) establishing the vehicle-body coordinate system O_b X_b Y_b Z_b:
the origin O_b is located at a fixed point of the vehicle body, namely the projection of the laser radar light source onto the plane coinciding with the ground; the X_b axis points directly to the right of the vehicle body, the Y_b axis points directly ahead, and the Z_b axis points upward; the height of the vehicle-body coordinate origin above the ground is h_0;
(2) establishing the laser radar coordinate system O_l X_l Y_l Z_l:
the origin O_l is located at the center of the laser radar light source; the X_l axis points straight up, the Y_l axis points horizontally to the right, and the Z_l axis points ahead of the vehicle;
(3) establishing the camera coordinate system O_c X_c Y_c Z_c:
the origin O_c is located at the focal point of the camera lens; the X_c axis is parallel to the horizontal direction of the image plane and points to the right; the Y_c axis is parallel to the vertical direction of the image plane and points downward; the Z_c axis is perpendicular to the image plane and points straight ahead;
(4) establishing the inertial navigation coordinate system O_I X_I Y_I Z_I:
the inertial navigation coordinate system is aligned with the vehicle-body coordinate system;
(5) determining the coordinate transformation relations, specifically comprising the steps of:
(a) transforming from the laser radar coordinate system O_l X_l Y_l Z_l to the vehicle-body coordinate system O_b X_b Y_b Z_b:
first rotate 90° about the Y_l axis, then rotate 90° about the Z_l axis, then translate downward by the distance h_0 along the Z_l axis; the meanings of the parameters in the formula are as above;
(b) transforming the camera coordinate system to the image coordinate system:
according to the existing camera pinhole model, convert the three-dimensional coordinates in the image coordinate system to pixel positions on the image, where A is the camera intrinsic matrix, α_x and α_y are respectively the scale factors of the u axis and v axis of the image coordinate system, (u_0, v_0) is the optical center, and γ is the non-perpendicularity factor of the u and v axes, which in many cases is set to γ = 0; the other parameters in the formula are as above;
(c) transforming from the laser radar coordinate system O_l X_l Y_l Z_l to the camera coordinate system O_c X_c Y_c Z_c:
where R and t are respectively the rotation matrix and translation vector from the laser radar coordinate system to the camera coordinate system; the other parameters in the formula are as above;
(d) correcting the distance data:
supposing the vehicle pitch angle output by the inertial navigation equipment is α and the roll angle is γ, the corrected distance data are obtained by the corresponding formula; the parameters in the formula are as above.
3. The field environment obstacle detection method fusing distance and image information according to claim 1, characterized in that the laser radar distance data obstacle detection step comprises the steps of:
(1) distance data pre-processing: filtering the laser radar distance data;
(2) region segmentation: segmenting the three-dimensional point cloud obtained by the laser radar scanning the environment into a plurality of regions;
(3) region identification: analyzing and identifying grassland, roads, trees, and bushes.
4. The field environment obstacle detection method fusing distance and image information according to claim 3, characterized in that the region segmentation step comprises the steps of:
(a) retrieving the k points nearest to a given three-dimensional point p_i and computing the normal of the point set Q_i formed by these k+1 points as the normal vector of the scan point;
(b) forming a feature vector from the coordinate of the three-dimensional point p_i and its normal n_i; the Cartesian distance between two three-dimensional points is ρ_e(p_i, p_j) = ||p_i − p_j||, where p_j is the coordinate of the other point, which together with its normal n_j likewise forms a feature vector; the angular distance is ρ_a(n_i, n_j) = 0.5 − 0.5·⟨n_i, n_j⟩ / (||n_i|| ||n_j||); separate thresholds ρ_e,max and ρ_a,max are set for the spatial and angular distances, and when the spatial distance or angular distance between two points exceeds its threshold, the two points are deemed not to belong to the same region;
(c) computing the spatial distance ρ_e and the angular distance ρ_a for all mutually adjacent points; if ρ_e ≤ ρ_e,max and ρ_a ≤ ρ_a,max, merging the two points' clusters;
(d) if a point cannot be clustered with any other, creating a new region;
(e) after all points are clustered, checking the number of points in each class; if the count is below a threshold n_c,min and the average height of the points in the class is far greater than the height h_max of obstacles the vehicle can clear, the class is noise and is deleted.
5. The field environment obstacle detection method fusing distance and image information according to claim 3, characterized in that in the region segmentation step, the value of k ranges from 10 to 30.
6. The field environment obstacle detection method fusing distance and image information according to claim 3, characterized in that the region identification step specifically comprises the steps of:
(a) computing the region features: the average height, the singular values, and the region surface normal, as follows:
average height: the mean of the heights of all points in the region, where n is the number of points in the region and i is the index of a point;
singular values: [σ_1 σ_2 σ_3]; forming a matrix from the coordinates of all points in the region and performing a singular value decomposition UΣV^T, the elements of Σ being the singular values, sorted in descending order as σ_1, σ_2, σ_3;
region surface normal: γ, the unit vector in V^T corresponding to the smallest singular value σ_3;
(b) identification, comprising the steps of:
1. grassland identification: a region is identified as grassland when the following conditions are satisfied:
1) the average height of points in the region is lower than the vehicle's obstacle-clearing capability h_max;
2) the penetration rate of points in the region is high, and the singular values are distributed as σ_1 ≈ σ_2 ≈ σ_3;
3) the surface normal γ points upward;
after grassland is identified, these points are merged into the same region and the region is marked drivable;
2. road identification: a region is identified as road when the following conditions are satisfied:
2) the penetration rate of points in the region is low, and σ_1 > σ_2 >> σ_3;
3) the surface normal γ points upward;
3. trunk identification: a region is identified as a trunk when the singular values follow σ_1 >> σ_2 ≈ σ_3 and the surface normal γ lies in the horizontal direction;
4. bush identification: a region is identified as bushes when σ_1 > σ_2 > σ_3 and the surface normal γ also lies in the horizontal direction.
7. The field environment obstacle detection method fusing distance and image information according to claim 6, characterized in that in the identification step, when the ratio σ_1/σ_2 of the singular values σ_1 and σ_2 lies in [1, 4], σ_1 ≈ σ_2; when the ratio σ_1/σ_2 exceeds 50, σ_1 >> σ_2.
8. The field environment obstacle detection method fusing distance and image information according to claim 1, characterized in that the camera image processing step specifically comprises the steps of:
(1) transforming the laser radar data of the segmented regions into the camera's image coordinate system;
(2) further examining the preliminarily judged drivable region, specifically comprising the steps of:
(a) establishing a Gaussian mixture model: supposing the sequence of pixels in the image is {x_1, x_2, …, x_i}, the value of the i-th pixel being x_i = [R_i, G_i, B_i], the probability of this pixel is P(x_i) = Σ_{k=1}^{K} ω_{k,i}·η(x_i; μ_{k,i}, Σ_{k,i}), where K is the number of Gaussian distributions, chosen as 3 to 5; ω_{k,i} is the weight estimate; μ_{k,i} is the mean of the k-th Gaussian distribution; Σ_{k,i} is the covariance matrix of the k-th Gaussian distribution, assumed to follow Σ_{k,i} = σ_k²·I; the probability density of pixel i under the k-th Gaussian function is computed by the Gaussian density, with μ_i the mean of the i-th Gaussian distribution and Σ the covariance matrix of the distribution;
(b) region identification: comparing the image of the current region under examination with the established Gaussian model of drivable-region color by absolute value; if |x_i − f| ≥ T, where T is a threshold, the point is a non-drivable region point.
9. The field environment obstacle detection method fusing distance and image information according to claim 1, characterized in that the result fusion step specifically comprises the steps of:
fusing the discrimination results of the laser radar and the camera to judge drivable and non-drivable regions and construct an environment map; combining the discrimination results of the laser radar and the camera, the grid map is divided into drivable and non-drivable regions, the principles of the division being:
(1) if the laser radar's result is non-drivable, the grid cell is marked non-drivable;
(2) if the laser radar's result is drivable, the camera's result is consulted:
(a) if the camera's result is drivable, the cell is drivable;
(b) if the camera's result is non-drivable, the cell is non-drivable.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 201010195586 CN101975951B (en) | 2010-06-09 | 2010-06-09 | Field environment barrier detection method fusing distance and image information |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101975951A true CN101975951A (en) | 2011-02-16 |
CN101975951B CN101975951B (en) | 2013-03-20 |
Family
ID=43575853
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 201010195586 Expired - Fee Related CN101975951B (en) | 2010-06-09 | 2010-06-09 | Field environment barrier detection method fusing distance and image information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101975951B (en) |
Cited By (99)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102338621A (en) * | 2011-04-27 | 2012-02-01 | 天津工业大学 | Method for detecting height of obstacle for indoor visual navigation |
CN102538766A (en) * | 2011-12-21 | 2012-07-04 | 武汉科技大学 | Obstacle test method for active intelligent vehicle |
CN102779280A (en) * | 2012-06-19 | 2012-11-14 | 武汉大学 | Traffic information extraction method based on laser sensor |
CN103366154A (en) * | 2012-04-09 | 2013-10-23 | 通用汽车环球科技运作有限责任公司 | Reconfigurable clear path detection system |
CN103514427A (en) * | 2012-06-15 | 2014-01-15 | 株式会社理光 | A vehicle detection method and a system |
CN103884281A (en) * | 2014-03-18 | 2014-06-25 | 北京控制工程研究所 | Patrol device obstacle detection method based on initiative structure light |
CN103908346A (en) * | 2012-12-31 | 2014-07-09 | 复旦大学 | High-precision automatic neuronavigation space registration method |
CN104482872A (en) * | 2014-12-31 | 2015-04-01 | 中联重科股份有限公司 | Curb boundary detection method and device based on parallel structured light and engineering machine |
CN104503449A (en) * | 2014-11-24 | 2015-04-08 | 杭州申昊科技股份有限公司 | Positioning method based on environment line features |
CN104574376A (en) * | 2014-12-24 | 2015-04-29 | 重庆大学 | Anti-collision method based on joint verification of binocular vision and laser radar in congested traffic |
CN104833364A (en) * | 2015-05-07 | 2015-08-12 | 苏州天鸣信息科技有限公司 | Safe route indicating method for bumpy roads |
CN104899855A (en) * | 2014-03-06 | 2015-09-09 | 株式会社日立制作所 | Three-dimensional obstacle detection method and apparatus |
CN104933708A (en) * | 2015-06-07 | 2015-09-23 | 浙江大学 | Barrier detection method in vegetation environment based on multispectral and 3D feature fusion |
CN105182358A (en) * | 2014-04-25 | 2015-12-23 | 谷歌公司 | Methods and systems for object detection using laser point clouds |
CN105205859A (en) * | 2015-09-22 | 2015-12-30 | 王红军 | Similarity measurement method of environmental characteristics based on three-dimensional raster map |
CN105204510A (en) * | 2015-10-09 | 2015-12-30 | 福州华鹰重工机械有限公司 | Generation method and device for probability map for accurate positioning |
CN105301577A (en) * | 2015-10-10 | 2016-02-03 | 福州华鹰重工机械有限公司 | Laser strength calibration method and laser strength calibration device |
CN105512641A (en) * | 2015-12-31 | 2016-04-20 | 哈尔滨工业大学 | Method for using laser radar scanning method to calibrate dynamic pedestrians and vehicles in video in snowing or raining state |
CN105571599A (en) * | 2015-12-21 | 2016-05-11 | 小米科技有限责任公司 | Road condition information processing method and device |
CN105818763A (en) * | 2016-03-09 | 2016-08-03 | 乐卡汽车智能科技(北京)有限公司 | Method, device and system for confirming distance of object around vehicle |
CN105867408A (en) * | 2014-11-07 | 2016-08-17 | 现代摩比斯株式会社 | Apparatus and method for determining available driving space |
CN105893931A (en) * | 2015-02-16 | 2016-08-24 | 松下知识产权经营株式会社 | Object detection apparatus and method |
CN105956527A (en) * | 2016-04-22 | 2016-09-21 | 百度在线网络技术(北京)有限公司 | Method and device for evaluating barrier detection result of driverless vehicle |
CN106164698A (en) * | 2014-03-31 | 2016-11-23 | 三美电机株式会社 | Radar module, conveying equipment and object identification method |
CN106326810A (en) * | 2015-06-25 | 2017-01-11 | 株式会社理光 | Road scene identification method and equipment |
CN106447697A (en) * | 2016-10-09 | 2017-02-22 | 湖南穗富眼电子科技有限公司 | Specific moving target fast tracking method based on moving platform |
CN106546996A (en) * | 2016-10-15 | 2017-03-29 | 北海益生源农贸有限责任公司 | Road Detection and tracking based on four line laser radars |
CN106646474A (en) * | 2016-12-22 | 2017-05-10 | 中国兵器装备集团自动化研究所 | Unstructured road accidented barrier detection apparatus |
CN106650640A (en) * | 2016-12-05 | 2017-05-10 | 浙江大学 | Negative obstacle detection method based on local structure feature of laser radar point cloud |
CN106896353A (en) * | 2017-03-21 | 2017-06-27 | 同济大学 | A kind of unmanned vehicle crossing detection method based on three-dimensional laser radar |
CN107169986A (en) * | 2017-05-23 | 2017-09-15 | 北京理工大学 | A kind of obstacle detection method and system |
CN107167141A (en) * | 2017-06-15 | 2017-09-15 | 同济大学 | Robot autonomous navigation system based on double line laser radars |
CN107194962A (en) * | 2017-04-01 | 2017-09-22 | 深圳市速腾聚创科技有限公司 | Point cloud and plane picture fusion method and device |
CN107209268A (en) * | 2015-01-28 | 2017-09-26 | 夏普株式会社 | Obstacle detector, moving body, obstacle detection method and detection of obstacles program |
CN107215339A (en) * | 2017-06-26 | 2017-09-29 | 地壳机器人科技有限公司 | The lane-change control method and device of automatic driving vehicle |
CN107421507A (en) * | 2017-04-28 | 2017-12-01 | 上海华测导航技术股份有限公司 | Streetscape data acquisition measuring method |
CN107450529A (en) * | 2016-05-06 | 2017-12-08 | 优步技术公司 | improved object detection for automatic driving vehicle |
CN107673283A (en) * | 2017-11-17 | 2018-02-09 | 芜湖金智王机械设备有限公司 | The control system of unmanned fork truck |
CN107703935A (en) * | 2017-09-12 | 2018-02-16 | 安徽胜佳和电子科技有限公司 | Multiple data weighting fusions carry out method, storage device and the mobile terminal of avoidance |
CN107807632A (en) * | 2016-09-08 | 2018-03-16 | 福特全球技术公司 | Condition of road surface is perceived from the sensing data of fusion |
CN107966700A (en) * | 2017-11-20 | 2018-04-27 | 天津大学 | A kind of front obstacle detecting system and method for pilotless automobile |
CN108021132A (en) * | 2017-11-29 | 2018-05-11 | 芜湖星途机器人科技有限公司 | Paths planning method |
CN108037490A (en) * | 2017-11-30 | 2018-05-15 | 中煤航测遥感集团有限公司 | Ground Penetrating Radar Linear Positioning Accuracy Measurement Methods and system |
CN108073893A (en) * | 2016-11-16 | 2018-05-25 | 福特全球技术公司 | Plant is detected using range data |
CN108089185A (en) * | 2017-03-10 | 2018-05-29 | 南京沃杨机械科技有限公司 | The unmanned air navigation aid of agricultural machinery perceived based on farm environment |
CN108107887A (en) * | 2017-03-10 | 2018-06-01 | 南京沃杨机械科技有限公司 | Farm machinery navigation farm environment cognitive method |
CN108169743A (en) * | 2017-03-10 | 2018-06-15 | 南京沃杨机械科技有限公司 | Agricultural machinery is unmanned to use farm environment cognitive method |
CN108256413A (en) * | 2017-11-27 | 2018-07-06 | 科大讯飞股份有限公司 | It can traffic areas detection method and device, storage medium, electronic equipment |
CN108398672A (en) * | 2018-03-06 | 2018-08-14 | 厦门大学 | Road surface based on the 2D laser radar motion scans that lean forward and disorder detection method |
CN108535736A (en) * | 2017-03-05 | 2018-09-14 | 苏州中德睿博智能科技有限公司 | Three dimensional point cloud acquisition methods and acquisition system |
CN108549087A (en) * | 2018-04-16 | 2018-09-18 | 北京瑞途科技有限公司 | A kind of online test method based on laser radar |
CN108573492A (en) * | 2018-04-02 | 2018-09-25 | 电子科技大学 | A kind of real time radar search coverage detection method |
CN108629231A (en) * | 2017-03-16 | 2018-10-09 | 百度在线网络技术(北京)有限公司 | Obstacle detection method, device, equipment and storage medium |
CN108779984A (en) * | 2016-03-16 | 2018-11-09 | 索尼公司 | Signal handling equipment and signal processing method |
CN108847026A (en) * | 2018-05-31 | 2018-11-20 | 安徽四创电子股份有限公司 | A method of it is converted based on matrix coordinate and realizes that data investigation is shown |
CN109074490A (en) * | 2018-07-06 | 2018-12-21 | 深圳前海达闼云端智能科技有限公司 | Path detection method, related device and computer readable storage medium |
CN109100741A (en) * | 2018-06-11 | 2018-12-28 | 长安大学 | A kind of object detection method based on 3D laser radar and image data |
CN109099923A (en) * | 2018-08-20 | 2018-12-28 | 江苏大学 | Road scene based on laser, video camera, GPS and inertial navigation fusion characterizes system and method |
CN109116374A (en) * | 2017-06-23 | 2019-01-01 | 百度在线网络技术(北京)有限公司 | Determine the method, apparatus, equipment and storage medium of obstacle distance |
CN109116867A (en) * | 2018-09-28 | 2019-01-01 | 拓攻(南京)机器人有限公司 | A kind of unmanned plane during flying barrier-avoiding method, device, electronic equipment and storage medium |
CN109238221A (en) * | 2017-07-10 | 2019-01-18 | 上海汽车集团股份有限公司 | A kind of detection method and device of vehicle-periphery |
CN109444916A (en) * | 2018-10-17 | 2019-03-08 | 上海蔚来汽车有限公司 | Unmanned-driving drivable area determination device and method |
CN109633686A (en) * | 2018-11-22 | 2019-04-16 | 浙江中车电车有限公司 | A kind of method and system based on laser radar detecting ground obstacle |
CN109696173A (en) * | 2019-02-20 | 2019-04-30 | 苏州风图智能科技有限公司 | A kind of car body air navigation aid and device |
CN109781129A (en) * | 2019-01-28 | 2019-05-21 | 重庆邮电大学 | A kind of road surface safety detection system and method based on inter-vehicular communication |
CN109816135A (en) * | 2017-11-22 | 2019-05-28 | 上海博泰悦臻电子设备制造有限公司 | Cluster hedging method, system, computer readable storage medium and server-side |
CN109901142A (en) * | 2019-02-28 | 2019-06-18 | 东软睿驰汽车技术(沈阳)有限公司 | A kind of scaling method and device |
CN109946701A (en) * | 2019-03-26 | 2019-06-28 | 新石器慧通(北京)科技有限公司 | A kind of cloud coordinate transformation method and device |
CN110001648A (en) * | 2017-12-18 | 2019-07-12 | 丰田自动车株式会社 | Controller of vehicle |
CN110007293A (en) * | 2019-04-24 | 2019-07-12 | 禾多科技(北京)有限公司 | The online calibration method of the multi-thread beam laser radar in field end |
CN110235026A (en) * | 2017-01-26 | 2019-09-13 | 御眼视觉技术有限公司 | Vehicle navigation based on aligned image and LIDAR information |
CN110243380A (en) * | 2019-06-26 | 2019-09-17 | 华中科技大学 | A kind of map-matching method based on multi-sensor data and angle character identification |
CN110346808A (en) * | 2019-07-15 | 2019-10-18 | 上海点积实业有限公司 | A kind of Processing Method of Point-clouds and system of laser radar |
CN110378904A (en) * | 2018-07-09 | 2019-10-25 | 北京京东尚科信息技术有限公司 | The method and apparatus that point cloud data is split |
CN110488320A (en) * | 2019-08-23 | 2019-11-22 | 南京邮电大学 | A method of vehicle distances are detected using stereoscopic vision |
CN110567135A (en) * | 2019-10-08 | 2019-12-13 | 珠海格力电器股份有限公司 | air conditioner control method and device, storage medium and household equipment |
CN110688440A (en) * | 2019-09-29 | 2020-01-14 | 中山大学 | Map fusion method suitable for less sub-map overlapping parts |
CN110702028A (en) * | 2019-09-04 | 2020-01-17 | 中国农业机械化科学研究院 | Three-dimensional detection positioning method and device for orchard trunk |
CN110807439A (en) * | 2019-11-12 | 2020-02-18 | 银河水滴科技(北京)有限公司 | Method and device for detecting obstacle |
CN110850856A (en) * | 2018-07-25 | 2020-02-28 | 北京欣奕华科技有限公司 | Data processing method and device and robot |
CN110909569A (en) * | 2018-09-17 | 2020-03-24 | 深圳市优必选科技有限公司 | Road condition information identification method and terminal equipment |
CN110909671A (en) * | 2019-11-21 | 2020-03-24 | 大连理工大学 | Grid map obstacle detection method integrating probability and height information |
CN110927762A (en) * | 2018-09-20 | 2020-03-27 | 上海汽车集团股份有限公司 | Positioning correction method, device and system |
CN111256651A (en) * | 2018-12-03 | 2020-06-09 | 北京京东尚科信息技术有限公司 | Surrounding-vehicle distance measurement method and device based on monocular vehicle-mounted camera |
CN111551958A (en) * | 2020-04-28 | 2020-08-18 | 北京踏歌智行科技有限公司 | Mining area unmanned high-precision map manufacturing method |
CN111694903A (en) * | 2019-03-11 | 2020-09-22 | 北京地平线机器人技术研发有限公司 | Map construction method, map construction device, map construction equipment and readable storage medium |
CN111830470A (en) * | 2019-04-16 | 2020-10-27 | 杭州海康威视数字技术股份有限公司 | Combined calibration method and device, and target object detection method, system and device |
CN111879287A (en) * | 2020-07-08 | 2020-11-03 | 河南科技大学 | Forward terrain three-dimensional construction method of low-speed vehicle based on multiple sensors |
CN111928782A (en) * | 2017-07-21 | 2020-11-13 | 株式会社多田野 | Method for estimating upper surface of object to be measured, guidance information display device, and crane |
WO2020253764A1 (en) * | 2019-06-18 | 2020-12-24 | 华为技术有限公司 | Method and apparatus for determining running region information |
CN112179361A (en) * | 2019-07-02 | 2021-01-05 | 华为技术有限公司 | Method, device and storage medium for updating work map of mobile robot |
CN112241677A (en) * | 2019-07-17 | 2021-01-19 | 本田技研工业株式会社 | Information providing device, information providing method, and storage medium |
CN112424629A (en) * | 2018-06-26 | 2021-02-26 | 苏州宝时得电动工具有限公司 | Radar-applied electric device |
CN112684430A (en) * | 2020-12-23 | 2021-04-20 | 哈尔滨工业大学(威海) | Indoor old person walking health detection method and system, storage medium and terminal |
CN112698301A (en) * | 2020-12-11 | 2021-04-23 | 中国科学院微电子研究所 | Laser radar target identification method, device and equipment based on distance intensity correlation |
CN113466822A (en) * | 2017-07-04 | 2021-10-01 | 百度在线网络技术(北京)有限公司 | Method and apparatus for detecting obstacles |
CN113484875A (en) * | 2021-07-30 | 2021-10-08 | 燕山大学 | Laser radar point cloud target hierarchical identification method based on mixed Gaussian ordering |
CN115540892A (en) * | 2022-11-28 | 2022-12-30 | 北京理工大学深圳汽车研究院(电动车辆国家工程实验室深圳研究院) | Obstacle-detouring terminal point selection method and system for fixed line vehicle |
CN116985803A (en) * | 2023-09-26 | 2023-11-03 | 赛奎鹰智能装备(威海)有限责任公司 | Self-adaptive speed control system and method for electric scooter |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101701818A (en) * | 2009-11-05 | 2010-05-05 | 上海交通大学 | Method for detecting long-distance barrier |
- 2010-06-09 CN CN 201010195586 patent/CN101975951B/en not_active Expired - Fee Related
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101701818A (en) * | 2009-11-05 | 2010-05-05 | 上海交通大学 | Method for detecting long-distance barrier |
Non-Patent Citations (5)
Title |
---|
《中国惯性技术学报》 (Journal of Chinese Inertial Technology), Vol. 17, No. 4, 2009-08-31, Deng Zhihong et al., "Analysis of the effect of turntable angular position reference error on laser strapdown INS calibration", pp. 498-505; relevant to claims 1-9 |
《光学技术》 (Optical Technique), Vol. 36, No. 1, 2010-01-31, Deng Zhihong et al., "An improved algorithm for extracting feature matching points between a vision sensor and a laser ranging radar", pp. 43-48; relevant to claims 1-9 |
《兵工学报》 (Acta Armamentarii), Vol. 31, No. 2, 2010-02-28, Zhao Cheng et al., "Multi-resolution image fusion based on particle swarm optimization", pp. 171-176; relevant to claims 1-9 |
《系统仿真学报》 (Journal of System Simulation), Vol. 19, No. 8, 2007-04-30, Su Jiangang et al., "Research on hardware-in-the-loop simulation of the missile-target line of sight for laser-guided weapons", pp. 1717-1720; relevant to claims 1-9 |
《系统工程与电子技术》 (Systems Engineering and Electronics), Vol. 32, No. 5, 2010-05-31, Deng Zhihong et al., "Multi-rate multi-sensor data fusion based on incomplete observation data", pp. 886-890; relevant to claims 1-9 |
Cited By (152)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102338621A (en) * | 2011-04-27 | 2012-02-01 | 天津工业大学 | Method for detecting height of obstacle for indoor visual navigation |
CN102338621B (en) * | 2011-04-27 | 2013-11-20 | 天津工业大学 | Method for detecting height of obstacle for indoor visual navigation |
CN102538766A (en) * | 2011-12-21 | 2012-07-04 | 武汉科技大学 | Obstacle test method for active intelligent vehicle |
CN103366154A (en) * | 2012-04-09 | 2013-10-23 | 通用汽车环球科技运作有限责任公司 | Reconfigurable clear path detection system |
CN103514427A (en) * | 2012-06-15 | 2014-01-15 | 株式会社理光 | A vehicle detection method and a system |
CN102779280A (en) * | 2012-06-19 | 2012-11-14 | 武汉大学 | Traffic information extraction method based on laser sensor |
CN102779280B (en) * | 2012-06-19 | 2014-07-30 | 武汉大学 | Traffic information extraction method based on laser sensor |
CN103908346B (en) * | 2012-12-31 | 2016-04-20 | 复旦大学 | A kind of High Precision Automatic Use of Neuronavigation spatial registration method |
CN103908346A (en) * | 2012-12-31 | 2014-07-09 | 复旦大学 | High-precision automatic neuronavigation space registration method |
CN104899855A (en) * | 2014-03-06 | 2015-09-09 | 株式会社日立制作所 | Three-dimensional obstacle detection method and apparatus |
CN103884281B (en) * | 2014-03-18 | 2015-10-21 | 北京控制工程研究所 | A kind of rover obstacle detection method based on initiating structure light |
CN103884281A (en) * | 2014-03-18 | 2014-06-25 | 北京控制工程研究所 | Patrol device obstacle detection method based on initiative structure light |
CN106164698A (en) * | 2014-03-31 | 2016-11-23 | 三美电机株式会社 | Radar module, conveying equipment and object identification method |
CN105182358A (en) * | 2014-04-25 | 2015-12-23 | 谷歌公司 | Methods and systems for object detection using laser point clouds |
CN105867408A (en) * | 2014-11-07 | 2016-08-17 | 现代摩比斯株式会社 | Apparatus and method for determining available driving space |
CN105867408B (en) * | 2014-11-07 | 2019-06-21 | 现代摩比斯株式会社 | Drivable space determination apparatus and method |
CN104503449A (en) * | 2014-11-24 | 2015-04-08 | 杭州申昊科技股份有限公司 | Positioning method based on environment line features |
CN104574376A (en) * | 2014-12-24 | 2015-04-29 | 重庆大学 | Anti-collision method based on joint verification of binocular vision and laser radar in congested traffic |
CN104574376B (en) * | 2014-12-24 | 2017-08-08 | 重庆大学 | Avoiding collision based on binocular vision and laser radar joint verification in hustle traffic |
CN104482872A (en) * | 2014-12-31 | 2015-04-01 | 中联重科股份有限公司 | Curb boundary detection method and device based on parallel structured light and engineering machine |
CN104482872B (en) * | 2014-12-31 | 2018-02-13 | 中联重科股份有限公司 | Curb boundary detection method and device, engineering machinery based on Structured Light |
CN107209268A (en) * | 2015-01-28 | 2017-09-26 | 夏普株式会社 | Obstacle detector, moving body, obstacle detection method and detection of obstacles program |
CN105893931A (en) * | 2015-02-16 | 2016-08-24 | 松下知识产权经营株式会社 | Object detection apparatus and method |
CN104833364A (en) * | 2015-05-07 | 2015-08-12 | 苏州天鸣信息科技有限公司 | Safe route indicating method for bumpy roads |
CN104833364B (en) * | 2015-05-07 | 2018-05-18 | 深圳市爱民科技有限公司 | Safe route indication method for bumpy roads |
CN104933708A (en) * | 2015-06-07 | 2015-09-23 | 浙江大学 | Barrier detection method in vegetation environment based on multispectral and 3D feature fusion |
CN106326810A (en) * | 2015-06-25 | 2017-01-11 | 株式会社理光 | Road scene identification method and equipment |
CN105205859B (en) * | 2015-09-22 | 2018-05-15 | 王红军 | A kind of method for measuring similarity of the environmental characteristic based on 3 d grid map |
CN105205859A (en) * | 2015-09-22 | 2015-12-30 | 王红军 | Similarity measurement method of environmental characteristics based on three-dimensional raster map |
CN105204510A (en) * | 2015-10-09 | 2015-12-30 | 福州华鹰重工机械有限公司 | Generation method and device for probability map for accurate positioning |
CN105301577A (en) * | 2015-10-10 | 2016-02-03 | 福州华鹰重工机械有限公司 | Laser strength calibration method and laser strength calibration device |
CN105301577B (en) * | 2015-10-10 | 2016-07-06 | 福州华鹰重工机械有限公司 | A kind of laser intensity calibration steps and device |
CN105571599A (en) * | 2015-12-21 | 2016-05-11 | 小米科技有限责任公司 | Road condition information processing method and device |
CN105512641B (en) * | 2015-12-31 | 2019-02-19 | 哈尔滨工业大学 | Method for calibrating dynamic pedestrians and vehicles in video under rain and snow conditions |
CN105512641A (en) * | 2015-12-31 | 2016-04-20 | 哈尔滨工业大学 | Method for using laser radar scanning method to calibrate dynamic pedestrians and vehicles in video in snowing or raining state |
CN105818763B (en) * | 2016-03-09 | 2018-06-22 | 睿驰智能汽车(广州)有限公司 | A kind of method, apparatus and system of determining vehicle periphery object distance |
CN105818763A (en) * | 2016-03-09 | 2016-08-03 | 乐卡汽车智能科技(北京)有限公司 | Method, device and system for confirming distance of object around vehicle |
CN108779984A (en) * | 2016-03-16 | 2018-11-09 | 索尼公司 | Signal processing device and signal processing method |
CN105956527B (en) * | 2016-04-22 | 2019-10-22 | 百度在线网络技术(北京)有限公司 | Detection of obstacles outcome evaluation method and apparatus for automatic driving car |
CN105956527A (en) * | 2016-04-22 | 2016-09-21 | 百度在线网络技术(北京)有限公司 | Method and device for evaluating barrier detection result of driverless vehicle |
CN107450529A (en) * | 2016-05-06 | 2017-12-08 | 优步技术公司 | Improved object detection for autonomous vehicles |
CN107807632B (en) * | 2016-09-08 | 2022-05-31 | 福特全球技术公司 | Perceiving road conditions from fused sensor data |
CN107807632A (en) * | 2016-09-08 | 2018-03-16 | 福特全球技术公司 | Condition of road surface is perceived from the sensing data of fusion |
CN106447697B (en) * | 2016-10-09 | 2018-10-26 | 湖南穗富眼电子科技有限公司 | A kind of specific moving-target fast tracking method based on moving platform |
CN106447697A (en) * | 2016-10-09 | 2017-02-22 | 湖南穗富眼电子科技有限公司 | Specific moving target fast tracking method based on moving platform |
CN106546996A (en) * | 2016-10-15 | 2017-03-29 | 北海益生源农贸有限责任公司 | Road detection and tracking based on a four-line laser radar |
CN108073893B (en) * | 2016-11-16 | 2023-09-12 | 福特全球技术公司 | Detection of plants using distance data |
CN108073893A (en) * | 2016-11-16 | 2018-05-25 | 福特全球技术公司 | Detecting plants using range data |
CN106650640B (en) * | 2016-12-05 | 2020-03-03 | 浙江大学 | Negative obstacle detection method based on laser radar point cloud local structure characteristics |
CN106650640A (en) * | 2016-12-05 | 2017-05-10 | 浙江大学 | Negative obstacle detection method based on local structure feature of laser radar point cloud |
CN106646474A (en) * | 2016-12-22 | 2017-05-10 | 中国兵器装备集团自动化研究所 | Apparatus for detecting obstacles on rugged unstructured roads |
US11953599B2 (en) | 2017-01-26 | 2024-04-09 | Mobileye Vision Technologies Ltd. | Vehicle navigation based on aligned image and LIDAR information |
CN110235026A (en) * | 2017-01-26 | 2019-09-13 | 御眼视觉技术有限公司 | Vehicle navigation based on aligned image and LIDAR information |
CN108535736A (en) * | 2017-03-05 | 2018-09-14 | 苏州中德睿博智能科技有限公司 | Three dimensional point cloud acquisition methods and acquisition system |
CN108169743A (en) * | 2017-03-10 | 2018-06-15 | 南京沃杨机械科技有限公司 | Agricultural machinery is unmanned to use farm environment cognitive method |
CN108107887A (en) * | 2017-03-10 | 2018-06-01 | 南京沃杨机械科技有限公司 | Farm machinery navigation farm environment cognitive method |
CN108089185A (en) * | 2017-03-10 | 2018-05-29 | 南京沃杨机械科技有限公司 | The unmanned air navigation aid of agricultural machinery perceived based on farm environment |
CN108629231A (en) * | 2017-03-16 | 2018-10-09 | 百度在线网络技术(北京)有限公司 | Obstacle detection method, device, equipment and storage medium |
CN108629231B (en) * | 2017-03-16 | 2021-01-22 | 百度在线网络技术(北京)有限公司 | Obstacle detection method, apparatus, device and storage medium |
US10698106B2 (en) | 2017-03-16 | 2020-06-30 | Baidu Online Network Technology (Beijing) Co., Ltd. | Obstacle detecting method and apparatus, device and storage medium |
CN106896353A (en) * | 2017-03-21 | 2017-06-27 | 同济大学 | A kind of unmanned vehicle crossing detection method based on three-dimensional laser radar |
CN107194962A (en) * | 2017-04-01 | 2017-09-22 | 深圳市速腾聚创科技有限公司 | Point cloud and plane picture fusion method and device |
CN107194962B (en) * | 2017-04-01 | 2020-06-05 | 深圳市速腾聚创科技有限公司 | Point cloud and plane image fusion method and device |
CN107421507A (en) * | 2017-04-28 | 2017-12-01 | 上海华测导航技术股份有限公司 | Streetscape data acquisition measuring method |
CN107169986A (en) * | 2017-05-23 | 2017-09-15 | 北京理工大学 | A kind of obstacle detection method and system |
CN107169986B (en) * | 2017-05-23 | 2019-09-17 | 北京理工大学 | A kind of obstacle detection method and system |
CN107167141B (en) * | 2017-06-15 | 2020-08-14 | 同济大学 | Robot autonomous navigation system based on double laser radars |
CN107167141A (en) * | 2017-06-15 | 2017-09-15 | 同济大学 | Robot autonomous navigation system based on double line laser radars |
CN109116374A (en) * | 2017-06-23 | 2019-01-01 | 百度在线网络技术(北京)有限公司 | Determine the method, apparatus, equipment and storage medium of obstacle distance |
CN107215339A (en) * | 2017-06-26 | 2017-09-29 | 地壳机器人科技有限公司 | The lane-change control method and device of automatic driving vehicle |
CN113466822A (en) * | 2017-07-04 | 2021-10-01 | 百度在线网络技术(北京)有限公司 | Method and apparatus for detecting obstacles |
CN109238221A (en) * | 2017-07-10 | 2019-01-18 | 上海汽车集团股份有限公司 | A kind of detection method and device of vehicle-periphery |
CN109238221B (en) * | 2017-07-10 | 2021-02-26 | 上海汽车集团股份有限公司 | Method and device for detecting surrounding environment of vehicle |
CN111928782A (en) * | 2017-07-21 | 2020-11-13 | 株式会社多田野 | Method for estimating upper surface of object to be measured, guidance information display device, and crane |
CN107703935A (en) * | 2017-09-12 | 2018-02-16 | 安徽胜佳和电子科技有限公司 | Multiple data weighting fusions carry out method, storage device and the mobile terminal of avoidance |
CN107673283A (en) * | 2017-11-17 | 2018-02-09 | 芜湖金智王机械设备有限公司 | The control system of unmanned fork truck |
CN107966700A (en) * | 2017-11-20 | 2018-04-27 | 天津大学 | A kind of front obstacle detecting system and method for pilotless automobile |
CN109816135B (en) * | 2017-11-22 | 2023-09-26 | 博泰车联网科技(上海)股份有限公司 | Cluster risk avoiding method, system, computer readable storage medium and service terminal |
CN109816135A (en) * | 2017-11-22 | 2019-05-28 | 上海博泰悦臻电子设备制造有限公司 | Cluster hedging method, system, computer readable storage medium and server-side |
CN108256413A (en) * | 2017-11-27 | 2018-07-06 | 科大讯飞股份有限公司 | Passable area detection method and apparatus, storage medium, and electronic device |
CN108021132A (en) * | 2017-11-29 | 2018-05-11 | 芜湖星途机器人科技有限公司 | Paths planning method |
CN108037490A (en) * | 2017-11-30 | 2018-05-15 | 中煤航测遥感集团有限公司 | Ground Penetrating Radar Linear Positioning Accuracy Measurement Methods and system |
CN108037490B (en) * | 2017-11-30 | 2020-07-24 | 中煤航测遥感集团有限公司 | Method and system for detecting positioning accuracy of ground penetrating radar |
CN110001648A (en) * | 2017-12-18 | 2019-07-12 | 丰田自动车株式会社 | Controller of vehicle |
CN110001648B (en) * | 2017-12-18 | 2022-06-03 | 丰田自动车株式会社 | Vehicle control device |
CN108398672B (en) * | 2018-03-06 | 2020-08-14 | 厦门大学 | Forward-tilting 2D laser radar mobile scanning-based pavement and obstacle detection method |
CN108398672A (en) * | 2018-03-06 | 2018-08-14 | 厦门大学 | Road surface based on the 2D laser radar motion scans that lean forward and disorder detection method |
CN108573492A (en) * | 2018-04-02 | 2018-09-25 | 电子科技大学 | A kind of real time radar search coverage detection method |
CN108573492B (en) * | 2018-04-02 | 2020-04-03 | 电子科技大学 | Real-time radar detection area detection method |
CN108549087A (en) * | 2018-04-16 | 2018-09-18 | 北京瑞途科技有限公司 | A kind of online test method based on laser radar |
CN108549087B (en) * | 2018-04-16 | 2021-10-08 | 北京瑞途科技有限公司 | Online detection method based on laser radar |
CN108847026A (en) * | 2018-05-31 | 2018-11-20 | 安徽四创电子股份有限公司 | A method of it is converted based on matrix coordinate and realizes that data investigation is shown |
CN109100741B (en) * | 2018-06-11 | 2020-11-20 | 长安大学 | Target detection method based on 3D laser radar and image data |
CN109100741A (en) * | 2018-06-11 | 2018-12-28 | 长安大学 | A kind of object detection method based on 3D laser radar and image data |
CN112424629A (en) * | 2018-06-26 | 2021-02-26 | 苏州宝时得电动工具有限公司 | Radar-applied electric device |
CN112424629B (en) * | 2018-06-26 | 2024-04-09 | 苏州宝时得电动工具有限公司 | Electric device using radar |
CN109074490A (en) * | 2018-07-06 | 2018-12-21 | 深圳前海达闼云端智能科技有限公司 | Path detection method, related device and computer readable storage medium |
CN110378904A (en) * | 2018-07-09 | 2019-10-25 | 北京京东尚科信息技术有限公司 | The method and apparatus that point cloud data is split |
CN110378904B (en) * | 2018-07-09 | 2021-10-01 | 北京京东尚科信息技术有限公司 | Method and device for segmenting point cloud data |
CN110850856A (en) * | 2018-07-25 | 2020-02-28 | 北京欣奕华科技有限公司 | Data processing method and device and robot |
CN110850856B (en) * | 2018-07-25 | 2022-11-25 | 北京欣奕华科技有限公司 | Data processing method and device and robot |
CN109099923A (en) * | 2018-08-20 | 2018-12-28 | 江苏大学 | Road scene based on laser, video camera, GPS and inertial navigation fusion characterizes system and method |
CN110909569A (en) * | 2018-09-17 | 2020-03-24 | 深圳市优必选科技有限公司 | Road condition information identification method and terminal equipment |
CN110909569B (en) * | 2018-09-17 | 2022-09-23 | 深圳市优必选科技有限公司 | Road condition information identification method and terminal equipment |
CN110927762A (en) * | 2018-09-20 | 2020-03-27 | 上海汽车集团股份有限公司 | Positioning correction method, device and system |
CN110927762B (en) * | 2018-09-20 | 2023-09-01 | 上海汽车集团股份有限公司 | Positioning correction method, device and system |
CN109116867A (en) * | 2018-09-28 | 2019-01-01 | 拓攻(南京)机器人有限公司 | A kind of unmanned plane during flying barrier-avoiding method, device, electronic equipment and storage medium |
CN109444916B (en) * | 2018-10-17 | 2023-07-04 | 上海蔚来汽车有限公司 | Unmanned driving drivable area determining device and method |
CN109444916A (en) * | 2018-10-17 | 2019-03-08 | 上海蔚来汽车有限公司 | Unmanned-driving drivable area determination device and method |
CN109633686A (en) * | 2018-11-22 | 2019-04-16 | 浙江中车电车有限公司 | A kind of method and system based on laser radar detecting ground obstacle |
CN109633686B (en) * | 2018-11-22 | 2021-01-19 | 浙江中车电车有限公司 | Method and system for detecting ground obstacle based on laser radar |
CN111256651A (en) * | 2018-12-03 | 2020-06-09 | 北京京东尚科信息技术有限公司 | Surrounding-vehicle distance measurement method and device based on monocular vehicle-mounted camera |
CN111256651B (en) * | 2018-12-03 | 2022-06-07 | 北京京东乾石科技有限公司 | Week vehicle distance measuring method and device based on monocular vehicle-mounted camera |
CN109781129A (en) * | 2019-01-28 | 2019-05-21 | 重庆邮电大学 | A kind of road surface safety detection system and method based on inter-vehicular communication |
CN109696173A (en) * | 2019-02-20 | 2019-04-30 | 苏州风图智能科技有限公司 | A kind of car body air navigation aid and device |
CN109901142A (en) * | 2019-02-28 | 2019-06-18 | 东软睿驰汽车技术(沈阳)有限公司 | A kind of scaling method and device |
CN111694903B (en) * | 2019-03-11 | 2023-09-12 | 北京地平线机器人技术研发有限公司 | Map construction method, device, equipment and readable storage medium |
CN111694903A (en) * | 2019-03-11 | 2020-09-22 | 北京地平线机器人技术研发有限公司 | Map construction method, map construction device, map construction equipment and readable storage medium |
CN109946701A (en) * | 2019-03-26 | 2019-06-28 | 新石器慧通(北京)科技有限公司 | A kind of cloud coordinate transformation method and device |
CN109946701B (en) * | 2019-03-26 | 2021-02-02 | 新石器慧通(北京)科技有限公司 | Point cloud coordinate conversion method and device |
CN111830470A (en) * | 2019-04-16 | 2020-10-27 | 杭州海康威视数字技术股份有限公司 | Combined calibration method and device, and target object detection method, system and device |
CN110007293A (en) * | 2019-04-24 | 2019-07-12 | 禾多科技(北京)有限公司 | The online calibration method of the multi-thread beam laser radar in field end |
US11698459B2 (en) | 2019-06-18 | 2023-07-11 | Huawei Technologies Co., Ltd. | Method and apparatus for determining drivable region information |
WO2020253764A1 (en) * | 2019-06-18 | 2020-12-24 | 华为技术有限公司 | Method and apparatus for determining running region information |
US20220108552A1 (en) | 2019-06-18 | 2022-04-07 | Huawei Technologies Co., Ltd. | Method and Apparatus for Determining Drivable Region Information |
CN110243380A (en) * | 2019-06-26 | 2019-09-17 | 华中科技大学 | A kind of map-matching method based on multi-sensor data and angle character identification |
CN110243380B (en) * | 2019-06-26 | 2020-11-24 | 华中科技大学 | Map matching method based on multi-sensor data and angle feature recognition |
WO2021000630A1 (en) * | 2019-07-02 | 2021-01-07 | 华为技术有限公司 | Method and apparatus for updating working map of mobile robot, and storage medium |
CN112179361A (en) * | 2019-07-02 | 2021-01-05 | 华为技术有限公司 | Method, device and storage medium for updating work map of mobile robot |
US11896175B2 (en) | 2019-07-02 | 2024-02-13 | Huawei Technologies Co., Ltd. | Method and apparatus for updating working map of mobile robot, and storage medium |
CN110346808A (en) * | 2019-07-15 | 2019-10-18 | 上海点积实业有限公司 | A kind of Processing Method of Point-clouds and system of laser radar |
CN112241677A (en) * | 2019-07-17 | 2021-01-19 | 本田技研工业株式会社 | Information providing device, information providing method, and storage medium |
CN110488320B (en) * | 2019-08-23 | 2023-02-03 | 南京邮电大学 | Method for detecting vehicle distance by using stereoscopic vision |
CN110488320A (en) * | 2019-08-23 | 2019-11-22 | 南京邮电大学 | A method of vehicle distances are detected using stereoscopic vision |
CN110702028B (en) * | 2019-09-04 | 2020-09-15 | 中国农业机械化科学研究院 | Three-dimensional detection positioning method and device for orchard trunk |
CN110702028A (en) * | 2019-09-04 | 2020-01-17 | 中国农业机械化科学研究院 | Three-dimensional detection positioning method and device for orchard trunk |
CN110688440B (en) * | 2019-09-29 | 2022-03-04 | 中山大学 | Map fusion method suitable for less sub-map overlapping parts |
CN110688440A (en) * | 2019-09-29 | 2020-01-14 | 中山大学 | Map fusion method suitable for less sub-map overlapping parts |
CN110567135A (en) * | 2019-10-08 | 2019-12-13 | 珠海格力电器股份有限公司 | air conditioner control method and device, storage medium and household equipment |
CN110807439A (en) * | 2019-11-12 | 2020-02-18 | 银河水滴科技(北京)有限公司 | Method and device for detecting obstacle |
CN110909671A (en) * | 2019-11-21 | 2020-03-24 | 大连理工大学 | Grid map obstacle detection method integrating probability and height information |
CN111551958B (en) * | 2020-04-28 | 2022-04-01 | 北京踏歌智行科技有限公司 | Mining area unmanned high-precision map manufacturing method |
CN111551958A (en) * | 2020-04-28 | 2020-08-18 | 北京踏歌智行科技有限公司 | Mining area unmanned high-precision map manufacturing method |
CN111879287B (en) * | 2020-07-08 | 2022-07-15 | 河南科技大学 | Forward terrain three-dimensional construction method of low-speed vehicle based on multiple sensors |
CN111879287A (en) * | 2020-07-08 | 2020-11-03 | 河南科技大学 | Forward terrain three-dimensional construction method of low-speed vehicle based on multiple sensors |
CN112698301A (en) * | 2020-12-11 | 2021-04-23 | 中国科学院微电子研究所 | Laser radar target identification method, device and equipment based on distance intensity correlation |
CN112684430A (en) * | 2020-12-23 | 2021-04-20 | 哈尔滨工业大学(威海) | Indoor old person walking health detection method and system, storage medium and terminal |
CN113484875A (en) * | 2021-07-30 | 2021-10-08 | 燕山大学 | Laser radar point cloud target hierarchical identification method based on mixed Gaussian ordering |
CN115540892B (en) * | 2022-11-28 | 2023-03-24 | 北京理工大学深圳汽车研究院(电动车辆国家工程实验室深圳研究院) | Obstacle-detouring terminal point selection method and system for fixed line vehicle |
CN115540892A (en) * | 2022-11-28 | 2022-12-30 | 北京理工大学深圳汽车研究院(电动车辆国家工程实验室深圳研究院) | Obstacle-detouring terminal point selection method and system for fixed line vehicle |
CN116985803A (en) * | 2023-09-26 | 2023-11-03 | 赛奎鹰智能装备(威海)有限责任公司 | Self-adaptive speed control system and method for electric scooter |
CN116985803B (en) * | 2023-09-26 | 2023-12-29 | 赛奎鹰智能装备(威海)有限责任公司 | Self-adaptive speed control system and method for electric scooter |
Also Published As
Publication number | Publication date |
---|---|
CN101975951B (en) | 2013-03-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101975951B (en) | Field environment barrier detection method fusing distance and image information | |
US11885910B2 (en) | Hybrid-view LIDAR-based object detection | |
US10846874B2 (en) | Method and apparatus for processing point cloud data and storage medium | |
US11790668B2 (en) | Automated road edge boundary detection | |
CN106570454B (en) | Pedestrian traffic parameter extracting method based on mobile laser scanning | |
Guan et al. | Automated road information extraction from mobile laser scanning data | |
Zhu et al. | 3d lidar point cloud based intersection recognition for autonomous driving | |
CN105930819A (en) | System for real-time identification of urban traffic lights based on monocular vision and an integrated GPS navigation system |
CN104848867B (en) | Vision-screening-based integrated navigation method for pilotless automobiles |
CN110235026A (en) | The automobile navigation of image and laser radar information based on alignment | |
CN112346463B (en) | Unmanned vehicle path planning method based on speed sampling | |
CN106199558A (en) | Rapid obstacle detection method |
CN104933708A (en) | Barrier detection method in vegetation environment based on multispectral and 3D feature fusion | |
CN110531376A (en) | Obstacle detection and tracking for autonomous vehicles in harbors |
Silver et al. | Experimental analysis of overhead data processing to support long range navigation | |
CN113009453B (en) | Mine road edge detection and mapping method and device | |
CN106446785A (en) | Passable road detection method based on binocular vision | |
CN114325634A (en) | Method for extracting passable area in high-robustness field environment based on laser radar | |
Browning et al. | 3D Mapping for high-fidelity unmanned ground vehicle lidar simulation | |
Mukhija et al. | Outdoor intersection detection for autonomous exploration | |
CN109241855A (en) | Drivable area detection method for intelligent vehicles based on stereoscopic vision |
CN114842660B (en) | Unmanned lane track prediction method and device and electronic equipment | |
Rasmussen et al. | Trail following with omnidirectional vision | |
CN113762195A (en) | Point cloud semantic segmentation and understanding method based on road side RSU | |
Huang et al. | Ground filtering algorithm for mobile LIDAR using order and neighborhood point information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
C17 | Cessation of patent right | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 2013-03-20; Termination date: 2013-06-09 |