CN101067557A - Environment sensing one-eye visual navigating method adapted to self-aid moving vehicle - Google Patents

Environment sensing one-eye visual navigating method adapted to self-aid moving vehicle

Info

Publication number
CN101067557A
CN101067557A CNA2007101229022A CN200710122902A
Authority
CN
China
Prior art keywords
coordinate system
car body
path
cos
sin
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2007101229022A
Other languages
Chinese (zh)
Other versions
CN100494900C (en)
Inventor
毛晓艳
张晋
陈建新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Control Engineering
Original Assignee
Beijing Institute of Control Engineering
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Control Engineering filed Critical Beijing Institute of Control Engineering
Priority to CNB2007101229022A priority Critical patent/CN100494900C/en
Publication of CN101067557A publication Critical patent/CN101067557A/en
Application granted granted Critical
Publication of CN100494900C publication Critical patent/CN100494900C/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention is a monocular vision navigation method with environment sensing for autonomous moving vehicles. First, the camera distortion parameters are measured, the camera is mounted on the autonomous moving vehicle, and the transformation from the image plane coordinate system to the world coordinate system is determined. The current body attitude is then recorded, an image of the scene currently visible to the body is acquired and returned to the graphical operation interface, where the walking mode of the body is chosen and the acquired image is distortion-corrected; a path and the path points on it are selected, the current body position and the selected path points are converted into the world coordinate system according to the coordinate transformation, the result is issued to the body, and the selected path is displayed on the graphical operation interface at the body width. The body then walks in either the autonomous obstacle-avoidance mode or the path-tracking mode until it reaches the last path point in the current visible range, after which the cycle restarts from image acquisition and continues until the final target point is reached. The invention applies monocular measurement to visual navigation in unknown environments.

Description

Monocular vision navigation method with environment sensing applicable to an autonomous moving vehicle
Technical field
The present invention relates to a monocular vision navigation method with environment sensing that is applicable to an autonomous moving vehicle.
Background technology
A key technology of an autonomous moving vehicle is the perception of its surroundings, mainly the recovery of the three-dimensional coordinates of the unknown environment and the recognition of obstacles, so that the navigation control system works properly and the vehicle can walk safely to the intended target point and complete its scientific exploration task. To strengthen the environment sensing ability of autonomous vehicles, improve their autonomous planning ability and thereby realize autonomous navigation, many domestic institutions have studied multi-camera stereo vision extensively. However, the application of computer vision theory to autonomous moving vehicles is not yet mature; the problems of algorithm accuracy and matching have not been fully solved, which creates a bottleneck in engineering applications.
Compared with binocular stereo vision, methods that navigate with a single camera from image appearance features have also been studied widely and have found use in some practical scenes. However, because monocular three-dimensional measurement is theoretically incomplete and lacks one initial condition, the monocular navigation algorithms reported so far are mostly limited to semi-structured or known scenes; no report has been seen of monocular measurement being used to navigate an autonomous rover in an unknown environment.
Chinese patent publication CN1569558, published on January 26, 2005, entitled "Mobile robot visual navigation method based on image appearance features", discloses a method in which navigation is completed from a scene image of the robot's current position obtained by a single camera; this method requires a known scene map built in advance. Chinese patent publication CN1873656, published on December 6, 2006, entitled "Natural target detection method in robot visual navigation", discloses a method that completes navigation by extracting natural targets from the image; this method requires walking offline beforehand to build models of the natural targets before it can be carried out.
Summary of the invention
The technical problem solved by the present invention is to overcome the deficiencies of the prior art by providing a monocular vision navigation method with environment sensing applicable to an autonomous moving vehicle. The method applies monocular measurement in an unknown environment, corrects the tracked path in real time to guarantee the accuracy of reaching the target point, and guarantees the safe travel of the vehicle body through an autonomous obstacle-avoidance walking mode.
The technical solution of the present invention is a monocular vision navigation method with environment sensing applicable to an autonomous moving vehicle, characterized by comprising the following steps:
(1) Measure the intrinsic distortion parameters of the camera; after the measurement, mount the camera on the autonomous moving vehicle and determine the transformation from the image plane coordinate system to the world coordinate system.
(2) Record the current body attitude, acquire an image within the current visible range of the body, and return the acquired image to the graphical operation interface. Select the body walking mode according to the target position and the image, then apply distortion correction to the acquired image so that the corrected image satisfies the pinhole imaging principle. Select a walking path and the path points on it from the distortion-corrected image, convert the current body position and the selected path points into the world coordinate system according to the transformation of step (1), issue the converted result to the body, and display the selected path on the graphical operation interface at the body width.
(3) When the body selects the path-point tracking mode in step (2), it walks along the selected path while correcting the coordinates of the path points in the world coordinate system in real time, until it reaches the last path point in the current visible range.
When the body selects the autonomous obstacle-avoidance mode in step (2), the camera is pitched to a specified angle and photographs the current scene; obstacles in the scene are segmented by region, obstacle regions are recognized, obstacle heights are estimated, and a path around the obstacles is planned; the body then walks along the planned path until it reaches the last path point in the current visible range.
(4) When the body reaches the last path point in the current visible range, the cycle restarts from step (2) and continues until the body reaches the final target position.
Images are also acquired periodically while the body is walking in step (3); after each image is returned to the graphical operation interface it is distortion-corrected, and the corrected or planned path points are displayed on the interface according to the current body position and attitude. If the body fails to avoid an obstacle, an emergency stop command is sent to the body.
Compared with the prior art, the beneficial effects of the present invention are:
(1) Compared with binocular stereo vision, the monocular vision navigation method of the present invention simplifies the computation and has good real-time performance and good stability.
(2) Compared with existing monocular navigation methods that rely on a known scene map, the present invention applies monocular measurement in an unknown environment, is applicable to unstructured unknown scenes, and effectively improves the tracking accuracy.
(3) By adopting a teleoperation method based on quantized graphics and overlaying the planned trajectory and a standard scene grid on the image at the remote end, the present invention is easy to operate and better guarantees safety.
Description of drawings
Fig. 1 is the flow chart of the method of the present invention;
Fig. 2 is the schematic diagram of the image distortion correction of the present invention;
Fig. 3 is the schematic diagram of the monocular projection coordinate transformation of the present invention;
Fig. 4 is the schematic diagram of the real-time correction of the monocular projection coordinates of the present invention;
Fig. 5 is the schematic diagram of the autonomous obstacle height identification of the present invention;
Fig. 6 is the effect diagram of the automatic obstacle avoidance of the present invention;
Fig. 7 is the effect diagram of the quantized graphic display of the present invention.
Embodiment
Fig. 1 shows the flow chart of the method of the present invention; the concrete steps are as follows:
(1) Measure the intrinsic distortion parameters of the camera; after the measurement, mount the camera on the autonomous moving vehicle and determine the transformation from the image plane coordinate system to the world coordinate system.
(2) Record the current body attitude, acquire an image within the current visible range of the body, and return the acquired image to the graphical operation interface. Select the body walking mode according to the target position and the image, then apply distortion correction to the acquired image so that the corrected image satisfies the pinhole imaging principle. Select a walking path and the path points on it from the distortion-corrected image, convert the current body position and the selected path points into the world coordinate system according to the transformation of step (1), issue the converted result to the body, and display the selected path on the graphical operation interface at the body width.
As shown in Fig. 2, o-X_lc-Y_lc-Z_lc denotes the camera coordinate system and u-v-f denotes the image plane coordinate system. P is an actual point and P' is the corresponding point of P in the image plane. The captured image satisfies the pinhole imaging principle, from which the following relation between the pixel coordinates and the camera optical-center coordinates can be derived:
$$\frac{u\,dx}{f} = \frac{y_{lc}}{x_{lc}}, \qquad \frac{v\,dy}{f} = \frac{z_{lc}}{x_{lc}}$$
Setting x_lc = t gives the parametric form of the imaging equation:
$$\begin{bmatrix} x_{lc} \\ y_{lc} \\ z_{lc} \end{bmatrix} =
\begin{bmatrix} 1 \\ \dfrac{u \cdot dx}{f} \\ \dfrac{v \cdot dy}{f} \end{bmatrix} \cdot t
\qquad \text{(Formula 1)}$$
where u and v are the horizontal and vertical pixel coordinates, dx is the physical length of one pixel along the u direction of the CCD target surface, and dy is the physical length of one pixel along the v direction of the CCD target surface.
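As a small numeric illustration of Formula (1), the Python sketch below back-projects a pixel to a point on its viewing ray in the camera frame; the sensor values dx, dy and f are placeholders, not parameters of the invention.

```python
import numpy as np

def back_project(u, v, dx, dy, f, t):
    """Formula (1): point on the ray through pixel (u, v) at parameter t, camera frame."""
    return np.array([1.0, u * dx / f, v * dy / f]) * t

# hypothetical sensor: 10 um pixels, 8 mm focal length; pixel 100 px right, 50 px up
print(back_project(100.0, 50.0, dx=1e-5, dy=1e-5, f=0.008, t=2.0))
```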
This is the basic principle of monocular projection: under the pinhole model, the image coordinate system is projected onto a plane to be computed, giving the three-dimensional coordinates of a pixel on that plane. In practice, however, because of factors such as the optical distortion of the lens, the imaging hardly satisfies the pinhole model. Therefore, before applying the above transformation, the main distortion parameters of the image must be corrected so that the image satisfies the pinhole imaging principle as closely as possible, and only then is the above formula applied. The present invention mainly considers three kinds of lens distortion: radial distortion, decentering distortion and image-plane distortion. The formulas are as follows:
$$\bar{x} = x - x_0, \qquad \bar{y} = y - y_0$$
$$r^2 = \bar{x}^2 + \bar{y}^2$$
$$\Delta x_r = K_1\bar{x}r^2 + K_2\bar{x}r^4 + K_3\bar{x}r^6 + \cdots \qquad \Delta y_r = K_1\bar{y}r^2 + K_2\bar{y}r^4 + K_3\bar{y}r^6 + \cdots$$
$$\Delta x_d = p_1\left(r^2 + 2\bar{x}^2\right) + 2p_2\bar{x}\bar{y} \qquad \Delta y_d = p_2\left(r^2 + 2\bar{y}^2\right) + 2p_1\bar{x}\bar{y}$$
$$\Delta x_m = b_1\bar{x} + b_2\bar{y} \qquad \Delta y_m = 0$$
$$x' = x + \Delta x_r + \Delta x_d + \Delta x_m \qquad y' = y + \Delta y_r + \Delta y_d + \Delta y_m$$
where Δx_r, Δy_r are the radial distortion, Δx_d, Δy_d are the decentering distortion, and Δx_m, Δy_m are the image-plane distortion.
Under accurately known laboratory conditions, the above distortion coefficients can be calibrated precisely by optical means, i.e. with a dedicated photogrammetric optical calibration system. In practical use, the captured image is processed with the above distortion correction, and the resulting new image is then considered to satisfy the pinhole imaging principle.
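The following Python sketch evaluates the three distortion terms above for a measured pixel and returns the corrected coordinates; the coefficient names follow the formulas, while the numeric values in the example call are purely illustrative placeholders rather than calibrated parameters of the invention.

```python
import numpy as np

def correct_distortion(x, y, x0, y0, K, p, b):
    """Apply the radial, decentering and image-plane corrections of the formulas above.

    x, y   : measured pixel coordinates
    x0, y0 : principal point
    K      : (K1, K2, K3) radial coefficients
    p      : (p1, p2) decentering coefficients
    b      : (b1, b2) image-plane coefficients
    Returns the corrected coordinates (x', y').
    """
    xb, yb = x - x0, y - y0                       # centered coordinates
    r2 = xb * xb + yb * yb
    # radial distortion
    dxr = xb * (K[0] * r2 + K[1] * r2**2 + K[2] * r2**3)
    dyr = yb * (K[0] * r2 + K[1] * r2**2 + K[2] * r2**3)
    # decentering distortion
    dxd = p[0] * (r2 + 2 * xb * xb) + 2 * p[1] * xb * yb
    dyd = p[1] * (r2 + 2 * yb * yb) + 2 * p[0] * xb * yb
    # image-plane distortion
    dxm = b[0] * xb + b[1] * yb
    dym = 0.0
    return x + dxr + dxd + dxm, y + dyr + dyd + dym

# hypothetical coefficients, for illustration only
xc, yc = correct_distortion(640.0, 360.0, 512.0, 384.0,
                            K=(1e-8, 0.0, 0.0), p=(1e-7, -1e-7), b=(1e-5, 0.0))
```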
Formula 1 establishes the relation between the image plane coordinate system and the camera coordinate system. For the practical use of navigation control, the coordinates in the camera coordinate system must be converted into the body coordinate system or a ground navigation coordinate system, here called the world coordinate system, so each coordinate system describing the rover pose and the transformations between them must be clearly defined.
As shown in Fig. 3, the following coordinate systems are defined to describe the pose of the rover and the coordinate transformations between them:
O_lc X_lc Y_lc Z_lc: camera coordinate system; the origin is at the camera optical center, X_lc points forward along the camera optical axis, Z_lc is perpendicular to X_lc and points upward, and Y_lc forms a right-handed system with X_lc and Z_lc;
O_c X_c Y_c Z_c: camera support coordinate system; the origin is at the center of the camera support, Y_c points to the left along the support, X_c is perpendicular to Y_c and lies in the X_lc-Y_lc plane, and Z_c forms a right-handed system with X_c and Y_c;
O_l X_l Y_l Z_l: mast coordinate system; the origin is at the junction of the mast and the body, X_l points forward along the top axis of the body, Y_l is perpendicular to X_l and points to the left, and Z_l forms a right-handed system with X_l and Y_l;
O_b X_b Y_b Z_b: body coordinate system; the origin is at the geometric center of the body, X_b is the forward direction of the body, and Y_b points to the left of the body;
O_r X_r Y_r Z_r: world coordinate system; the origin is the intersection of the vertical through the body center of mass with the local horizontal plane, and the axis directions are the same as those of the body coordinate system.
The parameters are defined as follows:
α: camera pitch angle;
β: camera yaw angle;
S_ABC: the translation, along direction C, from the origin of coordinate system A to the origin of coordinate system B;
$$\begin{bmatrix} rl_1 & rl_2 & rl_3 \\ rl_4 & rl_5 & rl_6 \\ rl_7 & rl_8 & rl_9 \end{bmatrix}:$$
the rotation matrix from the camera coordinate system to the body coordinate system when the camera pitch angle and yaw angle are both zero.
Step 1: transformation from the camera coordinate system to the camera support coordinate system
$$\begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} =
\begin{bmatrix} rl_1 & rl_2 & rl_3 \\ rl_4 & rl_5 & rl_6 \\ rl_7 & rl_8 & rl_9 \end{bmatrix}
\begin{bmatrix} x_{lc} \\ y_{lc} \\ z_{lc} \end{bmatrix}
+ \begin{bmatrix} S_{lx} \\ S_{ly} \\ S_{lz} \end{bmatrix}$$
[S_lx S_ly S_lz]: translation from the camera coordinate origin to the camera support coordinate system. When the camera pitch and yaw angles are both zero, the camera support coordinate system is completely parallel to the body coordinate system.
Step 2: transformation from the camera support coordinate system to the mast coordinate system
$$\begin{bmatrix} x_l \\ y_l \\ z_l \end{bmatrix} =
\begin{bmatrix} \cos\alpha\cos\beta & -\sin\beta & \sin\alpha\cos\beta \\ \cos\alpha\sin\beta & \cos\beta & \sin\alpha\sin\beta \\ -\sin\alpha & 0 & \cos\alpha \end{bmatrix}
\begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix}
+ \begin{bmatrix} S_{clx} \\ S_{cly} \\ S_{clz} \end{bmatrix}$$
Step 3: transformation from the mast coordinate system to the body coordinate system
$$\begin{bmatrix} x_b \\ y_b \\ z_b \end{bmatrix} =
\begin{bmatrix} x_l \\ y_l \\ z_l \end{bmatrix}
+ \begin{bmatrix} S_{lbx} \\ S_{lby} \\ S_{lbz} \end{bmatrix}$$
Step 4: transformation from the body coordinate system to the world coordinate system
According to the definition of the world coordinate system, the transformation from the body coordinate system to the world coordinate system at this moment is a simple translation, as follows:
$$\begin{bmatrix} x_r \\ y_r \\ z_r \end{bmatrix} =
\begin{bmatrix} x_b \\ y_b \\ z_b \end{bmatrix}
+ \begin{bmatrix} 0 \\ 0 \\ H \end{bmatrix}$$
Step 5: combining the above four steps and substituting step by step gives the correspondence between the world coordinate system and the camera coordinate system, i.e. the overall formula used when path points are issued:
$$\begin{bmatrix} x_r \\ y_r \\ z_r \end{bmatrix} =
\begin{bmatrix} \cos\alpha\cos\beta & -\sin\beta & \sin\alpha\cos\beta \\ \cos\alpha\sin\beta & \cos\beta & \sin\alpha\sin\beta \\ -\sin\alpha & 0 & \cos\alpha \end{bmatrix}
\left(
\begin{bmatrix} rl_1 & rl_2 & rl_3 \\ rl_4 & rl_5 & rl_6 \\ rl_7 & rl_8 & rl_9 \end{bmatrix}
\begin{bmatrix} x_{lc} \\ y_{lc} \\ z_{lc} \end{bmatrix}
+ \begin{bmatrix} S_{lx} \\ S_{ly} \\ S_{lz} \end{bmatrix}
\right)
+ \begin{bmatrix} S_{lbx}+S_{clx} \\ S_{lby}+S_{cly} \\ H+S_{lbz}+S_{clz} \end{bmatrix}
\qquad \text{(Formula 2)}$$
Step 6: solving Formula (1) and Formula (2) simultaneously gives the transformation from the image plane coordinate system to the world coordinate system. Because resolving a three-dimensional coordinate from a two-dimensional image coordinate lacks one quantity, the assumption z_r = 0 is introduced, so that a definite three-dimensional coordinate can be solved. This assumption does not match the real situation exactly and produces an error, which is compensated further during the motion.
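A minimal Python sketch of this step, assuming hypothetical calibration and mounting values for the quantities named above (the rl matrix, α, β, the S offsets and H): the pixel is back-projected with Formula (1), carried into the world frame with Formula (2), and intersected with the z_r = 0 plane.

```python
import numpy as np

def pixel_ray_camera(u, v, dx, dy, f):
    """Formula (1): direction (t = 1) of the ray through pixel (u, v) in the camera frame."""
    return np.array([1.0, u * dx / f, v * dy / f])

def rot_pitch_yaw(alpha, beta):
    """Rotation used in Formula (2) for camera pitch angle alpha and yaw angle beta."""
    ca, sa, cb, sb = np.cos(alpha), np.sin(alpha), np.cos(beta), np.sin(beta)
    return np.array([[ca * cb, -sb, sa * cb],
                     [ca * sb,  cb, sa * sb],
                     [-sa,     0.0, ca]])

def pixel_to_world_ground(u, v, cam, mount):
    """Intersect the back-projected pixel ray with the z_r = 0 plane (Formulas 1 and 2)."""
    d = pixel_ray_camera(u, v, cam['dx'], cam['dy'], cam['f'])
    R = rot_pitch_yaw(mount['alpha'], mount['beta'])
    d_w = R @ (mount['R_rl'] @ d)                      # ray direction in the world frame
    offs = np.array([mount['S_lbx'] + mount['S_clx'],
                     mount['S_lby'] + mount['S_cly'],
                     mount['H'] + mount['S_lbz'] + mount['S_clz']])
    o_w = R @ mount['S_l'] + offs                      # world position of the camera optical center
    t = -o_w[2] / d_w[2]                               # ray parameter at which z_r = 0
    return o_w + t * d_w

# hypothetical camera and mounting parameters, for illustration only
cam = dict(dx=1e-5, dy=1e-5, f=0.008)
mount = dict(alpha=np.deg2rad(20.0), beta=0.0, R_rl=np.eye(3),
             S_l=np.array([0.0, 0.0, 0.1]), S_clx=0.0, S_cly=0.0, S_clz=0.3,
             S_lbx=0.5, S_lby=0.0, S_lbz=0.8, H=0.5)
print(pixel_to_world_ground(100.0, -50.0, cam, mount))
```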
(3) Fig. 4 is the schematic diagram of the real-time correction of the path points when the body walks in the path-point tracking mode. Because the real terrain does not coincide with the assumed plane, the series of tracking points specified for the rover in the initial image all change during the subsequent motion. To guarantee the tracking accuracy, the coordinates of these tracking points in the world coordinate system are corrected in real time according to the current attitude change of the body, and the distance walked is obtained from the localization information at the same time. When a corrected tracking point appears behind the body, the body has already passed that point and switches to tracking the next corrected target point. The higher the correction frequency, the higher the accuracy of the tracking-point following.
Concrete implementation: Fig. 4 shows five illustrative points, which are the path points selected in step (2); the initial path-point coordinates are issued on the z_r = 0 world coordinate plane. As the body advances, its attitude and position change, so keeping the world coordinate plane of the initial position unchanged would cause a growing error. As shown in the figure, the five rays of the five initial points in the camera coordinate system determine, by their intersections with the assumed plane, the three-dimensional points corresponding to those pixels. If the assumed plane is taken as the terrain tangent plane at the current body position and the tangent segment is short enough, the coordinates of the three-dimensional points approach their true values. Therefore, the z_r = 0 plane equation is corrected according to the change of the body relative to its initial attitude and position, using the body position and attitude measured in real time, so that the coordinates of the path points in the world coordinate system finally approach the real terrain. When the coordinates of an intersection point lie behind the current body position, that path point is considered passed and the next path point is tracked instead, up to the last path point in the current visible range.
The coordinate correction formula for a path point is as follows:
$$\begin{bmatrix} x_r' \\ y_r' \\ z_r' \end{bmatrix} =
\begin{bmatrix} \cos\Delta p & \sin\Delta r\sin\Delta p & \cos\Delta r\sin\Delta p \\ 0 & \cos\Delta r & -\sin\Delta r \\ -\sin\Delta p & \sin\Delta r\cos\Delta p & \cos\Delta r\cos\Delta p \end{bmatrix}
\left(
\begin{bmatrix} \cos\alpha\cos\beta & -\sin\beta & \sin\alpha\cos\beta \\ \cos\alpha\sin\beta & \cos\beta & \sin\alpha\sin\beta \\ -\sin\alpha & 0 & \cos\alpha \end{bmatrix}
\left(
\begin{bmatrix} rl_1 & rl_2 & rl_3 \\ rl_4 & rl_5 & rl_6 \\ rl_7 & rl_8 & rl_9 \end{bmatrix}
\begin{bmatrix} x_{lc} \\ y_{lc} \\ z_{lc} \end{bmatrix}
+ \begin{bmatrix} S_{lx} \\ S_{ly} \\ S_{lz} \end{bmatrix}
\right)
+ \begin{bmatrix} S_{lbx}+S_{clx} \\ S_{lby}+S_{cly} \\ H+S_{lbz}+S_{clz} \end{bmatrix}
- \begin{bmatrix} S_x' \\ S_y' \\ S_z' \end{bmatrix}
\right)
\qquad \text{(Formula 3)}$$
where Δr is the roll angle of the current body relative to the initial position;
Δp is the pitch angle of the current body relative to the initial position;
S_x', S_y', S_z' are the coordinates of the current body position in the world coordinate system of the initial position at which the path points were issued, provided by the navigation and localization information.
Through the real-time correction of the body position and attitude during travel, the initial target-point positions are brought closer to their true values and the tracking accuracy of the next path point is improved.
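A minimal sketch of Formula (3), under the assumption that the world-frame point obtained from Formula (2) at the issuing time is already available: the point is shifted by the measured body translation S' and rotated by the attitude change Δr, Δp. The numbers in the example are made up for illustration.

```python
import numpy as np

def rot_delta(d_r, d_p):
    """Attitude-change rotation of Formula (3) for roll change d_r and pitch change d_p."""
    cr, sr, cp, sp = np.cos(d_r), np.sin(d_r), np.cos(d_p), np.sin(d_p)
    return np.array([[cp, sr * sp, cr * sp],
                     [0.0, cr, -sr],
                     [-sp, sr * cp, cr * cp]])

def correct_path_point(p_world_initial, d_r, d_p, S_prime):
    """Formula (3): re-express an initially issued world-frame path point
    relative to the current body position and attitude."""
    return rot_delta(d_r, d_p) @ (np.asarray(p_world_initial, float) - np.asarray(S_prime, float))

# illustration: the body has advanced 1 m and pitched 2 degrees since the point was issued
p0 = np.array([3.27, 0.37, 0.0])          # point issued at the initial position
p_corr = correct_path_point(p0, d_r=0.0, d_p=np.deg2rad(2.0), S_prime=[1.0, 0.0, 0.0])
print(p_corr)
```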
When the current road conditions lead to selecting the autonomous obstacle-avoidance mode for the body, the concrete steps are as follows:
Step 1: the camera is pitched to a specified angle and photographs the current scene.
Step 2: according to the gray-level characteristics of the obstacles in the scene, the minimum-error segmentation method is used to perform region segmentation of the obstacles.
The minimum-error segmentation method is based on the gray-level information of the image. Let the gray-level histogram of the image be f(i), where i is the gray value, ranging from 0 to 255 in an 8-bit gray image. The total sample number, distribution mean and variance of the two gray-level classes are calculated:
$$p_0(t) = \sum_{i=\min}^{t} f(i), \qquad p_1(t) = \sum_{i=t}^{\max} f(i)$$
$$\mu_0(t) = \frac{\sum_{i=\min}^{t} f(i)\cdot i}{p_0(t)}, \qquad \mu_1(t) = \frac{\sum_{i=t}^{\max} f(i)\cdot i}{p_1(t)}$$
$$\sigma_0(t) = \frac{\sum_{i=\min}^{t} \left[i-\mu_0(t)\right]^2 f(i)}{p_0(t)}, \qquad \sigma_1(t) = \frac{\sum_{i=t}^{\max} \left[i-\mu_1(t)\right]^2 f(i)}{p_1(t)}$$
min and max are the minimum and maximum gray values occurring in the image, and t is the segmentation threshold. The minimum-error function is then defined as:
$$H(t) = 1 + 2\left[p_0(t)\ln\sigma_0(t) + p_1(t)\ln\sigma_1(t)\right] - 2\left[p_0(t)\ln p_0(t) + p_1(t)\ln p_1(t)\right]$$
H(t) is computed for all t in a loop, the value of t that minimizes H(t) is found, and the image is then binarized with that threshold. In the binarized image the extracted isolated points, i.e. connected point sets whose area is below a given threshold, are removed first, and holes are then eliminated with the basic morphological erosion and dilation operations; the structuring element used in the program is
$$B = \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix}.$$
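An illustrative Python implementation of the threshold search as described, following the document's definitions of p, μ, σ and H(t); the use of raw histogram counts and the skipping of degenerate classes are assumptions of this sketch, and the synthetic test image is invented for the example.

```python
import numpy as np

def minimum_error_threshold(gray):
    """Search the gray threshold t that minimizes H(t) as defined in the text.

    gray : 2-D uint8 array. Returns the selected threshold and a binary mask.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    lo, hi = int(gray.min()), int(gray.max())
    best_t, best_h = lo, np.inf
    for t in range(lo + 1, hi):                       # candidate thresholds
        f0, f1 = hist[lo:t + 1], hist[t:hi + 1]       # class 0: min..t, class 1: t..max
        p0, p1 = f0.sum(), f1.sum()
        if p0 == 0 or p1 == 0:
            continue
        i0, i1 = np.arange(lo, t + 1), np.arange(t, hi + 1)
        mu0, mu1 = (f0 * i0).sum() / p0, (f1 * i1).sum() / p1
        s0 = ((i0 - mu0) ** 2 * f0).sum() / p0        # class variances as defined above
        s1 = ((i1 - mu1) ** 2 * f1).sum() / p1
        if s0 <= 0 or s1 <= 0:                        # skip degenerate single-level classes
            continue
        h = 1 + 2 * (p0 * np.log(s0) + p1 * np.log(s1)) - 2 * (p0 * np.log(p0) + p1 * np.log(p1))
        if h < best_h:
            best_t, best_h = t, h
    return best_t, (gray > best_t).astype(np.uint8)

# synthetic test image: darker ground with a brighter "obstacle" patch
rng = np.random.default_rng(0)
img = np.clip(rng.normal(60, 8, (120, 160)), 0, 255).astype(np.uint8)
img[40:80, 60:110] = np.clip(rng.normal(180, 8, (40, 50)), 0, 255).astype(np.uint8)
t, mask = minimum_error_threshold(img)
```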
Step 3: the connected regions obtained after removing the isolated points and holes are identified and judged by area; a region larger than a threshold area is treated as a valid obstacle. The threshold area can be set according to the size of the stones in the test field and the image characteristics, and is taken here as 2000 pixels. The height of each obstacle is roughly estimated, and stones higher than the maximum surmountable obstacle height are treated as hazardous regions that must be avoided.
Fig. 5 illustrates the principle of the obstacle height calculation. The front face of the obstacle is assumed to have the maximum height and to be approximately vertical to the level ground. In Formula (3), Δr and Δp are replaced by the roll and pitch angles of the current body attitude relative to the earth horizontal plane (the terrain is uneven but generally close to planar), the equation of the earth horizontal plane is assumed, and the world coordinate system is obtained after compensating the pitch and roll relative to the horizontal plane. The length intercepted on the horizontal plane between the lower edge and the upper edge of the obstacle is computed and denoted Obstacle; the angle subtended between the pixel coordinates of the lower and upper edges is δ; and from the camera attitude at the time of shooting, the angles α_low and α_high between the horizontal plane and the sight lines of the lower and upper edges are obtained. The obstacle height is then calculated as:
$$\text{Height} = \text{Obstacle} \times \tan\alpha_{high}$$
This assumption has been verified experimentally with vertical objects on level ground, proving that the principle of the method holds and that a fairly high accuracy can be reached. In practical applications, terrain undulation and irregular obstacle shapes introduce measurement inaccuracy, but by allowing a margin on the height, large obstacles can still be avoided safely.
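A worked numeric sketch of the height formula, assuming the camera optical-center height above the horizontal plane and the two sight-line depression angles are known; all values in the example are illustrative.

```python
import math

def obstacle_height(h_cam, alpha_low, alpha_high):
    """Estimate the obstacle height from the lower-edge and upper-edge sight lines.

    h_cam      : camera optical-center height above the horizontal plane [m]
    alpha_low  : depression angle of the sight line to the obstacle lower edge [rad]
    alpha_high : depression angle of the sight line to the obstacle upper edge [rad]
    """
    d_low = h_cam / math.tan(alpha_low)     # ground intercept of the lower-edge ray
    d_high = h_cam / math.tan(alpha_high)   # ground intercept of the upper-edge ray
    obstacle = d_high - d_low               # length intercepted on the horizontal plane
    return obstacle * math.tan(alpha_high)  # Height = Obstacle * tan(alpha_high)

# e.g. camera 1.2 m high, lower edge seen 30 deg below horizontal, upper edge 20 deg below
h = obstacle_height(1.2, math.radians(30.0), math.radians(20.0))
print(f"estimated obstacle height: {h:.2f} m")
```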
Step 4: for each large obstacle that must be avoided, its four extreme coordinates in the image plane coordinate system are extracted and an envelope rectangle is generated from them. In Formula (3), Δr and Δp are replaced by the roll and pitch of the current body attitude relative to the earth horizontal plane, the other transformations being unchanged; the four points then form a hazardous region on the horizontal plane. Path planning is carried out according to the hazardous region and the visible range of the camera, producing a safe path that both avoids the hazardous region and reaches the target point. The planning method may be any of the methods in wide use at present, such as the artificial potential field method, fuzzy logic, neural networks, genetic algorithms or grid methods. After planning, the body walks along the planned path to the last path point in the current visible range.
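As one possible illustration of this step (a simple axis-aligned detour heuristic, not the specific planner of the invention), the sketch below inflates the ground-plane envelope rectangle by half the body width and, if the straight line to the target crosses it, inserts detour waypoints along the nearer lateral side of the rectangle; all names and numbers are hypothetical.

```python
import numpy as np

def plan_around_rectangle(start, goal, rect_min, rect_max, body_width):
    """Plan a simple detour around an axis-aligned hazard rectangle on the ground plane.

    start, goal        : (x, y) points in the world frame
    rect_min, rect_max : lower-left and upper-right corners of the obstacle envelope
    body_width         : used to inflate the rectangle so the body clears it
    Returns a list of waypoints from start to goal.
    """
    half = body_width / 2.0
    lo = np.asarray(rect_min, float) - half
    hi = np.asarray(rect_max, float) + half

    def crosses(p, q, n=100):
        # sample the segment p-q and test whether any sample falls inside the rectangle
        for s in np.linspace(0.0, 1.0, n):
            pt = (1 - s) * np.asarray(p, float) + s * np.asarray(q, float)
            if lo[0] <= pt[0] <= hi[0] and lo[1] <= pt[1] <= hi[1]:
                return True
        return False

    if not crosses(start, goal):
        return [tuple(start), tuple(goal)]
    # detour along the lateral side of the rectangle closer to the start
    side_y = hi[1] + half if abs(hi[1] - start[1]) < abs(lo[1] - start[1]) else lo[1] - half
    via = [(lo[0] - half, side_y), (hi[0] + half, side_y)]
    return [tuple(start)] + via + [tuple(goal)]

# illustrative numbers: obstacle envelope 2-3 m ahead, roughly on the straight line to the goal
path = plan_around_rectangle(start=(0.0, 0.0), goal=(5.0, 0.0),
                             rect_min=(2.0, -0.5), rect_max=(3.0, 0.6), body_width=1.0)
print(path)
```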
(3) Images are also acquired periodically while the body is walking; after each image is returned to the graphical operation interface it is distortion-corrected, and the corrected or planned path points are displayed on the interface according to the current body position and attitude. If the body fails to avoid an obstacle, an emergency stop command is sent to the body.
Fig. 6 shows the effect of the obstacle identification of the present invention. The white box is the obstacle envelope rectangle identified after image processing, and the numeral 1 at the lower-left corner marks the first dangerous obstacle identified in the image. The remaining white mesh lines are equally spaced lines on the horizontal plane at the current body position, 0.5 m apart laterally and 1 m apart longitudinally. Note that the image is returned to the remote end, i.e. displayed on the graphical operation interface; the processing at the actual on-board end does not display anything, but the resulting data are the same.
During remote operation, in order to guarantee the safety of the body more intuitively, a teleoperation concept based on quantized graphics is proposed and implemented: the issued trajectory and a standard scene grid, computed from the current body position and attitude, are displayed on the graphical operation interface of the remote end, together with the range the body can reach in its next movement.
The method is realized mainly by inverting the coordinate transformation: the positions, in the image plane coordinate system, of points given in the world coordinate system through the graphical interface are obtained and displayed on the image. The theoretical formula is as follows:
$$\begin{bmatrix} x_{lc} \\ y_{lc} \\ z_{lc} \end{bmatrix} =
\begin{bmatrix} rl_1 & rl_2 & rl_3 \\ rl_4 & rl_5 & rl_6 \\ rl_7 & rl_8 & rl_9 \end{bmatrix}^{T}
\left(
\begin{bmatrix} \cos\alpha\cos\beta & \cos\alpha\sin\beta & -\sin\alpha \\ -\sin\beta & \cos\beta & 0 \\ \sin\alpha\cos\beta & \sin\alpha\sin\beta & \cos\alpha \end{bmatrix}
\left(
\begin{bmatrix} x_r \\ y_r \\ z_r \end{bmatrix}
- \begin{bmatrix} 0 \\ 0 \\ H \end{bmatrix}
- \begin{bmatrix} S_{lbx} \\ S_{lby} \\ S_{lbz} \end{bmatrix}
- \begin{bmatrix} S_{clx} \\ S_{cly} \\ S_{clz} \end{bmatrix}
\right)
- \begin{bmatrix} S_{lx} \\ S_{ly} \\ S_{lz} \end{bmatrix}
\right)$$
After the image is returned, the equally spaced grid in the world coordinate system, the path issued by the remote operator and the reachable path-point range of the body's next continuous motion can all be computed as image plane coordinates with the inverse projection formula and displayed on the image. The trajectory the vehicle is about to walk thus becomes plainly visible, which facilitates the operator's intuitive judgment and has important practical value.
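The sketch below applies the inverse formula to a world point and then Formula (1) to obtain its pixel coordinates, so that grid lines and issued path points can be overlaid on the image; the camera and mounting values are the same hypothetical placeholders used in the earlier sketch.

```python
import numpy as np

def rot_pitch_yaw(alpha, beta):
    ca, sa, cb, sb = np.cos(alpha), np.sin(alpha), np.cos(beta), np.sin(beta)
    return np.array([[ca * cb, -sb, sa * cb],
                     [ca * sb,  cb, sa * sb],
                     [-sa,     0.0, ca]])

def world_to_pixel(p_world, cam, mount):
    """Inverse of Formula (2) followed by Formula (1): world point -> pixel (u, v)."""
    R = rot_pitch_yaw(mount['alpha'], mount['beta'])
    offs = np.array([mount['S_lbx'] + mount['S_clx'],
                     mount['S_lby'] + mount['S_cly'],
                     mount['H'] + mount['S_lbz'] + mount['S_clz']])
    p_cam = mount['R_rl'].T @ (R.T @ (np.asarray(p_world, float) - offs) - mount['S_l'])
    # Formula (1): u = y_lc * f / (x_lc * dx), v = z_lc * f / (x_lc * dy)
    u = p_cam[1] * cam['f'] / (p_cam[0] * cam['dx'])
    v = p_cam[2] * cam['f'] / (p_cam[0] * cam['dy'])
    return u, v

# project a 1 m-spaced ground grid in front of the body onto the image (parameters as before)
cam = dict(dx=1e-5, dy=1e-5, f=0.008)
mount = dict(alpha=np.deg2rad(20.0), beta=0.0, R_rl=np.eye(3),
             S_l=np.array([0.0, 0.0, 0.1]), S_clx=0.0, S_cly=0.0, S_clz=0.3,
             S_lbx=0.5, S_lby=0.0, S_lbz=0.8, H=0.5)
grid = [world_to_pixel((x, y, 0.0), cam, mount) for x in range(2, 6) for y in (-1, 0, 1)]
```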
Fig. 7 shows the effect in an indoor field experiment: the middle one of the three curves in the figure is the planned path, and the two curves on either side mark the width range of the body. From the figure, the correctness of the issued path and the safety of the walk can be judged very intuitively, and it can also be ensured that the next path point lies within the range the body can reach in continuous motion before the path is issued.
(4) When the body reaches the last path point in the current visible range, the cycle restarts from step (2) and continues until the body reaches the final target position.

Claims (7)

1. A monocular vision navigation method with environment sensing applicable to an autonomous moving vehicle, characterized by comprising the following steps:
(1) Measure the intrinsic distortion parameters of the camera; after the measurement, mount the camera on the autonomous moving vehicle and determine the transformation from the image plane coordinate system to the world coordinate system.
(2) Record the current body attitude, acquire an image within the current visible range of the body, and return the acquired image to the graphical operation interface. Select the body walking mode according to the target position and the image, then apply distortion correction to the acquired image so that the corrected image satisfies the pinhole imaging principle. Select a walking path and the path points on it from the distortion-corrected image, convert the current body position and the selected path points into the world coordinate system according to the transformation of step (1), issue the converted result to the body, and display the selected path on the graphical operation interface at the body width.
(3) When the body selects the path-point tracking mode in step (2), it walks along the selected path while correcting the coordinates of the path points in the world coordinate system in real time, until it reaches the last path point in the current visible range.
When the body selects the autonomous obstacle-avoidance mode in step (2), the camera is pitched to a specified angle and photographs the current scene; obstacles in the scene are segmented by region, obstacle regions are recognized, obstacle heights are estimated, and a path around the obstacles is planned; the body then walks along the planned path until it reaches the last path point in the current visible range.
(4) When the body reaches the last path point in the current visible range, the cycle restarts from step (2) and continues until the body reaches the final target position.
2. The monocular vision navigation method with environment sensing applicable to an autonomous moving vehicle according to claim 1, characterized in that: images are also acquired periodically while the body is walking in step (3); after each image is returned to the graphical operation interface it is distortion-corrected, and the corrected or planned path points are displayed on the interface according to the current body position and attitude; if the body fails to avoid an obstacle, an emergency stop command is sent to the body.
3. The monocular vision navigation method with environment sensing applicable to an autonomous moving vehicle according to claim 1 or 2, characterized in that: the transformation from the image plane coordinate system to the world coordinate system in step (1) is obtained through four transformation steps, from the camera coordinate system to the camera support coordinate system, from the camera support coordinate system to the mast coordinate system, from the mast coordinate system to the body coordinate system, and from the body coordinate system to the world coordinate system, the four steps then being combined into a single parametric equation that gives the transformation from the image plane coordinate system to the world coordinate system.
4. The monocular vision navigation method with environment sensing applicable to an autonomous moving vehicle according to claim 1 or 2, characterized in that: displaying the walking path at the body width on the image in step (2) adopts a teleoperation method based on quantized graphics, in which the transformation from the image plane coordinate system to the world coordinate system is inverted to obtain the positions, in the image plane coordinate system, of points given in the world coordinate system.
5. The monocular vision navigation method with environment sensing applicable to an autonomous moving vehicle according to claim 1 or 2, characterized in that: the coordinates of the path points in the world coordinate system are corrected in real time in step (3) according to the body position and attitude changes measured in real time, the correction formula being:
$$\begin{bmatrix} x_r' \\ y_r' \\ z_r' \end{bmatrix} =
\begin{bmatrix} \cos\Delta p & \sin\Delta r\sin\Delta p & \cos\Delta r\sin\Delta p \\ 0 & \cos\Delta r & -\sin\Delta r \\ -\sin\Delta p & \sin\Delta r\cos\Delta p & \cos\Delta r\cos\Delta p \end{bmatrix}
\left(
\begin{bmatrix} \cos\alpha\cos\beta & -\sin\beta & \sin\alpha\cos\beta \\ \cos\alpha\sin\beta & \cos\beta & \sin\alpha\sin\beta \\ -\sin\alpha & 0 & \cos\alpha \end{bmatrix}
\left(
\begin{bmatrix} rl_1 & rl_2 & rl_3 \\ rl_4 & rl_5 & rl_6 \\ rl_7 & rl_8 & rl_9 \end{bmatrix}
\begin{bmatrix} x_{lc} \\ y_{lc} \\ z_{lc} \end{bmatrix}
+ \begin{bmatrix} S_{lx} \\ S_{ly} \\ S_{lz} \end{bmatrix}
\right)
+ \begin{bmatrix} S_{lbx}+S_{clx} \\ S_{lby}+S_{cly} \\ H+S_{lbz}+S_{clz} \end{bmatrix}
- \begin{bmatrix} S_x' \\ S_y' \\ S_z' \end{bmatrix}
\right)$$
wherein α is the camera pitch angle;
β is the camera yaw angle;
[S_lx S_ly S_lz] is the translation from the camera coordinate origin to the camera support coordinate system;
the subscripts lc, l, b, c denote the camera coordinate system, the mast coordinate system, the body coordinate system and the camera support coordinate system, respectively;
S_ABC is the translation, along direction C, from the origin of coordinate system A to the origin of coordinate system B;
$$\begin{bmatrix} rl_1 & rl_2 & rl_3 \\ rl_4 & rl_5 & rl_6 \\ rl_7 & rl_8 & rl_9 \end{bmatrix}$$ is the rotation matrix from the camera coordinate system to the body coordinate system when the camera pitch and yaw angles are both zero;
Δr is the roll angle of the current body relative to the initial position;
Δp is the pitch angle of the current body relative to the initial position;
S_x', S_y', S_z' are the coordinates of the current body position in the world coordinate system of the initial position at which the path points were issued;
H is the translation from the body coordinate system to the world coordinate system.
6. The monocular vision navigation method with environment sensing applicable to an autonomous moving vehicle according to claim 1 or 2, characterized in that: the region segmentation of the obstacles in step (3) adopts the minimum-error segmentation method: the total sample numbers, means and variances of the gray-level distributions are first computed; the minimum-error function of the gray threshold is then computed from them; the threshold corresponding to the minimum of the minimum-error function over all gray thresholds is found; finally, the image is binarized with the obtained threshold, and the extracted isolated points are removed and holes eliminated with the morphological erosion and dilation operations.
7. The monocular vision navigation method with environment sensing applicable to an autonomous moving vehicle according to claim 1 or 2, characterized in that: in estimating the obstacle height in step (3), the front face of the obstacle is assumed to have the maximum height and to be approximately vertical to the ground, and the obstacle height is then calculated from the triangle relation obtained under this assumption.
CNB2007101229022A 2007-07-03 2007-07-03 Environment sensing one-eye visual navigating method adapted to self-aid moving vehicle Active CN100494900C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2007101229022A CN100494900C (en) 2007-07-03 2007-07-03 Environment sensing one-eye visual navigating method adapted to self-aid moving vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2007101229022A CN100494900C (en) 2007-07-03 2007-07-03 Environment sensing one-eye visual navigating method adapted to self-aid moving vehicle

Publications (2)

Publication Number Publication Date
CN101067557A true CN101067557A (en) 2007-11-07
CN100494900C CN100494900C (en) 2009-06-03

Family

ID=38880179

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2007101229022A Active CN100494900C (en) 2007-07-03 2007-07-03 Environment sensing one-eye visual navigating method adapted to self-aid moving vehicle

Country Status (1)

Country Link
CN (1) CN100494900C (en)


Cited By (81)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101430207B (en) * 2007-11-09 2015-09-30 三星电子株式会社 Structured light is used to produce equipment and the method for three-dimensional map
CN101469991B (en) * 2007-12-26 2011-08-10 南京理工大学 All-day structured road multi-lane line detection method
CN101753827B (en) * 2008-12-04 2012-05-23 财团法人工业技术研究院 Image capturing device angle deciding method and vehicle collision warning system thereof
WO2010066123A1 (en) * 2008-12-10 2010-06-17 东软集团股份有限公司 Method and device for partitioning barrier
US8463039B2 (en) 2008-12-10 2013-06-11 Neusoft Corporation Method and device for partitioning barrier
CN101900560B (en) * 2009-05-27 2012-08-29 宏碁股份有限公司 Electronic device having leading function and object leading method thereof
CN101900562B (en) * 2009-05-29 2013-02-06 通用汽车环球科技运作公司 Clear path detection using divide approach
CN101582164B (en) * 2009-06-24 2012-07-18 北京万得嘉瑞汽车技术有限公司 Image processing method of parking assist system
CN102116612B (en) * 2009-12-31 2012-08-08 北京控制工程研究所 Method for perceiving star catalogue topography by laser stripe information
CN102737476B (en) * 2010-03-30 2013-11-27 新日铁住金系统集成株式会社 Information providing apparatus and information providing method
CN102737476A (en) * 2010-03-30 2012-10-17 新日铁系统集成株式会社 Information processing apparatus, information processing method, and program
CN101973032B (en) * 2010-08-30 2013-06-26 东南大学 Off-line programming system and method of optical visual sensor with linear structure for welding robot
CN101973032A (en) * 2010-08-30 2011-02-16 东南大学 Off-line programming system and method of optical visual sensor with linear structure for welding robot
CN102087530A (en) * 2010-12-07 2011-06-08 东南大学 Vision navigation method of mobile robot based on hand-drawing map and path
CN103328928B (en) * 2011-01-11 2016-08-10 高通股份有限公司 Inertial sensor based on camera for personal navigation equipment is directed at
CN103328928A (en) * 2011-01-11 2013-09-25 高通股份有限公司 Camera-based inertial sensor alignment for personal navigation device
US9160980B2 (en) 2011-01-11 2015-10-13 Qualcomm Incorporated Camera-based inertial sensor alignment for PND
CN102520718B (en) * 2011-12-02 2013-06-05 上海大学 Physical modeling-based robot obstacle avoidance path planning method
CN102520718A (en) * 2011-12-02 2012-06-27 上海大学 Physical modeling-based robot obstacle avoidance path planning method
CN104067191B (en) * 2012-01-17 2016-04-27 村田机械株式会社 Traveling vehicle system
CN104067191A (en) * 2012-01-17 2014-09-24 村田机械株式会社 Traveling vehicle system
CN103292807A (en) * 2012-03-02 2013-09-11 江阴中科矿业安全科技有限公司 Drill carriage posture measurement method based on monocular vision
CN103292807B (en) * 2012-03-02 2016-04-20 江阴中科矿业安全科技有限公司 Based on the drill carriage posture measurement method of monocular vision
CN102662402A (en) * 2012-06-05 2012-09-12 北京理工大学 Intelligent camera tracking car model for racing tracks
CN102662402B (en) * 2012-06-05 2014-04-09 北京理工大学 Intelligent camera tracking car model for racing tracks
CN103868519A (en) * 2012-12-13 2014-06-18 上海工程技术大学 Binocular intelligent vehicle online path planning system
CN103075998B (en) * 2012-12-31 2015-08-26 华中科技大学 A kind of monocular extraterrestrial target range finding angle-measuring method
CN103075998A (en) * 2012-12-31 2013-05-01 华中科技大学 Monocular space target distance-measuring and angle-measuring method
CN104833360A (en) * 2014-02-08 2015-08-12 无锡维森智能传感技术有限公司 Method for transforming two-dimensional coordinates into three-dimensional coordinates
CN104833360B (en) * 2014-02-08 2018-09-18 无锡维森智能传感技术有限公司 A kind of conversion method of two-dimensional coordinate to three-dimensional coordinate
CN104034514A (en) * 2014-06-12 2014-09-10 中国科学院上海技术物理研究所 Large visual field camera nonlinear distortion correction device and method
CN104010274A (en) * 2014-06-12 2014-08-27 国家电网公司 Indoor wireless positioning method based on path matching
CN104010274B (en) * 2014-06-12 2017-09-26 国家电网公司 A kind of indoor wireless positioning method based on route matching
CN104180818B (en) * 2014-08-12 2017-08-11 北京理工大学 A kind of monocular vision mileage calculation device
CN107209268A (en) * 2015-01-28 2017-09-26 夏普株式会社 Obstacle detector, moving body, obstacle detection method and detection of obstacles program
CN104786865B (en) * 2015-04-22 2017-06-16 厦门大学 A kind of method of docking of being charged for electric automobile is provided based on monocular vision
CN104867158A (en) * 2015-06-03 2015-08-26 武汉理工大学 Monocular vision-based indoor water surface ship precise positioning system and method
CN104867158B (en) * 2015-06-03 2017-09-29 武汉理工大学 Indoor above water craft Precise Position System and method based on monocular vision
CN107608314A (en) * 2016-07-12 2018-01-19 波音公司 The method and apparatus automated for working cell and factory level
CN105973240A (en) * 2016-07-15 2016-09-28 哈尔滨工大服务机器人有限公司 Conversion method of navigation module coordinate system and robot coordinate system
CN105973240B (en) * 2016-07-15 2018-11-23 哈尔滨工大服务机器人有限公司 A kind of conversion method of navigation module coordinate system and robot coordinate system
CN112923930B (en) * 2016-07-21 2022-06-28 御眼视觉技术有限公司 Crowd-sourcing and distributing sparse maps and lane measurements for autonomous vehicle navigation
CN112923930A (en) * 2016-07-21 2021-06-08 御眼视觉技术有限公司 Crowd-sourcing and distributing sparse maps and lane measurements for autonomous vehicle navigation
CN106556395A (en) * 2016-11-17 2017-04-05 北京联合大学 A kind of air navigation aid of the single camera vision system based on quaternary number
CN106886217A (en) * 2017-02-24 2017-06-23 安科智慧城市技术(中国)有限公司 Automatic navigation control method and apparatus
CN109238281A (en) * 2017-07-10 2019-01-18 南京原觉信息科技有限公司 Vision guided navigation and barrier-avoiding method based on image spiral line
CN109387192A (en) * 2017-08-02 2019-02-26 湖南格纳微信息科技有限公司 A kind of indoor and outdoor consecutive tracking method and device
CN109387192B (en) * 2017-08-02 2022-08-26 湖南云箭格纳微信息科技有限公司 Indoor and outdoor continuous positioning method and device
CN107838926A (en) * 2017-10-18 2018-03-27 歌尔科技有限公司 One kind picks robot automatically
CN108430032B (en) * 2017-12-08 2020-11-17 深圳新易乘科技有限公司 Method and equipment for realizing position sharing of VR/AR equipment
CN108430032A (en) * 2017-12-08 2018-08-21 深圳新易乘科技有限公司 A kind of method and apparatus for realizing that VR/AR device locations are shared
CN108132054A (en) * 2017-12-20 2018-06-08 百度在线网络技术(北京)有限公司 For generating the method and apparatus of information
CN108426580A (en) * 2018-01-22 2018-08-21 中国地质大学(武汉) Unmanned plane based on image recognition and intelligent vehicle collaborative navigation method
CN108426580B (en) * 2018-01-22 2021-04-30 中国地质大学(武汉) Unmanned aerial vehicle and intelligent vehicle collaborative navigation method based on image recognition
CN108399632B (en) * 2018-03-02 2021-06-15 重庆邮电大学 RGB-D camera depth image restoration method based on color image combination
CN108399632A (en) * 2018-03-02 2018-08-14 重庆邮电大学 A kind of RGB-D camera depth image repair methods of joint coloured image
CN109582032B (en) * 2018-10-11 2021-10-12 天津大学 Multi-rotor unmanned aerial vehicle rapid real-time obstacle avoidance path selection method in complex environment
CN109582032A (en) * 2018-10-11 2019-04-05 天津大学 Quick Real Time Obstacle Avoiding routing resource of the multi-rotor unmanned aerial vehicle under complex environment
CN111190418A (en) * 2018-10-29 2020-05-22 安波福技术有限公司 Adjusting lateral clearance of a vehicle using a multi-dimensional envelope
CN111190418B (en) * 2018-10-29 2023-12-05 动态Ad有限责任公司 Adjusting lateral clearance of a vehicle using a multi-dimensional envelope
US11827241B2 (en) 2018-10-29 2023-11-28 Motional Ad Llc Adjusting lateral clearance for a vehicle using a multi-dimensional envelope
CN109636897B (en) * 2018-11-23 2022-08-23 桂林电子科技大学 Octmap optimization method based on improved RGB-D SLAM
CN109636897A (en) * 2018-11-23 2019-04-16 桂林电子科技大学 A kind of Octomap optimization method based on improvement RGB-D SLAM
CN110118558A (en) * 2019-04-25 2019-08-13 芜湖智久机器人有限公司 A kind of envelope construction method, device and the memory of AGV fork truck
CN111213101A (en) * 2019-04-26 2020-05-29 深圳市大疆创新科技有限公司 Line patrol control method and device for movable platform, movable platform and system
WO2020215296A1 (en) * 2019-04-26 2020-10-29 深圳市大疆创新科技有限公司 Line inspection control method for movable platform, and line inspection control device, movable platform and system
WO2020258721A1 (en) * 2019-06-27 2020-12-30 广东利元亨智能装备股份有限公司 Intelligent navigation method and system for cruiser motorcycle
CN111174765B (en) * 2020-02-24 2021-08-13 北京航天飞行控制中心 Planet vehicle target detection control method and device based on visual guidance
CN111174765A (en) * 2020-02-24 2020-05-19 北京航天飞行控制中心 Planet vehicle target detection control method and device based on visual guidance
CN111563936A (en) * 2020-04-08 2020-08-21 蘑菇车联信息科技有限公司 Camera external parameter automatic calibration method and automobile data recorder
CN112611344A (en) * 2020-11-30 2021-04-06 北京建筑大学 Autonomous mobile flatness detection method, device and storage medium
CN112631134A (en) * 2021-01-05 2021-04-09 华南理工大学 Intelligent trolley obstacle avoidance method based on fuzzy neural network
WO2022148143A1 (en) * 2021-01-08 2022-07-14 华为技术有限公司 Target detection method and device
CN113377097B (en) * 2021-01-25 2023-05-05 杭州易享优智能科技有限公司 Path planning and obstacle avoidance method for blind guiding of visually impaired people
CN112932910A (en) * 2021-01-25 2021-06-11 杭州易享优智能科技有限公司 Wearable intelligent sensing blind guiding system
CN113377097A (en) * 2021-01-25 2021-09-10 杭州易享优智能科技有限公司 Path planning and obstacle avoidance method for blind person guide
CN113110453A (en) * 2021-04-15 2021-07-13 哈尔滨工业大学 Artificial potential field obstacle avoidance method based on graph transformation
CN113837332A (en) * 2021-09-23 2021-12-24 北京京东乾石科技有限公司 Shelf angle adjusting method and device, electronic equipment and computer readable medium
CN114812513A (en) * 2022-05-10 2022-07-29 北京理工大学 Unmanned aerial vehicle positioning system and method based on infrared beacon
CN116242366A (en) * 2023-03-23 2023-06-09 广东省特种设备检测研究院东莞检测院 Spherical tank inner wall climbing robot walking space tracking and navigation method
CN116242366B (en) * 2023-03-23 2023-09-12 广东省特种设备检测研究院东莞检测院 Spherical tank inner wall climbing robot walking space tracking and navigation method

Also Published As

Publication number Publication date
CN100494900C (en) 2009-06-03

Similar Documents

Publication Publication Date Title
CN101067557A (en) Environment sensing one-eye visual navigating method adapted to self-aid moving vehicle
CN108256413B (en) Passable area detection method and device, storage medium and electronic equipment
CN106228110B (en) A kind of barrier and drivable region detection method based on vehicle-mounted binocular camera
CA2950791C (en) Binocular visual navigation system and method based on power robot
JP4676373B2 (en) Peripheral recognition device, peripheral recognition method, and program
JP5157067B2 (en) Automatic travel map creation device and automatic travel device.
WO2018020954A1 (en) Database construction system for machine-learning
CN113110451B (en) Mobile robot obstacle avoidance method based on fusion of depth camera and single-line laser radar
CN103413313A (en) Binocular vision navigation system and method based on power robot
CN108444390A (en) A kind of pilotless automobile obstacle recognition method and device
CN111967360A (en) Target vehicle attitude detection method based on wheels
JP2010282615A (en) Object motion detection system based on combining 3d warping technique and proper object motion (pom) detection
CN111443704B (en) Obstacle positioning method and device for automatic driving system
CN115774444B (en) Path planning optimization method based on sparse navigation map
CN113096190B (en) Omnidirectional mobile robot navigation method based on visual mapping
CN109241855B (en) Intelligent vehicle travelable area detection method based on stereoscopic vision
CN114808649B (en) Highway scribing method based on vision system control
CN113671522B (en) Dynamic environment laser SLAM method based on semantic constraint
CN109919139B (en) Road surface condition rapid detection method based on binocular stereo vision
CN111862146B (en) Target object positioning method and device
CN115797900B (en) Vehicle-road gesture sensing method based on monocular vision
CN113658240B (en) Main obstacle detection method and device and automatic driving system
KR101639264B1 (en) Apparatus and method for controling automatic termial
CN112530270B (en) Mapping method and device based on region allocation
Khan et al. Real-time traffic light detection from videos with inertial sensor fusion

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant