CN101008566A - Intelligent vehicular vision device based on ground texture and global localization method thereof - Google Patents

Intelligent vehicular vision device based on ground texture and global localization method thereof

Info

Publication number
CN101008566A
CN101008566A (application CN 200710036550)
Authority
CN
China
Prior art keywords
map
vehicle
intelligent
ground texture
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 200710036550
Other languages
Chinese (zh)
Other versions
CN100541121C (en)
Inventor
杨明
方辉
杨汝清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CNB2007100365509A priority Critical patent/CN100541121C/en
Publication of CN101008566A publication Critical patent/CN101008566A/en
Application granted granted Critical
Publication of CN100541121C publication Critical patent/CN100541121C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Abstract

This invention relates to the field of mobile robot technology, specifically an intelligent vehicular vision device based on ground texture and a global localization method thereof. The method comprises the following steps: a. build a ground texture map of the global environment and store it in the system; b. transmit the image signals captured by the camera to the upper-layer processing unit via a data line and an image acquisition program; c. the upper-layer processing unit processes the images and extracts texture information to form a local ground texture map; d. use the vehicle's odometry information to narrow the search range of the local map within the global map; e. match the current local ground texture map against the global ground texture map to obtain the vehicle's global localization.

Description

Intelligent vehicular vision device and global localization method thereof based on ground texture
Technical field
The present invention relates to a device and method in the field of mobile robot technology, and more specifically to an intelligent vehicular vision device based on ground texture and a global localization method thereof.
Background technology
The self-localization problem of an intelligent vehicle is the task of estimating the vehicle's pose (position and attitude) in its running environment from sensor information. Localization is the first problem encountered in practical applications and a prerequisite for intelligent vehicle navigation. The main localization methods at present are based on magnetic signals or on vision. Localization based on magnetic signals is comparatively mature, but it requires burying sensing equipment (such as electrified wires or magnetic nails) in the running environment; deployment is cumbersome, maintenance is difficult, and the sensing equipment must be re-buried whenever the operating path changes, so the method is severely limited in use. Vision-based methods have many advantages: rich information, low infrastructure requirements, system flexibility, and inexpensive sensors. Studies also show that "more than 90% of a driver's information is obtained through the eyes." Vision-based localization has therefore become a focus of intelligent vehicle research and is widely regarded as the most promising approach.
In many existing practical applications, vision-based localization and navigation rely on road markings, for example lane-line following, commonly called road tracking. Such methods, however, provide only the vehicle's lateral position and lack longitudinal information; that is, they cannot globally localize the intelligent vehicle, which has greatly hindered practical deployment of intelligent vehicles.
A search of the prior art found the Chinese patent "Mobile robot visual navigation method based on image appearance features", application number 03147554.x, publication number CN1569558. That patent adopts a global localization approach: a topological map of an indoor environment is first built, and the robot's global position is then obtained by matching the currently acquired image against the map. However, the operating scene in that patent is an indoor environment (a corridor), and the environmental map is built from salient structured scene features such as doors, pillars, and corners. In relatively complex outdoor environments such feature extraction is difficult, so the method described there does not apply. The patent also does not address the method's robustness to environmental change.
Another major problem with vision-based methods is that when environmental conditions such as illumination or shadows change, or when the view in front of the camera is occluded, the system is likely to fail; that is, such methods are not robust to environmental change.
Summary of the invention
The objective of the present invention is to address the deficiencies of existing vision-based intelligent vehicle localization methods by providing an intelligent vehicular vision device based on ground texture and a global localization method thereof. Besides the inherent advantages of vision methods, the device is structurally simple and easy to maintain, is extremely robust to environmental change, works equally well by day or night, and is free of occlusion problems.
The present invention is achieved by the following technical solutions:
The intelligent vehicular vision device based on ground texture provided by the invention comprises: a car body, light shields, pulsed lamps, a camera, a wide-angle lens, a base plate, and a data line. The light shields are arranged inside the wheels at the bottom of the car body; the pulsed lamps are arranged inside the light shields; the camera is mounted on the base plate, which is fixed to the bottom of the car body, and points downward; the wide-angle lens is mounted on the camera; and the camera is connected to the upper-layer processing unit via the data line. The region of the car-body bottom enclosed by the light shields forms a controlled zone free from external influence and ambient light, and the pulsed lamps provide an active light source at the moment the camera captures a signal.
The light shields, four in total, are arranged at the bottom of the car inside the wheels, forming a controlled zone under the car that is unaffected by ambient illumination. The shield material should be opaque and flexible so as not to impede vehicle motion, for example black rubberized cloth.
The pulsed lamps, four in total, provide the active light source. Pulsed lamps allow a high active light intensity while illuminating only at the instant the camera captures a signal, saving energy. The lamp intensity should be chosen according to the intrinsic brightness of the controlled zone formed by the light shields; the key point is to make the illumination within the zone as uniform as possible.
The camera is used to acquire environmental information. An ordinary CCD- or CMOS-based webcam meets the application requirements and is inexpensive.
The wide-angle lens: because the camera is close to the ground, a wide-angle lens is mounted on the camera so that image signals can be acquired over as large an area of the controlled zone as possible. The lens is selected according to the camera's height above the ground and the extent of the controlled zone formed by the light shields.
The data line transmits the image signal to the upper-layer processing unit, for example a USB data line.
The intelligent vehicular visual global positioning method based on ground texture provided by the invention comprises the following steps:
1. First build the ground texture map of the global environment and store it in the system, ready for localization;
2. While the intelligent vehicle moves in the environment, transmit the image signal captured by the camera (a local ground image) to the upper-layer processing unit via the data line and an image acquisition program;
3. Perform image processing in the upper-layer processing unit, extracting the texture information in the image to form a local ground texture map;
4. To reduce the range searched when matching the local map against the global map, use the vehicle's odometry information: it gives the distance traveled and the angle turned between the previous instant and the current instant, which roughly confines the current vehicle pose to a certain range;
5. Then match the current local ground texture map against the global ground texture map within the range determined in step 4, thereby obtaining the vehicle's localization in the global environment.
The texture information used in the method consists of the edge points of the texture, so a texture image can be expressed as a set of edge points; that is, grids, triangles, circles, and all other textures are abstracted into point sets. Representing texture information as a point set frees the method from the limitations of any concrete texture pattern, so it can be used in any scene with ground texture regardless of the pattern; and texture extraction becomes an image edge-point extraction problem, which mature image processing algorithms can solve.
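As an illustration of this point-set abstraction (a toy sketch, not the patent's implementation: the test image and the 4-neighbour edge test below are assumptions), any binary texture pattern reduces to an N x 2 array of edge pixel coordinates:

```python
import numpy as np

# Toy 8x8 binary texture: a filled 4x4 square. Grids, triangles, circles
# would be handled identically -- only the edge points are kept.
img = np.zeros((8, 8), dtype=np.uint8)
img[2:6, 2:6] = 1

# Call a foreground pixel an edge point if at least one 4-neighbour is
# background (a crude stand-in for a real edge detector).
pad = np.pad(img, 1)
neigh_min = np.minimum.reduce([pad[:-2, 1:-1], pad[2:, 1:-1],
                               pad[1:-1, :-2], pad[1:-1, 2:]])
edges = (img == 1) & (neigh_min == 0)
points = np.argwhere(edges)  # the texture map: an N x 2 set of (row, col)
print(points.shape)          # (12, 2): the 12 boundary pixels of the square
```

Interior pixels are dropped and only the boundary survives, which is exactly the "texture as edge point set" representation the matching step operates on.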
In step 1, if the area in which the intelligent vehicle operates is small, the global ground texture map can be created manually; if the area is large, a device such as a high-precision RTK-GPS can assist in building the global map. Construction comprises texture information extraction based on image processing algorithms (i.e. edge point extraction, similar to the image processing of step 3 below) and storage.
In step 2, the upper-layer processing unit is a vehicle-mounted notebook computer, and the camera is connected to it by a USB data line. The notebook runs Windows XP, and a real-time image acquisition software platform based on DirectShow was developed to acquire the environmental information (local ground images) captured by the camera. The upper-layer processing unit can also be realized with a high-performance embedded system, for example a DSP.
In step 3, the purpose of the image processing in the upper-layer processing unit is to extract the texture information (i.e. edge points) of the currently acquired image. The processing comprises four steps: 1. to reduce the effect of acquisition delay, extract the sub-image formed by the odd-numbered rows of pixels (the odd field image); 2. apply a median filter to reduce image noise; 3. extract edge points with the Canny edge detection operator; 4. record the positions of the edge points to form a local ground texture map.
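The four processing steps can be sketched as follows, in pure NumPy. This is an illustrative sketch only: step 3 substitutes a simple gradient-magnitude threshold for the Canny operator named in the patent, and the frame and threshold are assumptions.

```python
import numpy as np

def local_edge_map(frame, thresh=0.25):
    # 1. Keep only the odd-numbered pixel rows (one interlaced field)
    #    to reduce the effect of acquisition delay.
    field = frame[1::2, :].astype(float)

    # 2. 3x3 median filter to suppress noise.
    pad = np.pad(field, 1, mode="edge")
    h, w = field.shape
    windows = [pad[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    smooth = np.median(np.stack(windows), axis=0)

    # 3. Edge points by gradient-magnitude threshold (a simplified
    #    stand-in for the Canny detector).
    gy, gx = np.gradient(smooth)
    mag = np.hypot(gx, gy)

    # 4. Record edge point positions: the local ground texture map.
    return np.argwhere(mag > thresh)

# Toy frame with a vertical step edge at column 8.
frame = np.zeros((16, 16))
frame[:, 8:] = 1.0
pts = local_edge_map(frame)
print(len(pts))  # 16 edge points, all in columns 7 and 8
```

A production system would replace step 3 with a real Canny implementation (hysteresis thresholding and non-maximum suppression), which yields thinner, cleaner edges.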
In step 4, the rear wheels and front wheels of the intelligent vehicle carry a drive-motor photoelectric encoder and a steering-motor photoelectric encoder respectively, which record the distance traveled and the angle turned by the vehicle within a time interval, i.e. the vehicle's odometry information. From this odometry, the pose at the previous instant, and the vehicle's motion model, the pose at the current instant can be computed. Although the odometry obtained from the encoders is imprecise, it roughly determines the current vehicle pose and thus greatly narrows the search range of the subsequent map matching, guaranteeing the real-time performance of the localization method.
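The odometric prediction can be sketched with a simple unicycle motion model; the patent does not specify its exact motion model, so the model and names below are assumptions:

```python
import math

def predict_pose(x, y, theta, d, dtheta):
    """Dead reckoning: previous pose plus encoder-measured distance d
    and turned angle dtheta (a simple unicycle model, assumed here)."""
    return (x + d * math.cos(theta),
            y + d * math.sin(theta),
            theta + dtheta)

# Vehicle at the origin heading along +x drives 2 m straight; the rough
# predicted pose then seeds the map-matching search window.
x, y, th = predict_pose(0.0, 0.0, 0.0, 2.0, 0.0)
print(x, y, th)  # 2.0 0.0 0.0
```

Encoder drift accumulates over time, which is exactly why the predicted pose is used only to bound the matching search range rather than as the final localization result.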
Step 5 uses the ICP (Iterative Closest Point) algorithm to perform the map matching. Both the global ground texture map stored in the system and the local ground texture map currently acquired and processed are abstracted as edge point sets, so the matching of the local map to the global map can be treated as matching between point sets. Edge points in the local map are mapped into the global map using the camera calibration results and the current vehicle pose obtained from odometry; this set is denoted {W_EdgeP}. Because the odometric pose is imprecise, {W_EdgeP} does not coincide with the corresponding true edge point set in the global map (denoted {True_W_EdgeP}); but if the relation between {W_EdgeP} and {True_W_EdgeP} is found, the imprecise vehicle pose can be corrected. In the present invention the ICP algorithm solves for the transformation of {W_EdgeP} with respect to {True_W_EdgeP}, and this transformation is then used to correct the odometric vehicle pose, yielding a highly accurate global vehicle pose.
The iterative closest point algorithm is as follows. For each point P in the set {W_EdgeP}, a corresponding point CP is sought within a certain range of the global map (the range determined in step 4) under the closest-point criterion in Euclidean distance; the set of corresponding points found by this search is {True_W_EdgeP}. Suppose there are n pairs of corresponding points, written {(P_i, CP_i)}, i = 1, ..., n. A rotation-translation relation (r, t_x, t_y) exists between the two point sets, and the n pairs form the following system of equations:

R_r · P_i + T - CP_i = 0,   i = 1, ..., n

where:

R_r = [ cos(r)  -sin(r) ; sin(r)  cos(r) ],   T = [t_x, t_y]^T
Solving this system of equations yields the analytic solution for (r, t_x, t_y), whose precision is then improved by iteration. The error function of the ICP algorithm is defined as:

E_d(r, T) = (1/n) Σ_{i=1}^{n} (R_r · P_i + T - CP_i)^2

By using this error function to control the iteration termination condition, (r, t_x, t_y) can be obtained to the desired precision.

Suppose the current vehicle pose obtained in step 4 is (x, y, θ). Corrected by the matching result (r, t_x, t_y), the vehicle pose becomes:

x' = x + t_x cos(θ) - t_y sin(θ)
y' = y + t_x sin(θ) + t_y cos(θ)
θ' = θ + r

This is the final global pose of the vehicle.
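The matching and correction steps can be sketched as a minimal 2-D point-to-point ICP. This is an illustrative sketch, not the patent's implementation: the rigid-transform estimate uses the standard SVD (Kabsch) method rather than the patent's analytic solution of the equation system, and the odometry-based search windowing of step 4 is omitted.

```python
import math
import numpy as np

def best_rigid_transform(P, Q):
    """Closed-form 2-D rotation/translation minimising
    sum |R P_i + t - Q_i|^2 (standard SVD/Kabsch method)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp(local_pts, global_pts, iters=20):
    """Iterate closest-point correspondences (Euclidean distance) and
    re-estimate the rigid transform; returns the accumulated (R, t)."""
    P = local_pts.copy()
    R_tot, t_tot = np.eye(2), np.zeros(2)
    for _ in range(iters):
        d2 = ((P[:, None, :] - global_pts[None, :, :]) ** 2).sum(-1)
        CP = global_pts[d2.argmin(axis=1)]   # nearest global point each
        R, t = best_rigid_transform(P, CP)
        P = P @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot

def correct_pose(x, y, theta, r, tx, ty):
    """The pose correction by the matching result (r, tx, ty)."""
    return (x + tx * math.cos(theta) - ty * math.sin(theta),
            y + tx * math.sin(theta) + ty * math.cos(theta),
            theta + r)

# Toy check: the "global" map is the local edge set rotated by 0.02 rad
# and shifted by (0.05, -0.03); ICP should recover that transform.
rng = np.random.default_rng(0)
local = rng.uniform(0, 10, size=(40, 2))
c, s = math.cos(0.02), math.sin(0.02)
global_map = local @ np.array([[c, -s], [s, c]]).T + np.array([0.05, -0.03])
R_est, t_est = icp(local, global_map)
r_est = math.atan2(R_est[1, 0], R_est[0, 0])
```

Because the initial misalignment from odometry is small compared with the edge-point spacing, nearest-neighbour correspondences are mostly correct from the start, which is the regime in which ICP converges quickly.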
To solve the global localization problem, the present invention adopts a technique based on map matching. The camera points downward, so the captured environmental information consists mainly of ground features. The ground in many real environments contains rich texture: many road surfaces carry traffic markings such as zebra crossings, stop lines, speed-limit signs, and turn arrows, and grounds in parks, squares, and campuses often carry decorative patterns of all kinds, such as grids, triangles, and circles. The invention therefore adopts a localization method based on ground texture, obtaining the vehicle's position by matching, and provides an effective solution to global self-localization of intelligent vehicles in many real environments.
The system architecture of the invention is simple and easy to maintain. The camera's sensitive zone is a controlled environment free from external influence, making the system extremely robust and able to run reliably outdoors. The method can be used in any environment with ground texture, and since real ground often contains rich texture, the invention has numerous application scenarios. The localization method combines matching with odometry, achieving fast global localization and ensuring real-time operation. The intelligent vehicle achieves accurate global localization, and the overall system is extremely robust to environmental change.
Description of drawings
Fig. 1: front view of the structure of the present invention
Fig. 2: top view of the structure of the present invention
Fig. 3: sectional view of the structure of the present invention
Fig. 4: flow chart of the localization method of the present invention
Embodiment
An embodiment of the invention is described in detail below with reference to the drawings. The embodiment is implemented on the premise of the technical solution of the present invention and gives a detailed implementation, but the protection scope of the invention is not limited to the following embodiment.
As shown in Figures 1, 2, and 3, this embodiment comprises: car body 1, light shields 3, pulsed lamps 4, camera 5, wide-angle lens 6, base plate 7, and data line 8.
The light shields 3 are arranged inside the wheels 2 at the bottom of car body 1; the pulsed lamps 4 are arranged inside the light shields 3; camera 5 is mounted on base plate 7, which is fixed to the bottom of car body 1; camera 5 points downward; wide-angle lens 6 is mounted on camera 5; and camera 5 is connected to the upper-layer processing unit by data line 8.
The light shields 3 are made of black rubberized cloth; the pulsed lamps 4 have adjustable brightness to improve adaptability; camera 5 is an ordinary CCD webcam connected to the upper-layer processing unit by a USB data line; the focal length of wide-angle lens 6 is 2.5 mm.
As shown in Fig. 4, the intelligent vehicle in this embodiment comes from the EU CyberC3 project (CN/ASIA-IT&C/002-88667). The running environment of the embodiment is a campus square of about 50 × 50 square metres whose ground has a rich square-tile texture.
Step 1: because the environment is small, the global ground texture map is built manually. A fixed point on the square is first chosen as the origin of the global map; environmental information is then collected with the camera at various places on the square, texture information (edge points) is extracted by image processing, and the texture information together with its corresponding world coordinates is stored in the system. The result is a global ground texture map of the environment composed of edge point sets.
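Manual map construction as described can be sketched as accumulating edge points expressed in world coordinates via the known capture pose relative to the chosen origin; the poses, points, and function names below are hypothetical illustrations:

```python
import math

def to_world(points, x, y, theta):
    """Transform camera-frame edge points (px, py) into world
    coordinates using the pose at which the image was captured."""
    c, s = math.cos(theta), math.sin(theta)
    return [(x + c * px - s * py, y + s * px + c * py) for px, py in points]

# Global map: edge points accumulated over several captures, each taken
# at a surveyed pose relative to the fixed origin chosen on the square.
global_map = []
global_map += to_world([(0.5, 0.0)], 0.0, 0.0, 0.0)          # at the origin
global_map += to_world([(0.5, 0.0)], 2.0, 0.0, math.pi / 2)  # 2 m away, turned 90 deg
```

Storing each edge point with its world coordinates in this way yields exactly the edge-point-set global map that the ICP matching of step 5 searches.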
Step 2: the upper-layer processing unit in this embodiment is a vehicle-mounted notebook computer, and the camera is connected to it by a USB data line. The notebook runs Windows XP. A real-time image acquisition program based on DirectShow was developed under Visual C++ to acquire the environmental information (local ground images) captured by the camera; the currently captured image signal is sent to the notebook via the USB data line and this acquisition program.
Step 3: a corresponding image processing program was developed on the notebook, comprising odd-field extraction, image smoothing and denoising, and texture (edge point) extraction. This program extracts the texture information of the currently acquired image, called the local ground texture map.
Step 4: the rear wheels and front wheels of the intelligent vehicle carry a drive-motor photoelectric encoder and a steering-motor photoelectric encoder respectively; from these two sensors the distance traveled and the angle turned by the vehicle within a time interval, i.e. the vehicle's odometry, are calculated. From this odometry, the pose at the previous instant, and the vehicle's motion model, the pose at the current instant is computed. Although encoder error makes this pose imprecise, it greatly reduces the map-matching search range: the subsequent matching need only search a small region around this pose, whose extent is set according to the encoder accuracy.
Step 5: the current local ground texture map is matched against the global ground texture map with the ICP-based matching technique, yielding a high-precision global vehicle pose (for the matching algorithm see step 5 of the Summary).
The whole global localization method was implemented under Visual C++ and runs on the upper-layer notebook. The results of running the method on the intelligent vehicle show that accurate global localization is achieved, the whole system performs reliably, and it is extremely robust to environmental change.

Claims (10)

1. An intelligent vehicular vision device based on ground texture, comprising: a car body, a camera, a wide-angle lens, a base plate, and a data line, characterized in that it also comprises light shields and pulsed lamps; the light shields are arranged inside the wheels at the bottom of the car body; the pulsed lamps are arranged inside the light shields; the camera is mounted on the base plate, which is fixed to the bottom of the car body; the camera points downward; the wide-angle lens is mounted on the camera; and the camera is connected to the upper-layer processing unit by the data line.
2. The intelligent vehicular vision device based on ground texture of claim 1, characterized in that the light shields are four in number and made of black rubberized cloth.
3. The intelligent vehicular vision device based on ground texture of claim 1, characterized in that the pulsed lamps are four in number and provide an active light source.
4. The vision device for intelligent vehicle localization of claim 1, characterized in that the focal length of the wide-angle lens is 2.5 mm.
5. An intelligent vehicular visual global positioning method based on ground texture, characterized by comprising the following steps:
1. first build the ground texture map of the global environment and store it in the system for localization;
2. while the intelligent vehicle moves, transmit the image signal captured by the camera to the upper-layer processing unit via the data line and an image acquisition program;
3. the upper-layer processing unit performs image processing, extracting the texture information in the image to form a local ground texture map;
4. use the vehicle's odometry to reduce the range searched when matching the local map against the global map; the odometry gives the distance traveled and the angle turned between the previous instant and the current instant, confining the current vehicle pose to a certain range;
5. then match the current local ground texture map against the global ground texture map within the range of step 4, thereby obtaining the vehicle's localization in the global environment.
6. The intelligent vehicular visual global positioning method based on ground texture of claim 5, characterized in that in step 1, if the area in which the vehicle operates is small, the global ground texture map is created manually; if the area is large, a device assists in building the global map; construction comprises texture information extraction based on image processing algorithms and storage.
7. The intelligent vehicular visual global positioning method based on ground texture of claim 5, characterized in that in step 2 the upper-layer processing unit is a vehicle-mounted notebook computer; the camera is connected to the notebook by a USB data line; the notebook runs Windows XP and carries a real-time image acquisition software platform based on DirectShow, with which the environmental information captured by the camera, i.e. the local ground image, is acquired; alternatively, the upper-layer processing unit is realized with an embedded system.
8. The intelligent vehicular visual global positioning method based on ground texture of claim 5, characterized in that in step 3 the image processing comprises four steps:
first, extract the sub-image formed by the odd-numbered rows of pixels, i.e. the odd field image;
second, apply a median filter to reduce image noise;
third, extract edge points with the Canny edge detection operator;
fourth, record the edge point positions to form a local ground texture map.
9. The intelligent vehicular visual global positioning method based on ground texture of claim 5, characterized in that in step 4 the rear wheels and front wheels of the vehicle carry a drive-motor photoelectric encoder and a steering-motor photoelectric encoder respectively, which record the distance traveled and the angle turned by the vehicle within a time interval, i.e. the vehicle's odometry; from this odometry, the pose at the previous instant, and the vehicle's motion model, the pose at the current instant is computed.
10. The intelligent vehicular visual global positioning method based on ground texture of claim 5, characterized in that in step 5 an iterative closest point algorithm performs the map matching; both the global ground texture map stored in the system and the local ground texture map currently acquired and processed are abstracted as edge point sets, so matching the local map to the global map is treated as matching between point sets; edge points in the local map are mapped into the global map using the camera calibration results and the current vehicle pose obtained from odometry, the resulting set being denoted {W_EdgeP}; the corresponding true edge point set in the global map is denoted {True_W_EdgeP}; the iterative closest point algorithm solves for the transformation of {W_EdgeP} with respect to {True_W_EdgeP}, and this transformation corrects the odometric vehicle pose, yielding a highly accurate global vehicle pose.
CNB2007100365509A 2007-01-18 2007-01-18 Intelligent vehicular vision device and global localization method thereof based on ground texture Expired - Fee Related CN100541121C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2007100365509A CN100541121C (en) 2007-01-18 2007-01-18 Intelligent vehicular vision device and global localization method thereof based on ground texture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2007100365509A CN100541121C (en) 2007-01-18 2007-01-18 Intelligent vehicular vision device and global localization method thereof based on ground texture

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN2009101382356A Division CN101566471B (en) 2007-01-18 2007-01-18 Intelligent vehicular visual global positioning method based on ground texture

Publications (2)

Publication Number Publication Date
CN101008566A true CN101008566A (en) 2007-08-01
CN100541121C CN100541121C (en) 2009-09-16

Family

ID=38697104

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2007100365509A Expired - Fee Related CN100541121C (en) 2007-01-18 2007-01-18 Intelligent vehicular vision device and global localization method thereof based on ground texture

Country Status (1)

Country Link
CN (1) CN100541121C (en)

Cited By (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101576384B (en) * 2009-06-18 2011-01-05 北京航空航天大学 Real-time indoor mobile robot navigation method based on visual information correction
CN102519481A (en) * 2011-12-29 2012-06-27 中国科学院自动化研究所 Implementation method of binocular vision speedometer
CN102997910B (en) * 2012-10-31 2016-04-13 上海交通大学 Positioning and guiding system and method based on ground road signs
CN102997910A (en) * 2012-10-31 2013-03-27 上海交通大学 Positioning and guiding system and method based on ground road signs
CN103759727A (en) * 2014-01-10 2014-04-30 大连理工大学 Navigation and positioning method based on sky polarized light distribution mode
CN106227212A (en) * 2016-08-12 2016-12-14 天津大学 Precision-controllable indoor navigation system and method based on grid map and dynamic calibration
CN106227212B (en) * 2016-08-12 2019-02-22 天津大学 Precision-controllable indoor navigation system and method based on grid map and dynamic calibration
CN106541945B (en) * 2016-11-15 2019-02-12 广州大学 Automatic parking method for unmanned vehicles based on the ICP algorithm
CN106541945A (en) * 2016-11-15 2017-03-29 广州大学 Automatic parking method for unmanned vehicles based on the ICP algorithm
CN106524952A (en) * 2016-12-22 2017-03-22 桂林施瑞德科技发展有限公司 Monocular camera 3D automobile wheel alignment instrument
CN106949847A (en) * 2017-03-06 2017-07-14 石家庄铁道大学 Contactless acquisition method and device for three-dimensional road surface topography
CN106996777A (en) * 2017-04-21 2017-08-01 合肥井松自动化科技有限公司 Visual navigation method based on ground image texture
CN107144285A (en) * 2017-05-08 2017-09-08 深圳地平线机器人科技有限公司 Pose information determination method and device, and movable equipment
CN107144285B (en) * 2017-05-08 2020-06-26 深圳地平线机器人科技有限公司 Pose information determination method and device, and movable equipment
CN108226938A (en) * 2017-12-08 2018-06-29 华南理工大学 AGV trolley positioning system and method
CN108226938B (en) * 2017-12-08 2021-09-21 华南理工大学 AGV trolley positioning system and method
WO2019166026A1 (en) * 2018-03-01 2019-09-06 AIrobot株式会社 Positioning method and apparatus for transportation device, transportation device, and storage medium
WO2019166027A1 (en) * 2018-03-01 2019-09-06 AIrobot株式会社 Positioning method and apparatus for transportation device, transportation device, and storage medium
CN108519772B (en) * 2018-03-01 2022-06-03 Ai机器人株式会社 Positioning method and device for conveying equipment, conveying equipment and storage medium
CN108519771B (en) * 2018-03-01 2022-03-11 Ai机器人株式会社 Positioning method and device for conveying equipment, conveying equipment and storage medium
CN108519772A (en) * 2018-03-01 2018-09-11 Ai机器人株式会社 Positioning method and device for conveying equipment, conveying equipment and storage medium
CN108519771A (en) * 2018-03-01 2018-09-11 Ai机器人株式会社 Positioning method and device for conveying equipment, conveying equipment and storage medium
CN108762165A (en) * 2018-06-28 2018-11-06 辽宁工业大学 Vehicle condition investigation method
WO2020019117A1 (en) * 2018-07-23 2020-01-30 深圳前海达闼云端智能科技有限公司 Localization method and apparatus, electronic device, and readable storage medium
CN108984781A (en) * 2018-07-25 2018-12-11 北京理工大学 Map edge detection planning method and device for unmanned vehicle area exploration
CN108984781B (en) * 2018-07-25 2020-11-10 北京理工大学 Map edge detection planning method and device for unmanned vehicle area exploration
US11644338B2 (en) 2018-10-19 2023-05-09 Beijing Geekplus Technology Co., Ltd. Ground texture image-based navigation method and device, and storage medium
WO2020078064A1 (en) * 2018-10-19 2020-04-23 北京极智嘉科技有限公司 Ground texture image-based navigation method and device, apparatus, and storage medium
CN110147094A (en) * 2018-11-08 2019-08-20 北京初速度科技有限公司 Vehicle positioning method and vehicle-mounted terminal based on a vehicle-mounted surround-view system
CN111856963A (en) * 2019-04-30 2020-10-30 北京初速度科技有限公司 Parking simulation method and device based on a vehicle-mounted surround-view system
CN111856963B (en) * 2019-04-30 2024-02-20 北京魔门塔科技有限公司 Parking simulation method and device based on a vehicle-mounted surround-view system
CN110103873A (en) * 2019-05-20 2019-08-09 苏州赛格车圣导航科技有限公司 Intelligent driving control system
CN110246182B (en) * 2019-05-29 2021-07-30 达闼机器人有限公司 Vision-based global map positioning method and device, storage medium and equipment
CN110246182A (en) * 2019-05-29 2019-09-17 深圳前海达闼云端智能科技有限公司 Vision-based global map positioning method and device, storage medium and equipment
CN112150907A (en) * 2019-10-23 2020-12-29 王博 Method for constructing a map based on ground texture, and application thereof
CN111288971A (en) * 2020-03-26 2020-06-16 北京三快在线科技有限公司 Visual positioning method and device
CN112150549A (en) * 2020-09-11 2020-12-29 珠海市一微半导体有限公司 Visual positioning method based on ground texture, chip and mobile robot
CN112150549B (en) * 2020-09-11 2023-12-01 珠海一微半导体股份有限公司 Visual positioning method based on ground texture, chip and mobile robot

Also Published As

Publication number Publication date
CN100541121C (en) 2009-09-16

Similar Documents

Publication Publication Date Title
CN100541121C (en) Intelligent vehicular vision device based on ground texture and global localization method thereof
CN101566471B (en) Intelligent vehicular visual global positioning method based on ground texture
US11953340B2 (en) Updating road navigation model using non-semantic road feature points
US11755024B2 (en) Navigation by augmented path prediction
CN106441319B (en) Generation system and method for lane-level navigation maps for autonomous vehicles
CN106651953B (en) Vehicle position and orientation estimation method based on traffic signs
US11573090B2 (en) LIDAR and rem localization
CN109446973B (en) Vehicle positioning method based on deep neural network image recognition
CN103226354A (en) Unmanned road recognition system based on photoelectric navigation
CN110146910A (en) Localization method and device based on fusion of GPS and LiDAR data
CN1323547C (en) Three-line calibration method for extrinsic parameters of a vehicle-mounted camera
CN106289285A (en) Robot joint scene reconnaissance map and construction method
JP2023500993A (en) Vehicle navigation with pedestrians and identification of vehicle free space
DE112021002680T5 (en) SYSTEMS AND METHODS FOR DETECTING AN OPEN DOOR
CN106918312A (en) Computer-vision-based device and method for detecting peeled areas of pavement markings
US20230127230A1 (en) Control loop for navigating a vehicle
CN116202538B (en) Map matching fusion method, device, equipment and storage medium
US20230206608A1 (en) Systems and methods for analyzing and resolving image blockages
US20230136710A1 (en) Systems and methods for harvesting images for vehicle navigation
CN208360051U (en) Seven-channel integrated visual perception system and vehicle
Fang et al. Marker-based mapping and localization for autonomous valet parking
US20230205533A1 (en) Systems and methods for performing neural network operations
Gim et al. Drivable road recognition by multilayered LiDAR and Vision
Fan et al. Road perception and road line detection based on fusion of LiDAR and camera
Moon et al. Vision system of unmanned ground vehicle

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20090916

Termination date: 20120118