CN102819263A - Multi-camera visual perception system for UGV (Unmanned Ground Vehicle) - Google Patents


Info

Publication number
CN102819263A
Authority
CN
China
Prior art keywords: image, thread, perceptible, perception, vehicle
Prior art date
Legal status
Granted
Application number
CN2012102662568A
Other languages
Chinese (zh)
Other versions
CN102819263B (en)
Inventor
张典国
刘同林
汤晓磊
许朋飞
Current Assignee
No 8357 Research Institute of Third Academy of CASIC
Original Assignee
No 8357 Research Institute of Third Academy of CASIC
Priority date
Filing date
Publication date
Application filed by No 8357 Research Institute of Third Academy of CASIC filed Critical No 8357 Research Institute of Third Academy of CASIC
Priority to CN201210266256.8A priority Critical patent/CN102819263B/en
Publication of CN102819263A publication Critical patent/CN102819263A/en
Application granted granted Critical
Publication of CN102819263B publication Critical patent/CN102819263B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention belongs to the technical field of autonomous driving and visual perception, and particularly relates to a multi-camera visual perception system for a UGV (Unmanned Ground Vehicle). It aims to solve the problems of the image processing systems of existing UGVs and autonomous mobile robots: a single perception algorithm, poor real-time performance, poor expandability, inconvenient debugging, and limited control over the field of view. The system comprises a plurality of vehicle-mounted image processing industrial personal computers, each connected to three cameras, and one or more debugging and monitoring computers that control the vehicle-mounted industrial computers through a wireless router; the control mode may be redundant control or separate control. The system adopts an optimized visual perception method based on multithreading, with threads including a multi-camera video acquisition thread, a lane line perception thread, and a perception thread for obstacles, traffic cones, and signal lamps. Compared with a conventional single-threaded visual perception method, the multithreaded method offers better real-time performance, stability, and expandability.

Description

Multi-camera visual perception system for an unmanned ground vehicle
Technical field
The invention belongs to the technical field of autonomous driving and visual perception, and specifically relates to a multi-camera visual perception system for an unmanned ground vehicle, which can be used for the visual perception and image processing of driverless automobiles and autonomous mobile vehicles.
Background art
In recent years, various small and medium-sized autonomously moving vehicles and robots have begun to enter fields such as industrial production, security, and home services. To study autonomous driving technology for automobiles, several research institutions have also developed miniature unmanned-vehicle simulation and verification systems. Visual perception is a key technology for autonomous vehicles and mobile robots, and has notable advantages over other traditional sensing technologies in environment perception and navigation. Visual perception has also been widely applied in fields such as industrial production, transportation, and medical services, and has good application prospects.
Because of the size and development-cost constraints of small and medium-sized autonomous vehicles and robots, high-end processing systems cannot be used; microcontrollers such as single-chip computers and DSPs are generally adopted instead, and the processing program is single-threaded. This cannot meet the demands of a multi-task image processing system, and extensibility and real-time performance are weak.
In traditional unmanned-vehicle image processing, lane lines are commonly recognized by curve fitting, edge search, and similar methods, which handle partially occluded lane markings poorly. Obstacles, traffic cones, traffic lights, and similar targets are usually detected by template matching, neural networks, and related methods, which consume resources and degrade the real-time performance of the system.
When being debugged, current vision platforms need a cable connection to the on-vehicle equipment, or can transmit only a small amount of image information over a wireless serial link, which makes system debugging quite inconvenient.
In addition, current autonomous vehicles and miniature unmanned-vehicle systems use a single ordinary camera for visual perception and navigation, so the field of view is narrow. Systems that use a wide-angle lens must spend considerable effort correcting lens distortion, and the field of view cannot be adjusted; nor can the resolution of the acquired images be controlled freely to maximize efficiency.
Summary of the invention
The object of the invention is to overcome the problems of the image processing systems of existing miniature unmanned vehicles and autonomous mobile robots, namely a single perception algorithm, poor real-time performance, poor expandability, inconvenient debugging, and limited control over the field of view. A multi-camera visual perception system for an unmanned ground vehicle is proposed. The system is economical and reliable, has good expandability and good real-time performance in perceiving and recognizing a simulated traffic environment, and is applicable to the environment perception and navigation of intelligent simulation vehicles and small autonomous mobile robots.
The technical solution adopted by the invention is as follows:
A multi-camera visual perception system for an unmanned ground vehicle comprises a vehicle-mounted visual perception system, wireless communication equipment, and a remote debugging and monitoring system. The vehicle-mounted visual perception system comprises a plurality of vehicle-mounted image processing industrial personal computers, each connected to three cameras. The remote debugging and monitoring system comprises one or more debugging and monitoring computers that control the vehicle-mounted industrial computers through a wireless router, performing remote monitoring of vehicle operation and online program debugging through remote login. The control mode may be redundant control or separate control.
In the aforesaid system, before operation, the three cameras are fixed on the autonomous vehicle according to the fields of view to be perceived; then the correspondence matrix between the image coordinate system and the world coordinate system is determined from the pixel coordinates of several points in an image and their corresponding actual distances.
The aforesaid system adopts an optimized visual perception method based on multithreading, comprising a plurality of threads: a main thread, video acquisition threads for cameras A, B, and C, a lane line perception thread, a perception thread for obstacles, traffic cones, and signal lamps, a traffic sign perception thread, an integrated control decision thread, and an image and result display thread.
In the aforesaid system, the main thread performs system initialization and creates the child threads. Initialization comprises declaring the image storage space, initializing the serial port, and declaring static variables and parameters. After the child threads are created, the main thread enters a waiting state until the image and result display thread finishes. The display thread continually checks for instructions from the debugging and monitoring computer; on receiving an exit instruction it leaves its loop and ends, triggering the main thread to finish and release memory.
In the aforesaid system, the lane line perception thread performs lane line perception and yaw angle calculation, comprising the following steps:
(4.1) Convert the RGB image to grayscale, then binarize the grayscale image to obtain a binary image;
(4.2) Apply dilation and erosion to reduce noise, then obtain an edge image by Canny edge detection, reducing the computation of the subsequent Hough transform;
(4.3) Apply the Hough line transform to the edge image to obtain the polar-equation parameters of all straight lines in the image;
(4.4) Traverse all lines, transform and project them into the world coordinate system, and select as the lane line the line whose slope lies in a set range and which is nearest to the front of the vehicle.
In the aforesaid system, the binarization method in step (4.1) is as follows: the image is divided into 4 parts from top to bottom by rows; in each part the three largest and three smallest gray values are found, and their mean is taken as the threshold for that part. If the difference between the maximum and minimum values is below an empirical value, the threshold computed for the previous part is used instead; if the contrast of the first part is poor, a fixed initial threshold TH is used first. The threshold to be found is computed as:

th = Σ_{i=0}^{n-1} (high[i] + low[i]) / (2n)

where th is the threshold to be found, and high[i] and low[i] are the n largest and n smallest gray values respectively.

In the aforesaid system, the perception thread for obstacles, traffic cones, and signal lamps is specifically divided into:
(5.1) Convert the RGB image into an HSV image;
(5.2) Recognize each of the three kinds of targets by its specific features. Using the lane lines, obstacles ahead are classified as inside or outside the lane; whether an object is an obstacle on the lane is judged by color, area, and position, and obstacles in the lane are avoided. A traffic cone lies between the two lane lines; each cone shows red, white, and red regions that decrease in size successively from top to bottom, and cones are placed two or more at a time, otherwise the object is not a cone. A traffic light appears in the image as a red circular connected region with a very bright white area in its middle, and traffic lights are located in the upper part of the image.
In the aforesaid system, the traffic sign perception thread is divided into the following steps:
(6.1) Image preprocessing: convert the original RGB image into an image in HSV space, and segment the red, blue, and yellow color regions respectively;
(6.2) Traffic sign classification: classify traffic signs by color and shape into red prohibition signs, blue indication signs, and yellow warning signs;
(6.3) Traffic sign recognition by template matching: shrink the classified image region, convert it to a binary image, match it against the templates of the corresponding class, and select the sign template with the highest similarity.
The beneficial effects of the invention are as follows:
(1) The invention uses an optimized multithread-based visual perception method to perceive the traffic environment; multiple perception threads run in parallel, offering better real-time performance, stability, and expandability than a traditional single-threaded method. The wireless-network communication mode normally reaches a throughput of tens of MB, supports one-to-many and many-to-one debugging, and is well suited to a mobile vision image processing system.
(2) Lane lines are recognized by combining Canny edge detection with the Hough transform, so lane lines that are partially occluded can still be recognized stably. The traditional bimodal binarization method is improved, increasing adaptability to changing light. Two cameras detect the lane lines on both sides simultaneously, and the vehicle deviation angle is computed from two lines or a single line, guaranteeing the redundancy and reliability of lane line detection.
(3) Obstacles, traffic cones, and traffic lights are recognized by the specific features of color, shape, position, and the objects themselves; the algorithm is concise and reliable, with better real-time performance than conventional methods such as template matching.
(4) The invention perceives multiple fields of view through multiple cameras; compared with traditional single-camera perception it has a larger, flexibly adjustable field of view. Different fields of view correspond to different processing threads, increasing the flexibility of the program.
(5) The image processing module uses x86 industrial computers, which are more expandable than the single-chip-computer or DSP image processing modules used by traditional miniature autonomous vehicles; developers can focus their effort on algorithm design and development without concern for the system hardware, shortening development time.
(6) The debugging and monitoring computer logs into the vehicle-mounted image processing industrial computers remotely over the WLAN, so the running state of the system can be monitored and programs debugged and updated online while the vehicle moves. This is more convenient, and transfers a larger volume of data, than the wireless serial links or debug cables used by traditional miniature mobile vehicles.
Description of drawings
Fig. 1 is a schematic diagram of the multi-camera visual perception system for an unmanned ground vehicle provided by the invention;
Fig. 2 is a thread diagram of the optimized multithread-based visual perception method;
Fig. 3 is an original image of a checkerboard;
Fig. 4 is the image of Fig. 3 after the mapping transformation;
Fig. 5 is the lane line detection flowchart;
Fig. 6 is the traffic sign detection flowchart;
In the figures: 1. industrial computer, 2. camera, 3. wireless router, 4. debugging and monitoring computer.
Embodiment
The multi-camera visual perception system for an unmanned ground vehicle provided by the invention is described below with reference to the accompanying drawings and an embodiment:
As shown in Fig. 1, the multi-camera visual perception system comprises a plurality of vehicle-mounted image processing industrial computers 1, each connected to three cameras 2. One or more debugging and monitoring computers 4 control the vehicle-mounted image processing industrial computers 1 through a wireless router 3; the control mode may be redundant control or separate control.
Each vehicle-mounted image processing industrial computer 1 forms a WLAN with the remote debugging and monitoring computers 4, and any computers in the WLAN can communicate freely. The remote monitoring computer sends commands and video-data requests through a command-line interface, and displays real-time image results through a display server-client system. A file-sharing mechanism is established on the vehicle-mounted industrial computers 1, so programs can be modified online from any other computer in the LAN by remote login, and the intermediate results and images of all threads can be sent to the remote computer for display. Within the capacity of the network, one computer can remotely debug and monitor several vehicle-mounted image processing industrial computers 1, and several computers can simultaneously monitor and debug one vehicle-mounted image processing industrial computer 1.
As shown in Fig. 2, the system adopts an optimized visual perception method based on multithreading, comprising a plurality of threads: a main thread, video acquisition threads for cameras A, B, and C, a lane line perception thread, a perception thread for obstacles, traffic cones, and signal lamps, a traffic sign perception thread, an integrated control decision thread, and an image and result display thread. The perception method specifically comprises the following parts.
Before the system runs, the three cameras are first fixed on the autonomous vehicle according to the fields of view to be perceived; then the correspondence matrix between the image coordinate system and the world coordinate system is determined from the pixel coordinates of several points in an image and their corresponding actual distances, by the following formula:
[x, y, 1]^T = H · [X, Y, 1]^T
(x, y) is the coordinate of a selected point on the image plane, with the origin at the upper-left corner of the image and units of pixels; (X, Y) is the world-plane coordinate, with the origin at the midpoint of the vehicle front, the forward and rightward directions the positive directions of the Y and X axes respectively, and units of centimetres. H is the 3 × 3 correspondence matrix to be found. The invention uses a checkerboard image to compute the correspondence matrix: substituting the coordinates of selected checkerboard corner points in the two coordinate systems yields the matrix H. Fig. 3 is the original checkerboard image, and Fig. 4 is the image after the mapping transformation. The error between distances transformed from the image plane into the world coordinate system and measured distances is about 1 cm.
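The correspondence matrix H can be estimated from point correspondences by a direct linear transform. A minimal NumPy sketch (not the patent's implementation; the function name and the use of an SVD null-space solve are illustrative assumptions):

```python
import numpy as np

def fit_homography(img_pts, world_pts):
    """Solve [x, y, 1]^T ~ H [X, Y, 1]^T for the 3x3 matrix H.

    Each correspondence contributes two linear equations in the nine
    entries of H; the solution is the SVD null-space vector.
    """
    A = []
    for (x, y), (X, Y) in zip(img_pts, world_pts):
        A.append([X, Y, 1, 0, 0, 0, -x * X, -x * Y, -x])
        A.append([0, 0, 0, X, Y, 1, -y * X, -y * Y, -y])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so that H[2, 2] = 1
```

At least four correspondences with no three points collinear (for example, checkerboard corners measured in pixels and in centimetres from the vehicle front) determine H; with more points the same solve gives a least-squares fit.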
All threads execute in parallel. The important operating data and image processing results of each thread are stored in a section of shared memory so that other threads can access them. With the multithreaded program structure, different execution periods can be set for the threads according to their real-time requirements, and the intermittent operation of a thread is controlled by letting it sleep, which maximizes CPU utilization.
The main thread mainly performs system initialization and creates the child threads. Initialization mainly comprises declaring the image storage space, initializing the serial port, and declaring static variables and parameters. After the child threads are created, the main thread enters a waiting state until the image display thread finishes. The image display thread continually checks for instructions from the remote monitoring computer; on receiving an exit instruction it leaves its loop and ends, triggering the main thread to finish and release memory, and the whole program terminates.
The period of the three camera video acquisition threads depends mainly on the frame rate of the cameras. To maximize program efficiency and improve real-time performance, the acquired video is downsampled according to the capability of each camera. The collected RGB images are stored in shared memory, and basic preprocessing is applied to them.
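The acquisition threads and shared memory described above can be sketched in Python as follows (a schematic stand-in, not the patent's code: real acquisition threads would grab RGB frames from camera drivers rather than produce strings, and the class and thread names are assumptions):

```python
import threading
import time

class SharedStore:
    """Latest frame per camera behind a lock: a stand-in for the section
    of shared memory that the perception threads read from."""
    def __init__(self):
        self._lock = threading.Lock()
        self._frames = {}

    def put(self, cam, frame):
        with self._lock:
            self._frames[cam] = frame

    def get(self, cam):
        with self._lock:
            return self._frames.get(cam)

def capture_thread(store, cam, n_frames, period):
    """One video acquisition thread: grab a frame, publish it, then sleep
    so the thread runs intermittently at roughly the camera frame rate."""
    for i in range(n_frames):
        store.put(cam, f"{cam}-frame{i}")  # a real thread would store an RGB image
        time.sleep(period)

store = SharedStore()
threads = [threading.Thread(target=capture_thread, args=(store, cam, 5, 0.001))
           for cam in ("camA", "camB", "camC")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Perception threads would read the latest frames through `store.get(...)` on their own periods, matching the text's point that sleeping threads with different cycles maximizes CPU utilization.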
The lane line perception thread performs lane line perception and yaw angle calculation. Lane lines are perceived so that the autonomous vehicle always runs between the two lane lines. Lane line detection combines Canny edge detection with the Hough transform, which both reduces the computation and stably detects lane lines that are partially occluded.
As shown in Fig. 5, lane line detection mainly comprises the following steps:
(4.1) Convert the RGB image to grayscale, then binarize the grayscale image to obtain a binary image;
(4.2) Apply dilation and erosion to reduce noise, then obtain an edge image by Canny edge detection, reducing the computation of the subsequent Hough transform;
(4.3) Apply the Hough line transform to the edge image to obtain the polar-equation parameters of all straight lines in the image;
(4.4) Traverse all lines, transform and project them into the world coordinate system, and select as the lane line the line whose slope lies in a set range and which is nearest to the front of the vehicle.
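Assuming steps (4.1)-(4.3) have produced Hough lines as (rho, theta) pairs in image coordinates, step (4.4) might be sketched as follows in NumPy (the inverse correspondence matrix H_inv, which maps image points into the world frame, and the slope range are illustrative assumptions, not values from the patent):

```python
import numpy as np

def pick_lane_line(lines, H_inv, slope_range=(2.0, np.inf)):
    """Step (4.4): map Hough lines, given as (rho, theta) in image
    coordinates, into the world frame through H_inv, keep lines whose
    world-frame slope magnitude lies in slope_range, and return the kept
    line nearest the vehicle front (the world origin) with its distance."""
    def to_world(x, y):
        v = H_inv @ np.array([x, y, 1.0])
        return v[:2] / v[2]

    best, best_d = None, np.inf
    for rho, theta in lines:
        # two points on the image-plane line rho = x*cos(theta) + y*sin(theta)
        x0, y0 = rho * np.cos(theta), rho * np.sin(theta)
        p1 = to_world(x0 - 100 * np.sin(theta), y0 + 100 * np.cos(theta))
        p2 = to_world(x0 + 100 * np.sin(theta), y0 - 100 * np.cos(theta))
        dx, dy = p2 - p1
        slope = abs(dy / dx) if dx != 0 else np.inf
        if not (slope_range[0] <= slope <= slope_range[1]):
            continue
        # perpendicular distance of the world origin to the line through p1, p2
        d = abs(p1[0] * p2[1] - p2[0] * p1[1]) / np.linalg.norm(p2 - p1)
        if d < best_d:
            best, best_d = (p1, p2), d
    return best, best_d
```

The returned distance is the body-to-lane-line distance that the text later uses to compute the yaw angle.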
Because the lighting varies considerably with distance, a global threshold obtained by the conventional bimodal method does not suit every part of the image. The invention proposes an improved method: the image is divided into 4 parts from top to bottom by rows; in each part the three largest and three smallest gray values are found, and their mean is taken as the threshold for that part. If the difference between the maximum and minimum values is below an empirical value (preferably 60 in this embodiment), the threshold computed for the previous part is used, to prevent noise from appearing in blank regions; if the contrast of the first part is poor, a fixed initial threshold TH is used first. The formula is:

th = Σ_{i=0}^{n-1} (high[i] + low[i]) / (2n)

where th is the threshold to be found, and high[i] and low[i] are the n largest and n smallest gray values respectively.
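A sketch of this banded binarization in NumPy (the function name and default parameters are illustrative; the fallback logic follows the description above):

```python
import numpy as np

def band_thresholds(gray, n_bands=4, n=3, min_gap=60, init_th=128):
    """Banded binarization threshold: th = sum(high[i] + low[i]) / (2n)
    over the n largest and n smallest gray values of each horizontal band.
    A low-contrast band (max-min gap below the empirical value) reuses the
    previous band's threshold; the first band falls back to a fixed TH."""
    ths, prev = [], init_th
    for band in np.array_split(gray, n_bands, axis=0):
        vals = np.sort(band, axis=None).astype(int)
        low, high = vals[:n], vals[-n:]
        if high[-1] - low[0] < min_gap:   # poor contrast in this band
            th = prev                     # previous band's (or initial) threshold
        else:
            th = (high.sum() + low.sum()) / (2 * n)
        ths.append(th)
        prev = th
    return ths
```

Each band of the grayscale image would then be binarized against its own threshold before the dilation/erosion and Canny steps.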
From the side lane line acquired by a side camera, the distance between the vehicle body and that lane line is computed, and the yaw angle is computed from this distance. To enhance system reliability and stability, the lane lines on both sides of the vehicle are detected simultaneously, so that even if the lane line on one side is not detected, the yaw angle of the vehicle can still be computed correctly. This also allows a driving path to be selected flexibly at forks and merges.
The detection of obstacles, traffic cones, and traffic lights specifically comprises the following steps:
(5.1) All three kinds of targets have striking colors that contrast strongly with the lane surface. Because segmenting the three target colors works well in the HSV space, the RGB image is first converted into an HSV image.
(5.2) Each of the three kinds of targets is recognized by its specific features. Using the lane lines, obstacles ahead can be classified as inside or outside the lane; whether an object is an obstacle on the lane is judged by its color, area, and position, and obstacles in the lane are avoided. The features of a traffic cone are: it lies between the two lane lines; each cone shows red, white, and red regions that decrease in size successively from top to bottom; and cones are generally placed two or more at a time, otherwise the object is not considered a cone. These features allow cones to be recognized quickly and accurately. A traffic light appears in the image as a very small red circular connected region with a very bright white area in its middle, and a valid traffic light can appear only in the upper middle of the image. Recognizing obstacles, cones, and traffic lights by these methods achieves very high real-time performance and stability.
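The RGB-to-HSV conversion and red-region segmentation of steps (5.1)-(5.2) can be sketched with the standard library's colorsys (a per-pixel loop, suitable only for illustration; the hue, saturation, and value thresholds are assumptions, not values from the patent):

```python
import colorsys
import numpy as np

def red_mask(rgb):
    """Mark saturated, bright, red-hued pixels of an RGB image: the color
    cue used to find obstacles, cones, and traffic lights in HSV space."""
    h, w, _ = rgb.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            r, g, b = rgb[i, j] / 255.0
            hh, ss, vv = colorsys.rgb_to_hsv(r, g, b)
            # red hue wraps around 0; require decent saturation and brightness
            mask[i, j] = (hh < 0.05 or hh > 0.95) and ss > 0.5 and vv > 0.3
    return mask
```

Connected regions of the mask would then be filtered by area and position, per the feature rules in step (5.2).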
In the traffic sign perception thread: traffic signs indicate the driving path, travel speed, road type, and other characteristics, and the autonomous vehicle corrects its running state according to their indications. The traffic sign recognition process, shown in Fig. 6, is divided into the following steps:
(6.1) Image preprocessing: convert the original RGB image into an image in HSV space, and segment the red, blue, and yellow color regions respectively.
(6.2) Traffic sign classification: classify traffic signs by color and shape into red prohibition signs, blue indication signs, and yellow warning signs.
(6.3) Traffic sign recognition by template matching: further shrink the classified image region, convert it to a binary image, match it against the templates of the corresponding class, and select the sign template with the highest similarity.
To improve real-time performance, this thread uses small matching templates; moreover, thorough classification is performed before the resource-expensive matching algorithm runs, so that matching searches only within a very small class, which increases the efficiency of the algorithm many times over.
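With classification done first, the per-class template match of step (6.3) reduces to scoring a few same-sized binary templates. A minimal sketch (similarity measured as the fraction of agreeing pixels is an assumption; the patent does not specify the metric):

```python
import numpy as np

def match_sign(binary_region, templates):
    """Step (6.3): score the binarized candidate region against each
    template of the pre-selected class and return the best match.

    binary_region and every template are 0/1 arrays of the same shape
    (the region is shrunk/resized to the template size beforehand)."""
    scores = {name: float(np.mean(binary_region == t))
              for name, t in templates.items()}
    best = max(scores, key=scores.get)
    return best, scores
```

Because only the templates of one color/shape class are scored, the search space stays tiny, which is the efficiency point made in the paragraph above.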
The integrated control decision thread mainly performs comprehensive analysis of the visual perception results and task priority queuing, switches the running state of the control system, and sends some results to other hardware control units on the vehicle through the serial port.
The image and result display thread sends the intermediate and final results of each stage of image processing to the remote PC for display, and at the same time saves some parameters (such as the templates extracted by the traffic sign thread) to files for observation and use.

Claims (8)

1. A multi-camera visual perception system for an unmanned ground vehicle, comprising a vehicle-mounted visual perception system, wireless communication equipment, and a remote debugging and monitoring system; wherein the vehicle-mounted visual perception system comprises a plurality of vehicle-mounted image processing industrial computers (1), each connected to three cameras (2); the remote debugging and monitoring system comprises one or more debugging and monitoring computers (4) that control the vehicle-mounted image processing industrial computers (1) through a wireless router (3), performing remote monitoring of vehicle operation and online program debugging through remote login; and the control mode may be redundant control or separate control.
2. The multi-camera visual perception system for an unmanned ground vehicle according to claim 1, characterized in that: before operation of the system, the three cameras are fixed on the autonomous vehicle according to the fields of view to be perceived, and the correspondence matrix between the image coordinate system and the world coordinate system is then determined from the pixel coordinates of several points in an image and their corresponding actual distances.
3. The multi-camera visual perception system for an unmanned ground vehicle according to claim 1, characterized in that: the system adopts an optimized visual perception method based on multithreading, comprising a plurality of threads: a main thread, video acquisition threads for cameras A, B, and C, a lane line perception thread, a perception thread for obstacles, traffic cones, and signal lamps, a traffic sign perception thread, an integrated control decision thread, and an image and result display thread.
4. The multi-camera visual perception system for an unmanned ground vehicle according to claim 3, characterized in that: said main thread performs system initialization and creates the child threads; initialization comprises declaring the image storage space, initializing the serial port, and declaring static variables and parameters; after the child threads are created, the main thread enters a waiting state until the image and result display thread finishes; the image and result display thread continually checks for instructions from the debugging and monitoring computer (4), and on receiving an exit instruction leaves its loop and ends, triggering the main thread to finish and release memory.
5. The multi-camera visual perception system for an unmanned ground vehicle according to claim 3, characterized in that: said lane line perception thread performs lane line perception and yaw angle calculation, comprising the following steps:
(4.1) converting the RGB image to grayscale, then binarizing the grayscale image to obtain a binary image;
(4.2) applying dilation and erosion to reduce noise, then obtaining an edge image by Canny edge detection, reducing the computation of the subsequent Hough transform;
(4.3) applying the Hough line transform to the edge image to obtain the polar-equation parameters of all straight lines in the image;
(4.4) traversing all lines, transforming and projecting them into the world coordinate system, and selecting as the lane line the line whose slope lies in a set range and which is nearest to the front of the vehicle.
6. The multi-camera visual perception system for an unmanned ground vehicle according to claim 5, characterized in that: the binarization method in step (4.1) is: the image is divided into 4 parts from top to bottom by rows; in each part the three largest and three smallest gray values are found, and their mean is taken as the threshold for that part; if the difference between the maximum and minimum values is below an empirical value, the threshold computed for the previous part is used; if the contrast of the first part is poor, a fixed initial threshold TH is used first. The threshold to be found is computed as:

th = Σ_{i=0}^{n-1} (high[i] + low[i]) / (2n)

where th is the threshold to be found, and high[i] and low[i] are the n largest and n smallest gray values respectively.
7. The multi-camera visual perception system for an unmanned ground vehicle according to claim 3, characterized in that: said perception thread for obstacles, traffic cones, and signal lamps is specifically divided into:
(5.1) converting the RGB image into an HSV image;
(5.2) recognizing each of the three kinds of targets by its specific features: using the lane lines, obstacles ahead are classified as inside or outside the lane; whether an object is an obstacle on the lane is judged by color, area, and position, and obstacles in the lane are avoided; a traffic cone lies between the two lane lines, each cone shows red, white, and red regions that decrease in size successively from top to bottom, and cones are placed two or more at a time, otherwise the object is not a cone; a traffic light appears in the image as a red circular connected region with a very bright white area in its middle, and traffic lights are located in the upper part of the image.
8. The multi-camera visual perception system for an unmanned vehicle according to claim 3, characterized in that the traffic sign perception thread is divided into the following steps:
(6.1) image pre-processing: convert the original RGB image into the HSV color space and segment out the red, blue and yellow color regions respectively;
(6.2) traffic sign classification: according to color and shape, traffic signs are divided into red prohibitory signs, blue mandatory signs and yellow warning signs;
(6.3) traffic sign recognition by template matching: downscale the classified image region, convert the region into a binary image, match it against the templates of the corresponding class, and select the sign template with the highest similarity.
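The "highest similarity wins" selection of step (6.3) can be sketched as follows. This is an illustrative pure-NumPy sketch, not the patent's matcher: the resize step is omitted (the binary region of interest and the templates are assumed to share one size), and pixel agreement is used as a stand-in similarity score; a real system would resize the region and could use a library matcher instead.

```python
import numpy as np

def match_sign(binary_roi, templates):
    """Match a binary sign region against the templates of its colour
    class and return the best-matching template name.  Similarity is
    the fraction of agreeing pixels, in [0, 1]."""
    best_name, best_score = None, -1.0
    for name, tpl in templates.items():
        score = float(np.mean(binary_roi == tpl))  # pixel agreement
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score
```

Restricting `templates` to the class found in step (6.2) keeps the matching cheap, which matters for the real-time constraints the system targets.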
CN201210266256.8A 2012-07-30 2012-07-30 Multi-camera visual perception system for UGV (Unmanned Ground Vehicle) Active CN102819263B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210266256.8A CN102819263B (en) 2012-07-30 2012-07-30 Multi-camera visual perception system for UGV (Unmanned Ground Vehicle)


Publications (2)

Publication Number Publication Date
CN102819263A true CN102819263A (en) 2012-12-12
CN102819263B CN102819263B (en) 2014-11-05

Family

ID=47303415

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210266256.8A Active CN102819263B (en) 2012-07-30 2012-07-30 Multi-camera visual perception system for UGV (Unmanned Ground Vehicle)

Country Status (1)

Country Link
CN (1) CN102819263B (en)

Cited By (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103150786A (en) * 2013-04-09 2013-06-12 北京理工大学 Non-contact type unmanned vehicle driving state measuring system and measuring method
CN103226354A (en) * 2013-02-27 2013-07-31 广东工业大学 Photoelectricity-navigation-based unmanned road recognition system
CN103335853A (en) * 2013-07-18 2013-10-02 中国科学院自动化研究所 Unmanned driving vehicle cognitive competence testing system and method
CN104121891A (en) * 2014-07-07 2014-10-29 北京理工大学 Five-camera-based intelligent robot visual device
CN104157160A (en) * 2014-08-08 2014-11-19 中国联合网络通信集团有限公司 Vehicle drive control method and device as well as vehicle
CN104702457A (en) * 2013-12-04 2015-06-10 北车大连电力牵引研发中心有限公司 Vehicle-mounted variable flow control system and vehicle-mounted variable flow debugging method
CN105607637A (en) * 2016-01-25 2016-05-25 重庆德新机器人检测中心有限公司 Unmanned vehicle autopilot system
CN105744139A (en) * 2014-12-09 2016-07-06 广东中星电子有限公司 Wireless debugging method and apparatus applied to high-definition web camera
WO2016106715A1 (en) * 2014-12-31 2016-07-07 SZ DJI Technology Co., Ltd. Selective processing of sensor data
CN106203341A (en) * 2016-07-11 2016-12-07 百度在线网络技术(北京)有限公司 A kind of Lane detection method and device of unmanned vehicle
CN106228635A (en) * 2016-07-15 2016-12-14 百度在线网络技术(北京)有限公司 Information processing method and device for automatic driving vehicle
CN103593671B (en) * 2013-11-25 2017-03-01 中国航天科工集团第三研究院第八三五七研究所 The wide-range lane line visible detection method worked in coordination with based on three video cameras
CN106627463A (en) * 2016-12-22 2017-05-10 深圳市招科智控科技有限公司 Unmanned bus visual perception system and work method for same
CN106627812A (en) * 2016-10-11 2017-05-10 深圳市招科智控科技有限公司 Pilotless bus
EP3168704A1 (en) 2015-11-12 2017-05-17 Hexagon Technology Center GmbH 3d surveying of a surface by mobile vehicles
CN106686119A (en) * 2017-01-21 2017-05-17 张小磊 USB device with encryption in cloud based on cloud computing information
CN106803064A (en) * 2016-12-26 2017-06-06 广州大学 A kind of traffic lights method for quickly identifying
CN107045355A (en) * 2015-12-10 2017-08-15 松下电器(美国)知识产权公司 Control method for movement, autonomous mobile robot
CN107071286A (en) * 2017-05-17 2017-08-18 上海杨思信息科技有限公司 Rotatable platform epigraph high-speed parallel collecting and transmitting method
CN107688174A (en) * 2017-08-02 2018-02-13 北京纵目安驰智能科技有限公司 A kind of image distance-finding method, system, storage medium and vehicle-mounted visually-perceptible equipment
CN108909707A (en) * 2018-07-26 2018-11-30 南京威尔瑞智能科技有限公司 A kind of unmanned vehicle brake gear and its method based on PID control
CN109032125A (en) * 2018-05-31 2018-12-18 上海工程技术大学 A kind of air navigation aid of vision AGV
CN109116846A (en) * 2018-08-29 2019-01-01 五邑大学 A kind of automatic Pilot method, apparatus, computer equipment and storage medium
CN109683603A (en) * 2017-10-18 2019-04-26 江苏卡威汽车工业集团股份有限公司 A kind of pilotless automobile self-control system
CN109683602A (en) * 2017-10-18 2019-04-26 江苏卡威汽车工业集团股份有限公司 A kind of intelligence driverless electric automobile
CN109866752A (en) * 2019-03-29 2019-06-11 合肥工业大学 Double mode parallel vehicles track following driving system and method based on PREDICTIVE CONTROL
CN110175561A (en) * 2019-05-24 2019-08-27 上海电机学院 A kind of detection of road signs and recognition methods
CN110569121A (en) * 2019-09-12 2019-12-13 华润万家有限公司 Multithreading concurrent processing method and device based on application robot
CN111812691A (en) * 2019-04-11 2020-10-23 北京初速度科技有限公司 Vehicle-mounted terminal and image frame detection processing method and device
CN112329670A (en) * 2020-11-11 2021-02-05 上海伯镭智能科技有限公司 Method for recognizing obstacle with irregular road condition of unmanned vehicle, computer program and storage medium
CN112446371A (en) * 2020-11-24 2021-03-05 上海海洋大学 Multi-camera underwater image recognition device and enhancement processing method thereof
CN112731925A (en) * 2020-12-21 2021-04-30 浙江科技学院 Conical barrel identification and path planning and control method for unmanned formula racing car
CN112970029A (en) * 2018-09-13 2021-06-15 辉达公司 Deep neural network processing for sensor blind detection in autonomous machine applications
CN113110169A (en) * 2021-04-14 2021-07-13 合肥工业大学 Vehicle-road cooperative algorithm verification platform based on intelligent miniature vehicle
CN113378735A (en) * 2021-06-18 2021-09-10 北京东土科技股份有限公司 Road marking line identification method and device, electronic equipment and storage medium

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
US10124730B2 (en) 2016-03-17 2018-11-13 Ford Global Technologies, Llc Vehicle lane boundary position

Citations (6)

Publication number Priority date Publication date Assignee Title
EP0290633A1 (en) * 1987-05-09 1988-11-17 Carl Schenck Ag Method for detecting changes in the driving range of an unmanned vehicle
CN2674793Y (en) * 2003-12-26 2005-01-26 深圳市宏昌科技开发有限公司 Remote-moitor for money carrying vehicle
CN101608924A (en) * 2009-05-20 2009-12-23 电子科技大学 A kind of method for detecting lane lines based on gray scale estimation and cascade Hough transform
CN102184551A (en) * 2011-05-10 2011-09-14 东北大学 Automatic target tracking method and system by combining multi-characteristic matching and particle filtering
JP2011198098A (en) * 2010-03-19 2011-10-06 Ihi Aerospace Co Ltd Plane detecting method by stereo camera, and mobile robot employing the same
CN102541061A (en) * 2012-02-07 2012-07-04 清华大学 Micro intelligent vehicle based on visual and auditory information

Non-Patent Citations (1)

Title
李晓东等 (Li Xiaodong et al.): "彩色数字仪表图像二值化技术研究" [Research on binarization of color digital instrument images], 《计算机技术与发展》 (Computer Technology and Development) *

Cited By (51)

Publication number Priority date Publication date Assignee Title
CN103226354A (en) * 2013-02-27 2013-07-31 广东工业大学 Photoelectricity-navigation-based unmanned road recognition system
CN103150786B (en) * 2013-04-09 2015-04-22 北京理工大学 Non-contact type unmanned vehicle driving state measuring system and measuring method
CN103150786A (en) * 2013-04-09 2013-06-12 北京理工大学 Non-contact type unmanned vehicle driving state measuring system and measuring method
CN103335853A (en) * 2013-07-18 2013-10-02 中国科学院自动化研究所 Unmanned driving vehicle cognitive competence testing system and method
CN103335853B (en) * 2013-07-18 2015-09-16 中国科学院自动化研究所 A kind of automatic driving vehicle Cognitive Aptitude Test system and method
CN103593671B (en) * 2013-11-25 2017-03-01 中国航天科工集团第三研究院第八三五七研究所 The wide-range lane line visible detection method worked in coordination with based on three video cameras
CN104702457A (en) * 2013-12-04 2015-06-10 北车大连电力牵引研发中心有限公司 Vehicle-mounted variable flow control system and vehicle-mounted variable flow debugging method
CN104121891A (en) * 2014-07-07 2014-10-29 北京理工大学 Five-camera-based intelligent robot visual device
CN104121891B (en) * 2014-07-07 2016-04-27 北京理工大学 Based on the intelligent robot sighting device of 5 cameras
CN104157160B (en) * 2014-08-08 2016-08-17 中国联合网络通信集团有限公司 Vehicle travel control method, device and vehicle
CN104157160A (en) * 2014-08-08 2014-11-19 中国联合网络通信集团有限公司 Vehicle drive control method and device as well as vehicle
CN105744139A (en) * 2014-12-09 2016-07-06 广东中星电子有限公司 Wireless debugging method and apparatus applied to high-definition web camera
US9778661B2 (en) 2014-12-31 2017-10-03 SZ DJI Technology Co., Ltd. Selective processing of sensor data
WO2016106715A1 (en) * 2014-12-31 2016-07-07 SZ DJI Technology Co., Ltd. Selective processing of sensor data
US10802509B2 (en) 2014-12-31 2020-10-13 SZ DJI Technology Co., Ltd. Selective processing of sensor data
CN107209514A (en) * 2014-12-31 2017-09-26 深圳市大疆创新科技有限公司 The selectivity processing of sensing data
EP3168704A1 (en) 2015-11-12 2017-05-17 Hexagon Technology Center GmbH 3d surveying of a surface by mobile vehicles
CN107045355A (en) * 2015-12-10 2017-08-15 松下电器(美国)知识产权公司 Control method for movement, autonomous mobile robot
CN105607637A (en) * 2016-01-25 2016-05-25 重庆德新机器人检测中心有限公司 Unmanned vehicle autopilot system
CN106203341A (en) * 2016-07-11 2016-12-07 百度在线网络技术(北京)有限公司 A kind of Lane detection method and device of unmanned vehicle
CN106203341B (en) * 2016-07-11 2019-05-21 百度在线网络技术(北京)有限公司 A kind of Lane detection method and device of unmanned vehicle
CN106228635A (en) * 2016-07-15 2016-12-14 百度在线网络技术(北京)有限公司 Information processing method and device for automatic driving vehicle
CN106627812A (en) * 2016-10-11 2017-05-10 深圳市招科智控科技有限公司 Pilotless bus
CN106627463A (en) * 2016-12-22 2017-05-10 深圳市招科智控科技有限公司 Unmanned bus visual perception system and work method for same
CN106803064A (en) * 2016-12-26 2017-06-06 广州大学 A kind of traffic lights method for quickly identifying
CN106803064B (en) * 2016-12-26 2020-05-19 广州大学 Traffic light rapid identification method
CN106686119A (en) * 2017-01-21 2017-05-17 张小磊 USB device with encryption in cloud based on cloud computing information
CN106686119B (en) * 2017-01-21 2017-12-26 江苏开放大学 The unmanned automobile of high in the clouds Encrypted USB flash drive device based on cloud computing information is installed
CN107682340A (en) * 2017-01-21 2018-02-09 合肥龙精灵信息技术有限公司 The unmanned automobile of high in the clouds Encrypted USB flash drive device based on cloud computing information is installed
CN107682340B (en) * 2017-01-21 2020-07-28 凤阳聚梦信息科技有限责任公司 Unmanned vehicle provided with cloud computing information-based cloud encryption USB flash disk device
CN107071286A (en) * 2017-05-17 2017-08-18 上海杨思信息科技有限公司 Rotatable platform epigraph high-speed parallel collecting and transmitting method
CN107688174A (en) * 2017-08-02 2018-02-13 北京纵目安驰智能科技有限公司 A kind of image distance-finding method, system, storage medium and vehicle-mounted visually-perceptible equipment
CN109683602A (en) * 2017-10-18 2019-04-26 江苏卡威汽车工业集团股份有限公司 A kind of intelligence driverless electric automobile
CN109683603A (en) * 2017-10-18 2019-04-26 江苏卡威汽车工业集团股份有限公司 A kind of pilotless automobile self-control system
CN109032125A (en) * 2018-05-31 2018-12-18 上海工程技术大学 A kind of air navigation aid of vision AGV
CN108909707A (en) * 2018-07-26 2018-11-30 南京威尔瑞智能科技有限公司 A kind of unmanned vehicle brake gear and its method based on PID control
CN109116846A (en) * 2018-08-29 2019-01-01 五邑大学 A kind of automatic Pilot method, apparatus, computer equipment and storage medium
CN112970029A (en) * 2018-09-13 2021-06-15 辉达公司 Deep neural network processing for sensor blind detection in autonomous machine applications
CN109866752A (en) * 2019-03-29 2019-06-11 合肥工业大学 Double mode parallel vehicles track following driving system and method based on PREDICTIVE CONTROL
CN109866752B (en) * 2019-03-29 2020-06-05 合肥工业大学 Method for tracking running system of dual-mode parallel vehicle track based on predictive control
CN111812691A (en) * 2019-04-11 2020-10-23 北京初速度科技有限公司 Vehicle-mounted terminal and image frame detection processing method and device
CN111812691B (en) * 2019-04-11 2023-09-12 北京魔门塔科技有限公司 Vehicle-mounted terminal and image frame detection processing method and device
CN110175561A (en) * 2019-05-24 2019-08-27 上海电机学院 A kind of detection of road signs and recognition methods
CN110569121A (en) * 2019-09-12 2019-12-13 华润万家有限公司 Multithreading concurrent processing method and device based on application robot
CN110569121B (en) * 2019-09-12 2024-03-22 华润万家有限公司 Multithreading concurrent processing method and device based on application robot
CN112329670A (en) * 2020-11-11 2021-02-05 上海伯镭智能科技有限公司 Method for recognizing obstacle with irregular road condition of unmanned vehicle, computer program and storage medium
CN112446371A (en) * 2020-11-24 2021-03-05 上海海洋大学 Multi-camera underwater image recognition device and enhancement processing method thereof
CN112731925A (en) * 2020-12-21 2021-04-30 浙江科技学院 Conical barrel identification and path planning and control method for unmanned formula racing car
CN112731925B (en) * 2020-12-21 2024-03-15 浙江科技学院 Cone barrel identification and path planning and control method for formula car
CN113110169A (en) * 2021-04-14 2021-07-13 合肥工业大学 Vehicle-road cooperative algorithm verification platform based on intelligent miniature vehicle
CN113378735A (en) * 2021-06-18 2021-09-10 北京东土科技股份有限公司 Road marking line identification method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN102819263B (en) 2014-11-05

Similar Documents

Publication Publication Date Title
CN102819263B (en) Multi-camera visual perception system for UGV (Unmanned Ground Vehicle)
CN111757822B (en) Systems and methods for enhanced collision avoidance on logistical ground support devices using multisensor detection fusion
US11645852B2 (en) Traffic light detection and lane state recognition for autonomous vehicles
CN107226087B (en) A kind of structured road automatic Pilot transport vehicle and control method
CN103386975B (en) A kind of vehicle obstacle-avoidance method and system based on machine vision
US20190332875A1 (en) Traffic Signal State Classification for Autonomous Vehicles
US20190056736A1 (en) Configuring motion planning for a self-driving tractor unit
CN111459172B (en) Surrounding security unmanned patrol car autonomous navigation system
KR102522230B1 (en) Collaborative Vehicle Headlight Orientation
CN102608998A (en) Vision guiding AGV (Automatic Guided Vehicle) system and method of embedded system
Liu et al. Deep learning-based localization and perception systems: Approaches for autonomous cargo transportation vehicles in large-scale, semiclosed environments
CN110083099B (en) Automatic driving architecture system meeting automobile function safety standard and working method
CN103268072A (en) Miniature vehicle, miniature vehicle control system and control method based on machine vision
KR20220121824A (en) Collaborative vehicle headlight orientation
CN113085896B (en) Auxiliary automatic driving system and method for modern rail cleaning vehicle
KR102521012B1 (en) Collaborative Vehicle Headlight Orientation
US20190087674A1 (en) Method and apparatus for detecting braking behavior of front vehicle of autonomous vehicle
CN114397877A (en) Intelligent automobile automatic driving system
CN115661965B (en) Highway unmanned aerial vehicle intelligence inspection system of integration automatic airport
JP2020500389A (en) Method and system for detecting raised objects present in a parking lot
Ismail et al. Vision-based system for line following mobile robot
Jun et al. Autonomous driving system design for formula student driverless racecar
CN111429734A (en) Real-time monitoring system and method for inside and outside port container trucks
KR20220123238A (en) Collaborative vehicle headlight orientation
CN115129050A (en) Unmanned transportation short-falling system and method for port tractor

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant