CN106803064B - Traffic light rapid identification method - Google Patents

Traffic light rapid identification method

Info

Publication number
CN106803064B
Authority
CN
China
Prior art keywords
traffic light
image
steps
arrow
traffic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611214259.1A
Other languages
Chinese (zh)
Other versions
CN106803064A (en)
Inventor
黄文恺
朱静
李儒国
陈文达
莫国志
李嘉锐
韩晓英
姚佳岷
温泉河
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou University
Original Assignee
Guangzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou University filed Critical Guangzhou University
Priority to CN201611214259.1A priority Critical patent/CN106803064B/en
Publication of CN106803064A publication Critical patent/CN106803064A/en
Application granted granted Critical
Publication of CN106803064B publication Critical patent/CN106803064B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/752 Contour matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a traffic light rapid identification method, which comprises the steps of obtaining an image of the road in front of a vehicle; rapidly locating the position of the traffic light; denoising the image; segmenting the image into regions; screening the segmented regions; confirming the position of the traffic light by a cross verification method; and then further identifying whether the traffic light is circular or arrow-shaped. Through multiple verification steps, the invention achieves a high accuracy rate; it has a small calculation amount, a high recognition speed, the ability to recognize both round and arrow-shaped traffic lights, and little sensitivity to environmental factors, making it convenient to apply to unmanned automobiles and vehicle driver-assistance systems.

Description

Traffic light rapid identification method
Technical Field
The invention relates to the field of vehicle active safety systems, in particular to a traffic light rapid identification method.
Background
Most existing traffic light identification technologies rely on computer graphics processing, machine learning, neural networks and similar techniques. Typically either a direct template matching algorithm or a traffic light recognition algorithm based on an SVM (support vector machine) is used. The template matching algorithm suffers from low recognition accuracy and a low recognition rate; the SVM-based algorithm requires a large amount of sample training for different environments (such as day, night, cloudy weather and reflections) as well as for different road intersections, so the calculation amount is large, the recognition speed is low, and the result is strongly affected by the environment.
Disclosure of Invention
To solve the above problems in the prior art, the invention provides a traffic light rapid identification method with a small calculation amount, high identification speed, high identification accuracy and little sensitivity to environmental factors, which is convenient to apply to unmanned automobiles and vehicle driver-assistance systems.
The traffic light rapid identification method comprises the following steps:
step one: acquiring a road information image in front of the vehicle;
step two: performing quick-jump positioning of the traffic light in the acquired front road image according to the traffic light colors to obtain an initial position of the traffic light;
step three: splitting the initial traffic light position obtained by quick-jump positioning into the three RGB channels; separating the traffic light colors respectively and binarizing the separation result; performing mean denoising;
step four: segmenting the image processed in step three into regions of continuous pixels;
step five: screening the segmented images with the criterion width/height ≈ 1.0, roughly confirming the position of the traffic light, and recording its position in the original image, wherein width and height refer to the width and height of the segmented image;
step six: further confirming, by a cross verification method, whether a given area is the position of the traffic light;
in the cross verification method, the image screened in step five is taken as a central position, a square of the same size is extended from the central position in each of the upward, downward, left and right directions, and the squares in the four directions are examined; if the proportion of defined traffic light colors in the square of some direction reaches 85% or more, the central position is judged to be the position of the traffic light;
step seven: detecting the area at the confirmed traffic light position and judging whether it is circular; if not, executing step eight;
step eight: acquiring the contour of the image with the Canny operator, calculating the HU moments of the image, and matching them against the HU moments of a standard arrow; if the match with the standard arrow HU moments fails, executing step nine;
step nine: passing the picture to an SVM classifier for classification and identification.
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. The invention first locates the suspicious region through quick jumping, which greatly reduces the calculation amount of image processing. For example, step S3 requires binarizing the image: without suspicious-region positioning, every pixel of the image would have to be processed, whereas with it only the suspicious region needs to be processed, which shortens the calculation time.
2. After the image is segmented, the position of the traffic light is confirmed by adopting a cross verification method, so that the accuracy of the algorithm is improved.
3. When identifying the traffic light, regular geometric shapes are fully exploited for auxiliary identification: when a circular traffic light is encountered, neither the HU moment calculation nor the SVM classifier is needed, which shortens the image processing time.
4. HU moments are introduced for matching, so that arrow-shaped traffic signal lamps can be effectively identified; the SVM classifier is used for improving the recognition rate when the image is not clear.
Drawings
FIG. 1 is a schematic flow chart of a traffic light rapid identification method of the present invention;
FIG. 2 is a photograph after color separation of a traffic light;
FIG. 3 is the first traffic light signal picture obtained by image segmentation and screening of FIG. 2;
FIG. 4 is the second traffic light signal picture obtained by image segmentation and screening of FIG. 2.
Detailed Description
The present invention will be further described with reference to the accompanying drawings and specific embodiments, but the embodiments of the invention are not limited thereto.
Examples
Referring to fig. 1, the traffic light rapid identification method of the present invention includes the following steps:
s1: and acquiring an information image of the road in front of the vehicle by using the camera.
S2: performing quick-jump positioning of the traffic light in the acquired front road image to obtain the initial position of the traffic light. Quick jumping determines the suspicious traffic light area by detecting the traffic light colors; a (5x, 5y) scheme can be chosen, i.e. the image is sampled in units of 5 pixels, and by reading this reduced image information the suspicious traffic light area is quickly pre-positioned, which reduces the amount of image data and speeds up processing. The traffic light colors include red, yellow, green and black.
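As an illustration of this quick-jump pre-positioning, the following Python/OpenCV sketch samples the frame every 5 pixels and bounds the samples whose colour resembles a traffic light. The specific RGB thresholds and the helper name quick_jump_prelocate are illustrative assumptions, not values given in the patent.

```python
import cv2

def quick_jump_prelocate(bgr_img, step=5):
    """Sample the frame every `step` pixels and flag samples whose colour is
    close to a traffic-light colour (red, yellow or green); the bounding box
    of the flagged samples is returned as the suspicious traffic-light region."""
    hits = []
    h, w = bgr_img.shape[:2]
    for y in range(0, h, step):
        for x in range(0, w, step):
            b, g, r = bgr_img[y, x].astype(int)
            is_red = r > 150 and g < 90 and b < 90
            is_green = g > 150 and r < 90 and b < 120
            is_yellow = r > 150 and g > 150 and b < 90
            if is_red or is_green or is_yellow:
                hits.append((x, y))
    if not hits:
        return None                      # no suspicious region in this frame
    xs, ys = zip(*hits)
    # (x, y, width, height) of the suspicious region in the original image
    return min(xs), min(ys), max(xs) - min(xs) + step, max(ys) - min(ys) + step

# Example (assumed usage): roi = quick_jump_prelocate(cv2.imread("road.jpg"))
```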
S3: splitting the region of the initial traffic light position obtained by quick-jump positioning into the three RGB channels, separating the traffic light colors respectively, and binarizing the separation result so that each channel contains only the values 0 and 255; then performing mean denoising to remove scattered pixels. The result is shown in FIG. 2. When separating the traffic light colors, the colors are computed mainly from the three RGB channels and compared with preset thresholds.
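A minimal sketch of the channel split, thresholding, binarization and mean denoising of step S3, assuming OpenCV; the 3x3 mean filter and the threshold of 150 are placeholder choices rather than values specified by the patent.

```python
import cv2

def separate_and_binarize(bgr_roi, thresh=150):
    """Split the suspicious region into its B, G, R channels, binarize each
    channel against a preset threshold so it holds only 0 and 255, then apply
    a 3x3 mean filter and re-threshold to drop scattered isolated pixels."""
    b, g, r = cv2.split(bgr_roi)
    masks = {}
    for name, channel in (("red", r), ("green", g), ("blue", b)):
        _, binary = cv2.threshold(channel, thresh, 255, cv2.THRESH_BINARY)
        blurred = cv2.blur(binary, (3, 3))          # mean-value denoising
        _, masks[name] = cv2.threshold(blurred, 127, 255, cv2.THRESH_BINARY)
    return masks
```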
S4: segmenting the image processed in step S3 into regions of continuous pixels. In this step, continuous image regions are computed first, and then each continuous region is cut into an independent small image. When two segmented image parts overlap, they are merged and the maximum extent in length and width is taken.
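Step S4 can be approximated with connected-component analysis followed by merging of overlapping bounding boxes, as sketched below; the patent does not prescribe a particular connectivity or merging routine, so the 8-connectivity and the pairwise merge loop are assumptions.

```python
import cv2

def segment_regions(binary_mask):
    """Find continuous (connected) pixel regions and return their bounding
    boxes; overlapping boxes are merged, keeping the maximum extent."""
    num, _, stats, _ = cv2.connectedComponentsWithStats(binary_mask, connectivity=8)
    boxes = [tuple(int(v) for v in stats[i][:4]) for i in range(1, num)]  # skip background

    def overlaps(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                if overlaps(boxes[i], boxes[j]):
                    x1, y1, w1, h1 = boxes[i]
                    x2, y2, w2, h2 = boxes[j]
                    x, y = min(x1, x2), min(y1, y2)
                    boxes[i] = (x, y, max(x1 + w1, x2 + w2) - x, max(y1 + h1, y2 + h2) - y)
                    del boxes[j]
                    merged = True
                    break
            if merged:
                break
    return boxes
```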
S5: screening the segmented images with the criterion width/height ≈ 1.0 to remove relatively isolated pixels at the image edges, roughly confirming the position of the traffic light, and recording its position in the original image; for each image that meets the condition, 2R = width is recorded, where width and height refer to the width and height of the segmented image. This step handles both arrow-shaped and circular traffic lights; the result for an arrow-shaped light is shown in FIG. 3 and for a circular light in FIG. 4.
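The width/height ≈ 1.0 screening and the 2R = width bookkeeping of step S5 might look like the following; the ±0.25 tolerance is an assumed value, since the patent only states that the ratio should be approximately 1.0.

```python
def screen_by_aspect_ratio(boxes, tol=0.25):
    """Keep only boxes whose width/height is close to 1.0 and record 2R = width
    for each surviving candidate."""
    candidates = []
    for (x, y, w, h) in boxes:
        if h > 0 and abs(w / h - 1.0) <= tol:
            candidates.append({"box": (x, y, w, h), "R": w / 2.0})  # 2R = width
    return candidates
```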
S6: further confirming, by the cross verification method, whether a given area is the position of the traffic light.
The traffic light rapid identification method is mainly applied to the rapid identification of traffic lights by unmanned motor vehicles when starting, stopping, turning and so on. Because the lights indicating start, stop and turn generally number three or more and are arranged either horizontally or vertically, at least one of the four directions (up, down, left, right) of a given traffic light is normally adjacent to another traffic light. The cross verification step is based on this premise.
Since the ratio of height to width of each image screened in step S5 is close to 1:1, the "cross verification" method takes the region to be detected as the central position and extends a square of side 2R in each of the four directions (up, down, left, right); that is, a region of the same size is taken above, below, to the left and to the right of the image screened in step S5, and the squares in the four directions are examined. If the proportion of the defined traffic light colors (red, yellow, green or black) in the square of some direction reaches 85% or more, the central position is considered to be a traffic light signal; in other words, if more than 85% of the pixels in an extended square have a traffic light color, the central position is judged to be the position of the traffic light.
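A sketch of the cross verification test, under the assumption that color_mask is a binary image in which traffic-light colours (including the dark lamp housing, as described above) are non-zero; the 85% threshold follows the text, while the function name and the bounds handling are illustrative.

```python
import cv2

def cross_verify(color_mask, box, ratio_threshold=0.85):
    """Examine the four squares of side 2R above, below, left and right of the
    candidate box; if any of them is at least 85% traffic-light coloured
    (non-zero in the mask), accept the centre box as a traffic light."""
    x, y, w, h = box
    side = max(w, h)                          # side length 2R
    H, W = color_mask.shape[:2]
    neighbours = [(x, y - side), (x, y + side), (x - side, y), (x + side, y)]
    for nx, ny in neighbours:
        if nx < 0 or ny < 0 or nx + side > W or ny + side > H:
            continue                          # square would leave the image
        patch = color_mask[ny:ny + side, nx:nx + side]
        if cv2.countNonZero(patch) / float(side * side) >= ratio_threshold:
            return True
    return False
```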
S7: according to observation and statistics, most traffic light panels carry a white iron ring, so the traffic light position determined by the cross verification method can be further verified by detecting whether such a white iron ring is present, finally confirming the traffic light position. If the white iron ring outside the traffic light panel is not checked, the process proceeds directly to step S8.
This step is not essential. The traffic light position can be further confirmed by detecting whether a white area exists around the position roughly confirmed in step S6. If a white area exists, it is traced along its extent; when the white area is closed and surrounds the position of the traffic light detected by the cross verification method in step S6, a white iron ring is considered to be present.
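The optional white-ring check could be approximated as below: threshold a slightly enlarged neighbourhood for bright (near-white) pixels and accept the ring if some white contour encloses the candidate box. The margin, the grey-level threshold of 180 and the bounding-rectangle enclosure test are simplifying assumptions rather than details taken from the patent.

```python
import cv2

def has_white_ring(bgr_img, box, margin_ratio=0.6, white_thresh=180):
    """Look for a closed white region (the white iron ring of the panel)
    surrounding the candidate box; enclosure is approximated by checking that
    a white contour's bounding rectangle contains the box."""
    x, y, w, h = box
    m = int(max(w, h) * margin_ratio)
    H, W = bgr_img.shape[:2]
    x0, y0 = max(0, x - m), max(0, y - m)
    x1, y1 = min(W, x + w + m), min(H, y + h + m)
    grey = cv2.cvtColor(bgr_img[y0:y1, x0:x1], cv2.COLOR_BGR2GRAY)
    _, white = cv2.threshold(grey, white_thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(white, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    bx, by = x - x0, y - y0                   # box position inside the patch
    for c in contours:
        cx, cy, cw, ch = cv2.boundingRect(c)
        if cx <= bx and cy <= by and cx + cw >= bx + w and cy + ch >= by + h:
            return True
    return False
```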
S8: detecting whether the area at the confirmed traffic light position is circular. To detect whether the area where the traffic light is located is circular, the area S and the perimeter C of the shape are calculated; for a circle they satisfy:
S/C = (πR²)/(2πR) = R/2
If at this point S/C ≈ (width + height)/8, the shape is considered circular (with 2R = width and width ≈ height, R/2 ≈ (width + height)/8). If a circle is detected, go to step S11; otherwise, go to step S9.
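The area/perimeter circularity test in code form; the relative tolerance of 20% is an assumed value, since the patent only requires S/C to be approximately (width + height)/8.

```python
import cv2

def is_circular(binary_patch, width, height, tol=0.2):
    """Area/perimeter test: for a circle S/C = R/2, and with 2R = width the
    expected ratio is (width + height)/8."""
    contours, _ = cv2.findContours(binary_patch, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return False
    c = max(contours, key=cv2.contourArea)
    area = cv2.contourArea(c)                 # S
    perimeter = cv2.arcLength(c, True)        # C
    if perimeter == 0:
        return False
    expected = (width + height) / 8.0
    return abs(area / perimeter - expected) <= tol * expected
```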
S9: extracting the contour of the image with the Canny operator, calculating the HU moments of the image, and matching them against the HU moments of a standard arrow; if the match with the standard arrow HU moments succeeds, the process proceeds to step S11, otherwise step S10 is executed.
In step S9, the HU moments of the shape are calculated using the following formulas:
I1 = y20 + y02
I2 = (y20 - y02)² + 4y11²
I3 = (y30 - 3y12)² + (3y21 - y03)²
I4 = (y30 + y12)² + (y21 + y03)²
I5 = (y30 - 3y12)(y30 + y12)[(y30 + y12)² - 3(y21 + y03)²] + (3y21 - y03)(y21 + y03)[3(y30 + y12)² - (y21 + y03)²]
I6 = (y20 - y02)[(y30 + y12)² - (y21 + y03)²] + 4y11(y30 + y12)(y21 + y03)
I7 = (3y21 - y03)(y30 + y12)[(y30 + y12)² - 3(y21 + y03)²] - (y30 - 3y12)(y21 + y03)[3(y30 + y12)² - (y21 + y03)²]
where Ik is an invariant moment and ypq is the normalized central moment of order (p + q); k, p and q are integers with 1 ≤ k ≤ 7, 0 ≤ p ≤ 3 and 0 ≤ q ≤ 3. The seven invariant moments are constructed from second- and third-order central moments, and they do not change with the orientation or size of the detected arrow, which enables fast detection.
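OpenCV already provides these seven invariant moments through cv2.HuMoments, so the Canny-plus-HU-moment matching of step S9 can be sketched as follows. The Canny thresholds, the log-scale comparison and the distance cut-off are assumptions, and comparing against a prepared arrow template is one reasonable reading of "matching with the standard arrow HU moment".

```python
import cv2
import numpy as np

def hu_moments_of(binary_patch):
    """Extract the contour with Canny and compute the seven Hu invariant moments."""
    edges = cv2.Canny(binary_patch, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    return cv2.HuMoments(cv2.moments(largest)).flatten()

def matches_arrow(patch_hu, template_hu, max_distance=0.5):
    """Compare Hu moments of the candidate and of a standard arrow template in
    log scale; a small summed difference is treated as a successful match."""
    a = -np.sign(patch_hu) * np.log10(np.abs(patch_hu) + 1e-30)
    b = -np.sign(template_hu) * np.log10(np.abs(template_hu) + 1e-30)
    return float(np.sum(np.abs(a - b))) <= max_distance
```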
S10: passing the picture to an SVM classifier for classification and identification. The SVM classifier must be trained in advance: during training, the SVM classifier extracts features from the pictures to form multi-dimensional feature vectors, the samples are manually labeled as left-arrow, right-arrow or forward-arrow, and finally the SVM classifier stores the labeled results in a feature library.
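A sketch of training and using the SVM classifier with OpenCV's cv2.ml module; the RBF kernel, the 32x32 resized-patch features and the numeric label convention (0 = left, 1 = right, 2 = forward) are illustrative assumptions, since the patent only says that multi-dimensional feature vectors are extracted and manually labeled.

```python
import cv2
import numpy as np

def train_arrow_svm(samples, labels):
    """Train an SVM on manually labelled arrow patches. `samples` is an N x D
    float32 matrix of feature vectors and `labels` holds 0/1/2 for
    left / right / forward arrows (an assumed label convention)."""
    svm = cv2.ml.SVM_create()
    svm.setType(cv2.ml.SVM_C_SVC)
    svm.setKernel(cv2.ml.SVM_RBF)
    svm.train(np.float32(samples), cv2.ml.ROW_SAMPLE, np.int32(labels))
    return svm

def classify_arrow(svm, patch, size=(32, 32)):
    """Resize the patch, flatten it into a feature vector and classify it."""
    feature = cv2.resize(patch, size).astype(np.float32).reshape(1, -1)
    _, result = svm.predict(feature)
    return int(result[0][0])
```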
S11: sending the result out through a serial port.
Since steps S8, S9 and S10 are executed in layers, most cases are resolved in steps S8 and S9 with a small amount of computation, which reduces the recognition time.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (3)

1. A traffic light rapid identification method is characterized by comprising the following steps:
step one: acquiring a road information image in front of a vehicle;
step two: performing quick-jump positioning of the traffic light in the acquired front road image according to the traffic light colors to obtain an initial position of the traffic light;
step three: splitting the initial traffic light position obtained by quick-jump positioning into the three RGB channels; separating the traffic light colors respectively and binarizing the separation result; performing mean denoising;
step four: segmenting the image processed in step three into regions of continuous pixels;
step five: screening the segmented images with the criterion width/height ≈ 1.0, roughly confirming the position of the traffic light, and recording its position in the original image, wherein width and height refer to the width and height of the segmented image;
step six: further confirming, by a cross verification method, whether a given area is the position of the traffic light;
wherein the cross verification method takes the image screened in step five as a central position, extends a square of the same size from the central position in each of the upward, downward, left and right directions, and examines the squares in the four directions; if the proportion of defined traffic light colors in the square of some direction reaches 85% or more, the central position is judged to be the position of the traffic light;
step seven: detecting the area at the confirmed traffic light position and judging whether it is circular; if not, executing step eight;
step eight: acquiring the contour of the image with the Canny operator, calculating the HU moments of the image, and matching them against the HU moments of a standard arrow; if the match with the standard arrow HU moments fails, executing step nine;
step nine: passing the picture to an SVM classifier for classification and identification.
2. The traffic light rapid identification method according to claim 1, wherein the SVM classifier of step nine needs to be trained; during training, the SVM classifier extracts features from the picture to form multi-dimensional feature vectors, the samples are then manually labeled as left-arrow, right-arrow or forward-arrow, and finally the SVM classifier stores the manually labeled results in a feature library.
3. The traffic light rapid identification method according to claim 1, characterized in that, between step six and step seven, the following is further executed:
detecting whether a white area exists around the traffic light position determined by the cross verification method; if a white area exists, tracing along the white area, and when the white area is closed and surrounds the traffic light position detected by the cross verification method, considering that a white iron ring exists.
CN201611214259.1A 2016-12-26 2016-12-26 Traffic light rapid identification method Active CN106803064B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611214259.1A CN106803064B (en) 2016-12-26 2016-12-26 Traffic light rapid identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611214259.1A CN106803064B (en) 2016-12-26 2016-12-26 Traffic light rapid identification method

Publications (2)

Publication Number Publication Date
CN106803064A CN106803064A (en) 2017-06-06
CN106803064B true CN106803064B (en) 2020-05-19

Family

ID=58984236

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611214259.1A Active CN106803064B (en) 2016-12-26 2016-12-26 Traffic light rapid identification method

Country Status (1)

Country Link
CN (1) CN106803064B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107563301A (en) * 2017-08-09 2018-01-09 上海炬宏信息技术有限公司 Red signal detection method based on image processing techniques
CN109635640B (en) * 2018-10-31 2020-12-08 百度在线网络技术(北京)有限公司 Traffic light identification method, device and equipment based on point cloud and storage medium
CN110021176B (en) * 2018-12-21 2021-06-15 文远知行有限公司 Traffic light decision method, device, computer equipment and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011216051A (en) * 2010-04-02 2011-10-27 Institute Of National Colleges Of Technology Japan Program and device for discriminating traffic light
CN102819263A (en) * 2012-07-30 2012-12-12 中国航天科工集团第三研究院第八三五七研究所 Multi-camera visual perception system for UGV (Unmanned Ground Vehicle)
CN103996017A (en) * 2014-02-24 2014-08-20 航天恒星科技有限公司 Ship detection method based on Hu invariant moment and support vector machine
CN104021378A (en) * 2014-06-07 2014-09-03 北京联合大学 Real-time traffic light recognition method based on space-time correlation and priori knowledge
CN104574960A (en) * 2014-12-25 2015-04-29 宁波中国科学院信息技术应用研究院 Traffic light recognition method
CN104766046A (en) * 2015-02-06 2015-07-08 哈尔滨工业大学深圳研究生院 Detection and recognition algorithm conducted by means of traffic sign color and shape features
CN104791113A (en) * 2015-03-20 2015-07-22 武汉理工大学 Automatic engine start and stop intelligent trigger method and system based on driving road condition
CN104851288A (en) * 2015-04-16 2015-08-19 宁波中国科学院信息技术应用研究院 Traffic light positioning method
US9442487B1 (en) * 2014-08-15 2016-09-13 Google Inc. Classifier hierarchies for traffic light and traffic indicator detection
CN106023623A (en) * 2016-07-28 2016-10-12 南京理工大学 Recognition and early warning method of vehicle-borne traffic signal and symbol based on machine vision


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Real time recognition of traffic light and their signal count-down timings; Sathiya et al.; International Conference on Information Communication & Embedded Systems, IEEE; 2015-12-31; full text *
Traffic light recognition for intelligent vehicles; Li Guangliang et al.; Journal of Hangzhou Dianzi University; 2014-05-31; pp. 80-82 *

Also Published As

Publication number Publication date
CN106803064A (en) 2017-06-06

Similar Documents

Publication Publication Date Title
CN111212772B (en) Method and device for determining a driving strategy of a vehicle
US10133941B2 (en) Method, apparatus and device for detecting lane boundary
Siogkas et al. Traffic lights detection in adverse conditions using color, symmetry and spatiotemporal information
CN105678285B (en) A kind of adaptive road birds-eye view transform method and road track detection method
Gomez et al. Traffic lights detection and state estimation using hidden markov models
Chen et al. Nighttime brake-light detection by Nakagami imaging
CN107978165A (en) Intersection identifier marking and signal lamp Intellisense method based on computer vision
KR101094752B1 (en) Lane Classification Method Using Statistical Model of HSI Color Information
CN102298693B (en) Expressway bend detection method based on computer vision
CN107316486A (en) Pilotless automobile visual identifying system based on dual camera
CN101334836A (en) License plate positioning method incorporating color, size and texture characteristic
CN107886034B (en) Driving reminding method and device and vehicle
CN108090459B (en) Traffic sign detection and identification method suitable for vehicle-mounted vision system
CN104978746B (en) A kind of body color recognition methods of driving vehicle
CN101369312B (en) Method and equipment for detecting intersection in image
CN106803064B (en) Traffic light rapid identification method
KR101922852B1 (en) Method for Detecting Border of Grassland Using Image-Based Color Information
JP6527013B2 (en) Computer implemented system and method for extracting / recognizing alphanumeric characters from traffic signs
CN109886168B (en) Ground traffic sign identification method based on hierarchy
Devane et al. Lane detection techniques using image processing
CN110688907A (en) Method and device for identifying object based on road light source at night
CN106022268A (en) Identification method and device of speed limiting sign
CN107025796A (en) Automobile assistant driving vision early warning system and its method for early warning
CN112115800A (en) Vehicle combination recognition system and method based on deep learning target detection
KR101787292B1 (en) Method for Improving Performance of Line Recognition Rate Using Image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant