CN113101155A - Traffic light intersection blind guiding method and blind guiding device based on machine vision - Google Patents

Traffic light intersection blind guiding method and blind guiding device based on machine vision

Info

Publication number
CN113101155A
CN113101155A (application number CN202110345259.XA)
Authority
CN
China
Prior art keywords
blind guiding
traffic light
image
signal lamp
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110345259.XA
Other languages
Chinese (zh)
Inventor
高大伟
李谊骏
高娟
史勤刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu College of University of Electronic Science and Technology of China
Original Assignee
Chengdu College of University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu College of University of Electronic Science and Technology of China filed Critical Chengdu College of University of Electronic Science and Technology of China
Priority to CN202110345259.XA priority Critical patent/CN113101155A/en
Publication of CN113101155A publication Critical patent/CN113101155A/en
Pending legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H3/00 Appliances for aiding patients or disabled persons to walk about
    • A61H3/06 Walking aids for blind persons
    • A61H3/061 Walking aids for blind persons with electronic detecting or guiding means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration using local operators
    • G06T5/30 Erosion or dilatation, e.g. thinning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/09 Arrangements for giving variable traffic instructions
    • G08G1/095 Traffic lights
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G06T2207/20032 Median filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256 Lane; Road marking

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Veterinary Medicine (AREA)
  • Biophysics (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • Pain & Pain Management (AREA)
  • Public Health (AREA)
  • Epidemiology (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Rehabilitation Therapy (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a traffic light intersection blind guiding method and blind guiding device based on machine vision. The blind guiding method acquires an image of the environment at the traffic light intersection to be crossed, extracts the signal light color and countdown information to judge whether the user may cross, and acquires zebra crossing image information to judge whether the user has deviated from the correct route, prompting the user whether it is suitable to cross. The blind guiding device comprises a glasses frame together with a power supply module, a central processing module, a voice broadcast module, an image acquisition module and a wireless communication module mounted on the frame. The invention can provide accurate guidance for blind people who need to cross traffic light intersections, ensures their personal safety in daily travel, and has the advantages of high reliability, high accuracy, simple structure and easy portability.

Description

Traffic light intersection blind guiding method and blind guiding device based on machine vision
Technical Field
The invention belongs to the technical field of machine learning, and particularly relates to a traffic light intersection blind guiding method and device based on machine vision.
Background
With the growth of the national economy and the vigorous development of urban traffic and the automobile industry, a large number of signalized crossroads have been added in many places. This causes great inconvenience to blind people: many dare not cross traffic light intersections, or even avoid going out altogether, which greatly hampers their daily life and work.
Blind people mainly fall into two groups. One group is the totally blind, who have lost all response to light and cannot see at all; the other is the semi-blind, whose vision is below 60% of normal. According to statistics, by 2018 there was roughly one blind person for every 82 people in China. For such a large population, and especially for the totally blind, daily life and work involve great inconvenience, and the inconvenience and safety hazards of road travel are among the most serious problems. At present, blind people crossing the road still rely on traditional methods such as traffic light audio prompters, the help of sighted people, the footsteps of other pedestrians on the road, and the engine sounds of cars stopped at the stop line to judge the road conditions. Semi-blind people mostly judge the road conditions from blurred images and follow the shadows of other pedestrians across the road.
However, these methods carry great safety risks, mainly in the following respects: (1) the environment around a traffic light is more complex and noisier than other environments, making it hard for a blind person to pick out the engine sound of stopped cars; (2) violations such as running red lights occur frequently at traffic lights, so judging whether it is safe to cross from such cues easily leads to wrong decisions and thus to accidents; (3) at present only a few major urban intersections are equipped with traffic light audio prompters, which are far from universal and cannot serve as a routine basis for judging the signal state; (4) even when the current light state is obtained by these means, the remaining red or green time cannot be judged, leaving a major safety hazard; (5) no tactile paving is laid on zebra crossings, so blind people easily deviate from their course when crossing and may even walk into dangerous areas.
Disclosure of Invention
The invention aims to provide a traffic light intersection blind guiding method and a blind guiding device based on machine vision, which can provide accurate guidance for blind persons who need to pass through the traffic light intersection, ensure personal safety of the blind persons in daily traffic, and have the advantages of high reliability, high accuracy, simple structure and convenience in carrying.
The technical scheme adopted by the invention is as follows:
a traffic light intersection blind guiding method based on machine vision comprises the following steps:
S1, collecting an image of the environment at the traffic light intersection to be crossed;
S2, extracting real-time signal light state information, including distance information, signal light color information and the digital countdown of the signal light;
S3, judging whether the signal light color is green; if so, calculating the predicted maximum transit time t2 from the distance information, and if not, broadcasting the real-time signal light state by voice and prompting the user that crossing is prohibited;
S4, judging whether the predicted transit time satisfies a preset signal light passing rule; if so, proceeding to step S6, otherwise broadcasting the real-time signal light state by voice and prompting the user that crossing is prohibited;
the signal light passing rule is: t2 - t1 ≥ t0, where t2 denotes the predicted maximum transit time, t1 denotes the remaining signal light time, and t0 denotes a transit time threshold;
S5, repeating steps S1 to S4 with period T;
S6, collecting image information of the zebra crossing to be crossed, and calculating the angle difference A between the zebra crossing direction and the user's direction of travel;
S7, judging whether the angle difference A satisfies a preset zebra crossing rule; if so, proceeding to the next step, otherwise prompting the user to turn and returning to step S6;
the zebra crossing rule is: A ≤ a, where a represents a preset angle threshold;
S8, prompting the user to cross the intersection.
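For illustration only, the decision rules of steps S3, S4 and S7 can be sketched as follows. This is a non-limiting Python sketch; the function and parameter names are assumptions for illustration, and the passing rule is taken exactly as stated above (t2 - t1 ≥ t0):

```python
def may_cross(color, t2, t1, t0):
    """Steps S3-S4: decide whether crossing is allowed.

    color -- detected signal light colour ("red", "yellow" or "green")
    t2    -- predicted maximum transit time, in seconds
    t1    -- remaining signal light time read from the countdown, in seconds
    t0    -- preset transit time threshold, in seconds
    """
    # Step S3: only a green light allows crossing at all.
    if color != "green":
        return False
    # Step S4: the signal light passing rule as stated, t2 - t1 >= t0.
    return t2 - t1 >= t0


def heading_ok(angle_diff, a):
    """Step S7: the angle difference A must not exceed the preset threshold a."""
    return angle_diff <= a
```

A caller would repeat `may_cross` with period T (step S5) and, once it holds, keep checking `heading_ok` while guiding the user across (steps S6 to S8).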
Further, the step S2 adopts HSV color space and DNN neural network to extract the color information of the signal lamp; the method specifically comprises the following steps:
1.1: training a DNN neural network-based neural network model; the method specifically comprises the following steps:
initializing a weight matrix W and a bias parameter b and inputting a data set;
updating a weight matrix W and a bias parameter b through a forward propagation algorithm and a backward propagation algorithm of a neural network;
1.2: extracting a characteristic image of the environment image;
1.3: and inputting the characteristic image into the trained neural network model to obtain the color information of the signal lamp.
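The colour decision of step 1.3 can be approximated by a hue-based rule in HSV space. In the sketch below, the standard library's `colorsys` conversion stands in for the HSV transform, and the hue bands are illustrative guesses, since the patent does not specify thresholds:

```python
import colorsys

def lamp_colour(r, g, b):
    """Classify the mean RGB colour of a lamp region by its HSV hue.

    The hue bands are illustrative assumptions: red wraps around 0 degrees,
    yellow sits near 60 degrees, green near 120 degrees.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    deg = h * 360.0
    if s < 0.3 or v < 0.3:
        return "unknown"  # too desaturated or too dark to be a lit lamp
    if deg < 20 or deg > 340:
        return "red"
    if 40 <= deg <= 80:
        return "yellow"
    if 90 <= deg <= 160:
        return "green"
    return "unknown"
```

In the patented method this rule-of-thumb is replaced by the trained DNN model, which learns the decision boundaries instead of hard-coding them.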
Further, in step S2, the method for detecting an edge is used to extract the remaining time information of the signal lamp, and the specific process is as follows:
2.1: sequentially performing Gaussian blur, grayscale conversion, the Sobel operator, binarization and a closing operation on the acquired environment image to obtain a feature image;
2.2: performing dilation and erosion operations on the feature image;
2.3: performing median filtering, replacing each pixel with the median gray value of its neighborhood;
2.4: extracting the digit edges through edge detection;
2.5: segmenting the digits and matching each segmented digit against the templates;
2.6: outputting the matching result.
Further, the step S6 specifically includes:
3.1: sequentially performing grayscale conversion, Gaussian filter denoising and thresholding on the acquired zebra crossing image, and extracting a feature image;
3.2: eroding and dilating the feature image;
3.3: performing contour detection on the feature image, extracting the zebra crossing feature region and drawing its contour;
3.4: calculating the deviation angle A between the current heading and the zebra crossing direction through the Hough line transform;
3.5: outputting the calculation result.
The invention also discloses a blind guiding device used by the above machine vision-based traffic light intersection blind guiding method. The device comprises a glasses frame, a power supply module, a central processing module, a voice broadcast module, an image acquisition module and a wireless communication module; the central processing module is connected to an external intelligent terminal through the wireless communication module, its input end is connected to the image acquisition module, and its control end is connected to the voice broadcast module.
Furthermore, the voice broadcast module comprises a loudspeaker and an earphone.
Furthermore, a wire groove is formed inside a temple of the glasses frame, and a connecting wire runs through the groove; the central processing module is arranged at the front end of the temple, and the earphone is arranged at the rear end of the temple and connected to the central processing module by the connecting wire.
Further, the power module adopts a solar panel.
Further, the image acquisition module adopts two high-definition cameras, which are arranged at the left and right lens mounting positions of the glasses frame.
In summary, due to the adoption of the technical scheme, the invention has the beneficial effects that:
(1) By collecting environment images of the intersection and combining the HSV color space with a DNN neural network, the real-time signal light state and zebra crossing image information are extracted, whether the intersection can be crossed safely is judged, and the user is informed in real time, providing safe, effective, reliable and accurate traffic guidance and ensuring the user's personal safety when crossing traffic light intersections.
(2) The blind guiding device composed of a glasses frame, a wireless communication module, a voice broadcast module and other parts is easy for the user to carry and use, simple in structure, and convenient and durable.
Drawings
FIG. 1 is a flow chart of a blind guiding method according to the present invention;
FIG. 2 is a flow chart of signal light color information extraction;
fig. 3 is a flowchart of signal lamp remaining time information extraction;
FIG. 4 is a flow chart of obtaining an angle difference between a zebra crossing traffic direction and a user traveling direction;
FIG. 5 is a grayscale image of a zebra crossing image;
FIG. 6 is a feature diagram of a zebra crossing image;
fig. 7 is a schematic structural diagram of the blind guiding device of the present invention.
The labels in the figure are:
1. central processing module; 2. earphone; 3. speaker; 4. high-definition camera; 5. lens; 6. wireless communication module; 8. glasses frame; 9. memory card; 10. power supply module.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments.
A traffic light intersection blind guiding method based on machine vision comprises the following steps:
and S1, collecting the environmental image to be passed through the traffic light intersection.
The high-definition camera 4 collects image information of the current environment and transmits the image information to the central processing module 1.
S2: extracting the real-time signal light state information, including distance information, signal light color information and the digital countdown of the signal light.
According to the information transmitted by the high-definition camera 4, the central processing module 1 extracts the signal light color information using the HSV color space and a DNN neural network, and extracts the countdown digits below the traffic light by an edge detection method to obtain the remaining signal light time.
As shown in fig. 2, the process of extracting the color information of the signal lamp specifically includes:
1.1: training a DNN neural network-based neural network model; the specific process is as follows:
initializing the weight matrix W and bias parameters b, inputting the data set, computing the loss, and updating W and b through the forward and backward propagation algorithms of the neural network, progressively improving the network's accuracy until an optimal model is obtained;
1.2: extracting a characteristic image of the environment image; the method specifically comprises the following steps:
scaling the image acquired by the high-definition camera 4 with a Gaussian pyramid algorithm, then segmenting the image to separate the foreground from the background, performing histogram equalization to enhance the feature points in the image, converting the RGB color space to HSV space and extracting the feature points, and finally removing noise with an opening operation;
1.3: inputting the feature image into the trained neural network model and obtaining the signal light color as red, yellow or green.
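The training of step 1.1 can be illustrated with a minimal fully connected network trained by forward and backward propagation. The two-feature toy dataset and the layer sizes below are stand-ins chosen for illustration, not the patent's actual training data or architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in dataset: 2-dimensional feature vectors with one-hot targets,
# a hypothetical substitute for real HSV features of lamp regions.
X = np.array([[1.0, 0.1], [0.9, 0.0], [0.1, 1.0], [0.0, 0.9]])
Y = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])

# Step 1.1: initialise the weight matrices W and bias parameters b.
W1 = rng.normal(0.0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 2)); b2 = np.zeros(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(x @ W1 + b1)        # hidden layer activations
    return h, sigmoid(h @ W2 + b2)  # network output

_, out0 = forward(X)
loss_before = float(((out0 - Y) ** 2).mean())

lr = 0.5
for _ in range(800):
    # Forward propagation.
    h, out = forward(X)
    # Backward propagation of the squared error; update W and b.
    d_out = (out - Y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

_, out1 = forward(X)
loss_after = float(((out1 - Y) ** 2).mean())
```

After training, the loss has decreased and the network separates the two toy classes; the real model would instead be trained on labelled lamp-region feature images.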
As shown in fig. 3, the process of extracting the remaining time information of the signal lamp specifically includes:
2.1: first performing Gaussian blur on the image acquired by the high-definition camera 4 to reduce noise, then converting it to a grayscale image to facilitate the subsequent edge detection; to make the edges more distinct, the Sobel operator is applied, followed by binarization and a closing operation, approximating the gradient of the image brightness function and obtaining a gray-level gradient vector for each pixel;
2.2: performing dilation and erosion operations to enhance the feature points of the image;
2.3: performing median filtering, replacing each pixel with the median gray value of its neighborhood;
2.4: extracting the feature points of the digit contours through edge detection to obtain an image showing the contour edges;
2.5: segmenting the characters and matching each segmented digit against the stored templates;
2.6: outputting the matching result.
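Steps 2.5 and 2.6 can be sketched with small hypothetical binary glyphs standing in for the stored digit templates; a simple pixel-agreement score replaces whatever matching measure the device actually uses:

```python
import numpy as np

# Hypothetical 5-row binary glyphs standing in for the stored digit templates;
# a real device would use templates cropped from reference countdown images.
TEMPLATES = {
    0: np.array([[1, 1, 1], [1, 0, 1], [1, 0, 1], [1, 0, 1], [1, 1, 1]]),
    1: np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0], [0, 1, 0], [0, 1, 0]]),
    7: np.array([[1, 1, 1], [0, 0, 1], [0, 0, 1], [0, 0, 1], [0, 0, 1]]),
}

def match_digit(segment):
    """Step 2.5: compare one segmented binary digit against every template and
    return the digit whose template agrees on the most pixels."""
    scores = {d: int((segment == t).sum()) for d, t in TEMPLATES.items()}
    return max(scores, key=scores.get)

def read_countdown(segments):
    """Step 2.6: concatenate the per-digit matches into the remaining time."""
    return int("".join(str(match_digit(s)) for s in segments))
```

For example, two segments matching the glyphs for 1 and 0 read as a remaining time of 10 seconds, and the score remains robust to a few flipped pixels.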
S3: judging whether the signal light color is green; if so, the predicted maximum transit time t2 is calculated from the distance information; if not, the real-time signal light state is broadcast by voice and the user is prompted that crossing is prohibited.
S4: judging whether the predicted transit time satisfies the preset signal light passing rule; if so, the judgment result is sent to the central processing module and the method proceeds to step S6; otherwise the real-time signal light state is broadcast by voice and the user is prompted that crossing is prohibited.
The signal light passing rule is: t2 - t1 ≥ t0, where t2 denotes the predicted maximum transit time, t1 denotes the remaining signal light time, and t0 denotes a transit time threshold.
S5: repeating steps S1 to S4 with period T.
S6: collecting image information of the zebra crossing to be crossed and calculating the angle difference A between the zebra crossing direction and the user's direction of travel.
As shown in fig. 4 to 6, the specific steps are:
3.1: converting the collected color image to grayscale to avoid the influence of lighting on recognition; then applying Gaussian filter denoising, which smooths the pixels in each neighborhood while assigning different weights to pixels at different positions, preserving the gray values of the image as far as possible; then removing regions with small gray values by thresholding to reduce interference from the surroundings in subsequent recognition;
3.2: eroding and dilating the feature image to enhance the zebra crossing features in the image;
3.3: performing contour detection on the feature image, extracting the region of interest, namely the zebra crossing feature region, and drawing its contour;
3.4: calculating the deviation angle A between the current heading and the zebra crossing direction through the Hough line transform;
3.5: outputting the calculation result.
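Step 3.4 can be sketched under the assumption that the Hough line transform has already returned one dominant stripe line, given as two image points. The coordinate convention (the user's heading along the vertical image axis) is an assumption for illustration:

```python
import math

def deviation_angle(line_p1, line_p2, heading_deg=90.0):
    """Angle difference A between the zebra crossing direction and the user's
    direction of travel.

    line_p1, line_p2 -- two image points on a dominant stripe line, e.g. as
                        returned by a probabilistic Hough line transform
    heading_deg      -- the camera's forward direction in image coordinates;
                        with the camera facing straight ahead this is taken
                        as the vertical axis (90 degrees)
    """
    (x1, y1), (x2, y2) = line_p1, line_p2
    # Orientation of the stripe itself, folded into [0, 180).
    stripe_deg = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0
    # Stripes run across the walking direction, so the crossing direction is
    # perpendicular to the stripe orientation.
    crossing_deg = (stripe_deg + 90.0) % 180.0
    diff = abs(crossing_deg - heading_deg)
    return min(diff, 180.0 - diff)
```

A perfectly horizontal stripe yields A = 0, i.e. the user is walking straight along the crossing; step S7 then compares A against the preset threshold a.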
S7: judging whether the angle difference A satisfies the preset zebra crossing rule; if so, the method proceeds to the next step; otherwise the user is prompted to turn and the method returns to step S6.
The zebra crossing rule is: A ≤ a, where a represents a preset angle threshold.
S8: prompting the user to cross the intersection.
In the above process, the blind guiding device is wirelessly connected to a terminal such as the user's mobile phone, and at the end of the countdown the user is additionally reminded through external signals such as phone vibration, according to the actual traffic conditions. For example, signals are sent to the phone when the remaining green time is 15 s, 10 s and 5 s, and the phone vibrates to reinforce the reminder.
The invention also discloses a blind guiding device used by the above machine vision-based traffic light intersection blind guiding method. As shown in fig. 7, the device comprises a glasses frame 8, a power supply module 10, a central processing module 1, a voice broadcast module, an image acquisition module, a wireless communication module 6 and a memory card 9; the central processing module 1 is connected to an external intelligent terminal through the wireless communication module 6, its input end is connected to the image acquisition module, and its control end is connected to the voice broadcast module.
The power supply module 10 preferably adopts a solar panel to supply power to each part of the blind guiding device, and the memory card 9 is used to store processing data.
The voice broadcast module includes a speaker 3 and an earphone 2. A wire groove is formed inside a temple of the glasses frame 8, and a connecting wire runs through it; the central processing module 1 is arranged at the front end of the temple, the earphone 2 is arranged at the rear end of the temple and connected to the central processing module 1 by the connecting wire, and the speaker 3 is arranged at the front end of the temple, so that the user can comfortably wear the earphone 2 while wearing the glasses frame 8 and can conveniently hear the voice broadcast.
The image acquisition module adopts 1080p high-definition cameras 4 with a resolution of 1980x1080 and an onboard wireless communication module 6. There are two high-definition cameras 4, arranged at the left and right lens mounting positions of the glasses frame 8; the lenses 5 of the cameras face directly forward from the frame 8, ensuring that the viewing direction is consistent with the user's direction of travel.
The blind guiding method and blind guiding device can assist blind users in judging whether they can safely cross an intersection and inform them in real time, providing safe, effective, reliable and accurate traffic guidance and ensuring their personal safety when crossing traffic light intersections. Meanwhile, the blind guiding device composed of the glasses frame, wireless communication module, voice broadcast module and other parts is easy to carry and use, reasonable in structure and ingenious in design, and has excellent prospects for popularization.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (9)

1. A traffic light intersection blind guiding method based on machine vision is characterized in that: the method comprises the following steps:
S1, collecting an image of the environment at the traffic light intersection to be crossed;
S2, extracting real-time signal light state information, including distance information, signal light color information and the digital countdown of the signal light;
S3, judging whether the signal light color is green; if so, calculating the predicted maximum transit time t2 from the distance information, and if not, broadcasting the real-time signal light state by voice and prompting the user that crossing is prohibited;
S4, judging whether the predicted transit time satisfies a preset signal light passing rule; if so, proceeding to step S6, otherwise broadcasting the real-time signal light state by voice and prompting the user that crossing is prohibited;
the signal light passing rule is: t2 - t1 ≥ t0, where t2 denotes the predicted maximum transit time, t1 denotes the remaining signal light time, and t0 denotes a transit time threshold;
S5, repeating steps S1 to S4 with period T;
S6, collecting image information of the zebra crossing to be crossed, and calculating the angle difference A between the zebra crossing direction and the user's direction of travel;
S7, judging whether the angle difference A satisfies a preset zebra crossing rule; if so, proceeding to the next step, otherwise prompting the user to turn and returning to step S6;
the zebra crossing rule is: A ≤ a, where a represents a preset angle threshold;
S8, prompting the user to cross the intersection.
2. The machine vision-based traffic light intersection blind guiding method according to claim 1, characterized in that: the step S2 adopts HSV color space and DNN neural network to extract the color information of the signal lamp; the method specifically comprises the following steps:
1.1: training a DNN neural network-based neural network model; the method specifically comprises the following steps:
initializing a weight matrix W and a bias parameter b and inputting a data set;
updating a weight matrix W and a bias parameter b through a forward propagation algorithm and a backward propagation algorithm of a neural network;
1.2: extracting a characteristic image of the environment image;
1.3: and inputting the characteristic image into the trained neural network model to obtain the color information of the signal lamp.
3. The machine vision-based traffic light intersection blind guiding method according to claim 1, characterized in that: in step S2, the edge detection method is used to extract the remaining time information of the signal lamp, and the specific process is as follows:
2.1: sequentially performing Gaussian blur, grayscale conversion, the Sobel operator, binarization and a closing operation on the acquired environment image to obtain a feature image;
2.2: performing dilation and erosion operations on the feature image;
2.3: performing median filtering, replacing each pixel with the median gray value of its neighborhood;
2.4: extracting the digit edges through edge detection;
2.5: segmenting the digits and matching each segmented digit against the templates;
2.6: outputting the matching result.
4. The machine vision-based traffic light intersection blind guiding method according to claim 1, characterized in that: the step S6 specifically includes:
3.1: sequentially performing grayscale conversion, Gaussian filter denoising and thresholding on the acquired zebra crossing image, and extracting a feature image;
3.2: eroding and dilating the feature image;
3.3: performing contour detection on the feature image, extracting the zebra crossing feature region and drawing its contour;
3.4: calculating the deviation angle A between the current heading and the zebra crossing direction through the Hough line transform;
3.5: outputting the calculation result.
5. The blind guiding device adopted by the machine vision-based traffic light intersection blind guiding method, characterized in that: the device comprises a glasses frame, a power supply module, a central processing module, a voice broadcast module, an image acquisition module and a wireless communication module; the central processing module is connected to an external intelligent terminal through the wireless communication module, the input end of the central processing module is connected to the image acquisition module, and the control end of the central processing module is connected to the voice broadcast module.
6. The blind guiding device adopted by the machine vision-based traffic light intersection blind guiding method according to claim 5, is characterized in that: the voice broadcast module comprises a loudspeaker and an earphone.
7. The blind guiding device adopted by the machine vision-based traffic light intersection blind guiding method according to claim 6, characterized in that: a wire groove is provided inside a temple of the spectacle frame, and a connecting wire runs through the groove; the central processing module is arranged at the front end of the temple, and the earphone is arranged at the rear end of the temple and connected to the central processing module through the connecting wire.
8. The blind guiding device adopted by the machine vision-based traffic light intersection blind guiding method according to claim 5, is characterized in that: the power module adopts a solar cell panel.
9. The blind guiding device adopted by the machine vision-based traffic light intersection blind guiding method according to claim 5, characterized in that: the image acquisition module comprises two high-definition cameras, arranged at the left and right lens mounting positions of the spectacle frame.
CN202110345259.XA 2021-03-31 2021-03-31 Traffic light intersection blind guiding method and blind guiding device based on machine vision Pending CN113101155A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110345259.XA CN113101155A (en) 2021-03-31 2021-03-31 Traffic light intersection blind guiding method and blind guiding device based on machine vision

Publications (1)

Publication Number Publication Date
CN113101155A true CN113101155A (en) 2021-07-13

Family

ID=76712921

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110345259.XA Pending CN113101155A (en) 2021-03-31 2021-03-31 Traffic light intersection blind guiding method and blind guiding device based on machine vision

Country Status (1)

Country Link
CN (1) CN113101155A (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103236156A (en) * 2013-05-06 2013-08-07 北京美尔斯通科技发展股份有限公司 Blind guide system based on traffic signal indicating lamp
CN203825313U (en) * 2013-12-16 2014-09-10 智博锐视(北京)科技有限公司 Blind navigation glasses
CN105787524A (en) * 2014-12-26 2016-07-20 中国科学院沈阳自动化研究所 License plate identification method based on OpenCV and license plate identification system based on OpenCV
CN105809095A (en) * 2014-12-31 2016-07-27 博世汽车部件(苏州)有限公司 Method for determining traffic crossing traffic state
CN106821694A (en) * 2017-01-18 2017-06-13 西南大学 A kind of mobile blind guiding system based on smart mobile phone
CN108108761A (en) * 2017-12-21 2018-06-01 西北工业大学 A kind of rapid transit signal lamp detection method based on depth characteristic study
CN108309708A (en) * 2018-01-23 2018-07-24 李思霈 Blind-man crutch
CN109481248A (en) * 2018-12-26 2019-03-19 浙江师范大学 A kind of smart guide glasses
CN110680686A (en) * 2019-11-06 2020-01-14 青岛港湾职业技术学院 Intelligent voice blind guiding system based on AI open platform and use method
US10600318B1 (en) * 2019-03-23 2020-03-24 Min-Yueh Chiang Blind guiding equipment in pedestrian crosswalk
CN110897840A (en) * 2019-12-02 2020-03-24 姜国宁 Intelligent navigation method and device for blind people going out
CN111027475A (en) * 2019-12-09 2020-04-17 南京富士通南大软件技术有限公司 Real-time traffic signal lamp identification method based on vision
CN212163426U (en) * 2020-04-29 2020-12-15 电子科技大学成都学院 Intelligent anti-collision device based on multiple sensing technologies
CN112263447A (en) * 2020-07-09 2021-01-26 长江师范学院 Intelligent voice glasses for blind people, control method, storage medium and program

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
YU XINXING: "Research on Bank Card Number Recognition Algorithm Based on Digital Image Processing", Computer and Information Technology *
ZHANG AOXIANG: "Design and Research of License Plate Recognition", Computer Knowledge and Technology *
WANG YANJUN: "Research on License Plate Image Recognition", Computer Knowledge and Technology *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113850232A (en) * 2021-11-09 2021-12-28 安徽农业大学 Virtual blind road monitoring system
CN113850232B (en) * 2021-11-09 2024-06-11 安徽农业大学 Virtual blind road monitoring system
WO2023197913A1 (en) * 2022-04-13 2023-10-19 华为技术有限公司 Image processing method and related device
CN115240404A (en) * 2022-09-19 2022-10-25 腾讯科技(深圳)有限公司 Vibration encoding method, vibration processing method, apparatus, device, and medium

Similar Documents

Publication Publication Date Title
CN113101155A (en) Traffic light intersection blind guiding method and blind guiding device based on machine vision
CN110069986B (en) Traffic signal lamp identification method and system based on hybrid model
CN109460699B (en) Driver safety belt wearing identification method based on deep learning
CN110660254B (en) Traffic signal lamp detection and intelligent driving method and device, vehicle and electronic equipment
CN107506760B (en) Traffic signal detection method and system based on GPS positioning and visual image processing
Yuan et al. Robust lane detection for complicated road environment based on normal map
CN109190523B (en) Vehicle detection tracking early warning method based on vision
CN103034836B (en) Road sign detection method and road sign checkout equipment
CN111563412B (en) Rapid lane line detection method based on parameter space voting and Bessel fitting
WO2021203717A1 (en) Method for recognizing roadside parking behavior in complex scenario on basis of video frames
CN109299674B (en) Tunnel illegal lane change detection method based on car lamp
CN104408424B (en) A kind of multi signal lamp recognition methods based on image procossing
CN102902957A (en) Video-stream-based automatic license plate recognition method
CN111178272B (en) Method, device and equipment for identifying driver behavior
CN105809138A (en) Road warning mark detection and recognition method based on block recognition
CN107316486A (en) Pilotless automobile visual identifying system based on dual camera
CN103208185A (en) Method and system for nighttime vehicle detection on basis of vehicle light identification
CN103971128A (en) Traffic sign recognition method for driverless car
CN108009548A (en) A kind of Intelligent road sign recognition methods and system
CN102298693A (en) Expressway bend detection method based on computer vision
CN110334625A (en) A kind of parking stall visual identifying system and its recognition methods towards automatic parking
CN110969647B (en) Method for integrating identification tracking and car lamp detection of vehicle
CN112669615B (en) Parking space detection method and system based on camera
CN107016362A (en) Vehicle based on vehicle front windshield sticking sign recognition methods and system again
CN114419603A (en) Automatic driving vehicle control method and system and automatic driving vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210713