CN108761843B - Blind-assistance glasses for water surface and puddle detection - Google Patents

Blind-assistance glasses for water surface and puddle detection

Info

Publication number
CN108761843B
CN108761843B (application CN201810532878.8A)
Authority
CN
China
Prior art keywords
image
color
polarization
layer
color image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810532878.8A
Other languages
Chinese (zh)
Other versions
CN108761843A (en)
Inventor
杨恺伦
程瑞琦
汪凯巍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Vision Krypton Technology Co Ltd
Original Assignee
Hangzhou Vision Krypton Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Vision Krypton Technology Co Ltd filed Critical Hangzhou Vision Krypton Technology Co Ltd
Priority to CN201810532878.8A priority Critical patent/CN108761843B/en
Publication of CN108761843A publication Critical patent/CN108761843A/en
Application granted granted Critical
Publication of CN108761843B publication Critical patent/CN108761843B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G02 OPTICS
    • G02C SPECTACLES; SUNGLASSES OR GOGGLES INSOFAR AS THEY HAVE THE SAME FEATURES AS SPECTACLES; CONTACT LENSES
    • G02C11/00 Non-optical adjuncts; Attachment thereof
    • G02C11/10 Electronic devices other than hearing aids
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H3/00 Appliances for aiding patients or disabled persons to walk about
    • A61H3/06 Walking aids for blind persons
    • A61H3/061 Walking aids for blind persons with electronic detecting or guiding means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes

Abstract

The invention discloses blind-assistance glasses for water surface and puddle detection. Images are acquired with two color cameras fitted with two linear polarizers and processed by a compact processor, which outputs the water regions in the image. The method detects large water surfaces and small road puddles simultaneously, offers high consistency and high real-time performance, requires no ad hoc assumptions, and thus meets the needs of visually impaired people who must avoid water surfaces and puddles while traveling.

Description

Blind-assistance glasses for water surface and puddle detection
Technical field
The invention belongs to the fields of polarization imaging, stereo vision, pattern recognition, image processing, and computer vision, and relates to blind-assistance glasses for water surface and puddle detection.
Background technique
Visual information is the most important source with which humans perceive their surroundings; roughly 80% of the information humans acquire enters through the visual system. According to World Health Organization statistics, there are about 253 million visually impaired people worldwide. Having lost normal vision, they find it very difficult to perceive color and shape. Today, many of them rely on a white cane or a guide dog in daily life. A white cane cannot resolve every difficulty encountered while traveling. A guide dog can steer a visually impaired person around dangers on the road, but because training one is very costly, guide dogs cannot serve all visually impaired people. Conventional tools such as canes and guide dogs therefore provide insufficient assistance for travel. Since the emergence of electronic travel aid (ETA) devices, they have been regarded as an effective way to assist visually impaired travelers in varied situations. To help users find their way, many assistive systems deploy depth cameras to detect traversable paths and obstacles, and many others implement stair detection, pedestrian detection, or vehicle detection. No method, however, helps blind users avoid the dangerous regions formed by water surfaces and puddles during travel. A method is therefore urgently needed that unifies water surface and puddle detection in a single framework, completes both detections simultaneously, and runs in real time with fast output.
Summary of the invention
The object of the invention is to address the deficiencies of the prior art by providing blind-assistance glasses for water surface and puddle detection.
This object is achieved through the following technical solution: blind-assistance glasses for water surface and puddle detection comprise a spectacle body; a compact processor and a battery module glued inside one temple; two cameras fixed above the frame; and an earphone module placed at the tail of a temple. The two color cameras are mounted at the same height with parallel optical axes; each camera carries a polarizer on its front end, and the polarization directions of the two polarizers are perpendicular to each other. The compact processor stores a trained neural network. The cameras and a bone-conduction earphone are each connected to the compact processor, and the battery module powers the compact processor. The cameras capture color images of the surrounding scene in real time; the compact processor runs the neural network model on a color image Color to obtain a semantic segmentation image Semantics, extracts from it the segmented water regions and the traversable road area, and then detects puddles from the polarization difference values. The compact processor converts the detection result into a voice signal and transmits it to the earphone module to inform the user.
The neural network is trained as follows:

A training dataset is obtained from a large-scale semantic segmentation dataset. It contains m color images Color and m corresponding label images Label, where m ≥ 10000. The correspondence is as follows: the pixel units in a label image Label correspond one-to-one with the pixel units in the color image Color, and each pixel unit in Label records the semantic label of the corresponding pixel unit in Color. A pixel unit is the set of all pixels belonging to the same object; objects of the same category share one semantic label.

With the color images Color as input and the label images Label as output, the semantic segmentation model is trained to obtain the pre-trained neural network model. The semantic segmentation model is neural-network-based; its layers are listed in the following table.
Layer Type Output feature-map channels Output feature-map resolution
1 Downsampling layer 16 320×240
2 Downsampling layer 64 160×120
3-7 One-dimensional factorized bottleneck layer 64 160×120
8 Downsampling layer 128 80×60
9 One-dimensional factorized bottleneck layer (dilation rate 2) 128 80×60
10 One-dimensional factorized bottleneck layer (dilation rate 4) 128 80×60
11 One-dimensional factorized bottleneck layer (dilation rate 8) 128 80×60
12 One-dimensional factorized bottleneck layer (dilation rate 16) 128 80×60
13 One-dimensional factorized bottleneck layer (dilation rate 2) 128 80×60
14 One-dimensional factorized bottleneck layer (dilation rate 4) 128 80×60
15 One-dimensional factorized bottleneck layer (dilation rate 8) 128 80×60
16 One-dimensional factorized bottleneck layer (dilation rate 2) 128 80×60
17a Original feature map output by layer 16 128 80×60
17b Pooling and convolution of layer 16's feature map 32 80×60
17c Pooling and convolution of layer 16's feature map 32 40×30
17d Pooling and convolution of layer 16's feature map 32 20×15
17e Pooling and convolution of layer 16's feature map 32 10×8
17f Upsampling and concatenation of layers 17a-17e 256 80×60
18 Convolutional layer Number of terrain and object categories 80×60
19 Upsampling layer Number of terrain and object categories 640×480
After a color image Color to be detected is fed into the neural network model, the output feature map of layer 19 is the per-class probability map; applying the argmax function over the class axis yields the semantic segmentation image Semantics.
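As a minimal illustration of this argmax step (a NumPy sketch, not the on-device implementation; the function name and class labels are hypothetical), the layer-19 probability maps collapse into the segmentation image like so:

```python
import numpy as np

def semantics_from_probs(prob_maps):
    """Collapse per-class probability maps into a segmentation image.

    prob_maps: (num_classes, H, W) array -- the layer-19 output, one
    probability map per terrain/object category.
    Returns an (H, W) array of class indices (argmax over classes).
    """
    return np.argmax(prob_maps, axis=0)

# Toy example with 3 hypothetical classes on a 2x2 image.
probs = np.array([
    [[0.7, 0.1], [0.2, 0.1]],  # class 0, e.g. road
    [[0.2, 0.8], [0.3, 0.1]],  # class 1, e.g. water
    [[0.1, 0.1], [0.5, 0.8]],  # class 2, e.g. other
])
labels = semantics_from_probs(probs)  # -> [[0, 1], [2, 2]]
```

In the real system the probability maps are 640×480, matching the layer-19 resolution in the table.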
Further, the detection process is as follows:
(1) One color image is captured by each of the two polarizer-equipped color cameras.
(2) One of the color images is fed into the pre-trained neural network model to obtain the semantic segmentation image Semantics.
(3) The semantic segmentation image Semantics is processed to extract the segmented water regions and the traversable road area. For every pixel (u, v) in the traversable road area, the polarization difference value polarization is computed in the polarization difference image Polarization; if polarization exceeds the threshold PolarizationThreshold, the point is a puddle.
The polarization difference value polarization is computed as follows:
(3.1) Binocular stereo matching is performed on the two color images to obtain a disparity image Disparity.
(3.2) The corresponding point (u′, v) of pixel (u, v) is found in the other color image, satisfying u − u′ = disparity, where disparity is the disparity value of pixel (u, v) in the disparity image Disparity.
(3.3) The brightness values of pixels (u, v) and (u′, v) are computed as VL(u, v) and VR(u′, v); the polarization difference value polarization is |VL(u, v) − VR(u′, v)|.
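Steps (3) through (3.3) can be sketched as follows. This is an illustrative NumPy implementation under the stated geometry (the match of (u, v) is (u − disparity, v)); the function name, array layout, and threshold value are assumptions, not the patent's code:

```python
import numpy as np

def detect_puddles(v_left, v_right, disparity, road_mask, threshold):
    """Flag puddle pixels on the traversable road area.

    v_left, v_right : (H, W) brightness images from the two cameras,
        whose front polarizers are oriented perpendicular to each other.
    disparity : (H, W) integer disparity map for the left image.
    road_mask : (H, W) bool mask of the segmented traversable road.
    threshold : the PolarizationThreshold of step (3).
    """
    H, W = v_left.shape
    puddle = np.zeros((H, W), dtype=bool)
    for v in range(H):
        for u in range(W):
            if not road_mask[v, u]:
                continue
            u2 = u - int(disparity[v, u])  # corresponding column (u', v)
            if 0 <= u2 < W:
                pol = abs(float(v_left[v, u]) - float(v_right[v, u2]))
                puddle[v, u] = pol > threshold
    return puddle

# Toy 1x3 example: only pixels whose cross-polarized brightness
# difference exceeds the (assumed) threshold of 40 are flagged.
puddles = detect_puddles(
    np.array([[100., 150., 200.]]),   # left brightness VL
    np.array([[100., 100., 100.]]),   # right brightness VR
    np.array([[0, 1, 2]]),            # disparity map
    np.ones((1, 3), dtype=bool),      # whole row is road
    40.0)
# puddles -> [[False, True, True]]
```

The physical idea: specular reflection off water is strongly polarized, so the two perpendicular polarizers see very different brightness over water, while diffuse road surfaces look similar in both views.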
Further, the one-dimensional factorized bottleneck layer alternates convolutions with a 3 × 1 kernel and a 1 × 3 kernel, uses the rectified linear unit (ReLU) as the activation function, and finally adds a residual connection, forming a single integral one-dimensional factorized bottleneck layer.
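The factorization can be illustrated in single-channel NumPy; the real layer has many channels with learned per-channel kernels (and typically batch normalization), so this sketch only shows the 3×1/1×3 alternation, the ReLU activations, and the residual coupling:

```python
import numpy as np

def conv_3x1(x, k):
    """'Same' convolution of a 2-D map with a vertical 3-tap kernel."""
    p = np.pad(x, ((1, 1), (0, 0)))
    return k[0] * p[:-2, :] + k[1] * p[1:-1, :] + k[2] * p[2:, :]

def conv_1x3(x, k):
    """'Same' convolution of a 2-D map with a horizontal 3-tap kernel."""
    p = np.pad(x, ((0, 0), (1, 1)))
    return k[0] * p[:, :-2] + k[1] * p[:, 1:-1] + k[2] * p[:, 2:]

def factorized_bottleneck(x, kv, kh):
    """One residual unit built from a 3x1 then a 1x3 convolution.

    A full 3x3 kernel costs 9 weights per position; the 3x1 + 1x3 pair
    costs 6, which is the saving the factorized layer exploits.
    """
    y = np.maximum(conv_3x1(x, kv), 0.0)  # ReLU after vertical pass
    y = np.maximum(conv_1x3(y, kh), 0.0)  # ReLU after horizontal pass
    return x + y                          # residual connection
```

With identity kernels ([0, 1, 0]) and a non-negative input, the unit reduces to x + x, which makes the residual path easy to verify.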
Further, the convolutions in the one-dimensional factorized bottleneck layers 9 to 16 are all dilated convolutions, with dilation rates 2, 4, 8, 16, 2, 4, 8, and 2 respectively.
Further, the downsampling layer concatenates the feature map output by a 3 × 3 convolution kernel with the feature map produced by max pooling, and outputs the downsampled feature map.
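The downsampling block can be sketched as follows. The split of output channels between the convolution branch and the pooling branch (e.g. layer 1's 16 channels being conv channels plus pooled input channels) is not stated in the patent, so the channel arithmetic below is an assumption of this sketch:

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling of a (C, H, W) map; H and W assumed even."""
    C, H, W = x.shape
    return x.reshape(C, H // 2, 2, W // 2, 2).max(axis=(2, 4))

def conv3x3_stride2(x, w):
    """Naive 3x3 stride-2 convolution with padding 1.
    x: (Cin, H, W), w: (Cout, Cin, 3, 3) -> (Cout, H//2, W//2)."""
    Cin, H, W = x.shape
    Cout = w.shape[0]
    p = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((Cout, H // 2, W // 2))
    for i in range(H // 2):
        for j in range(W // 2):
            patch = p[:, 2 * i:2 * i + 3, 2 * j:2 * j + 3]
            out[:, i, j] = np.tensordot(w, patch, axes=3)
    return out

def downsampler(x, w):
    """Concatenate strided-conv features with the max-pooled input
    along the channel axis, halving the spatial resolution."""
    return np.concatenate([conv3x3_stride2(x, w), max_pool_2x2(x)], axis=0)
```

On a (1, 4, 4) input with a 2-channel identity-center kernel, the output is (2 + 1, 2, 2): two conv channels plus one pooled channel, at half resolution in each dimension.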
Further, the upsampling layer is implemented with bilinear interpolation.
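A bilinear upsampling layer can be sketched as below. This NumPy illustration uses align-corners sampling; the patent does not specify corner alignment, so that choice is an assumption:

```python
import numpy as np

def bilinear_upsample(x, out_h, out_w):
    """Bilinear interpolation of a 2-D map to (out_h, out_w)."""
    in_h, in_w = x.shape
    # Sample positions in the source grid (align-corners style).
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]   # vertical interpolation weights
    wx = (xs - x0)[None, :]   # horizontal interpolation weights
    top = x[np.ix_(y0, x0)] * (1 - wx) + x[np.ix_(y0, x1)] * wx
    bot = x[np.ix_(y1, x0)] * (1 - wx) + x[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```

Because bilinear interpolation reproduces linear ramps exactly, upsampling [[0, 2], [4, 6]] to 3×3 gives [[0, 1, 2], [2, 3, 4], [4, 5, 6]], which makes the sketch easy to check. Layer 19 applies the same operation per channel, from 80×60 up to 640×480.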
The beneficial effects of the invention are mainly as follows:
High consistency. Because the invention combines the polarization difference method with neural-network-based semantic segmentation, it obtains the large water regions and the small puddle regions in an image simultaneously.
High real-time performance. Because the semantic segmentation model extracts feature maps with a stack of one-dimensional factorized bottleneck layers, it minimizes the number of residual layers needed to reach a given accuracy and therefore supports highly real-time segmentation and detection. The polarization difference detection method needs only binocular image matching and the polarization difference computation, so it likewise supports highly real-time output.
No ad hoc assumptions. Because neural-network-based semantic segmentation extracts features directly from raw data, detection requires no ad hoc assumptions.
Good environmental adaptability. Because the invention detects large water surfaces and small puddles simultaneously, it supports travel in different weather such as sunny and rainy days, unlike existing blind-assistance tools.
Detailed description of the invention
Fig. 1 is a structural schematic of the assistive glasses;
Fig. 2 is a module connection diagram;
Fig. 3-Fig. 7 are the images of case 1, wherein
the left of Fig. 3 is the left color image and the right is the right color image;
Fig. 4 is the semantic segmentation image;
Fig. 5 is the disparity image;
Fig. 6 is the polarization difference image;
Fig. 7 is the water surface and puddle detection result.
Fig. 8-Fig. 12 are the images of case 2, wherein
the left of Fig. 8 is the left color image and the right is the right color image;
Fig. 9 is the semantic segmentation image;
Fig. 10 is the disparity image;
Fig. 11 is the polarization difference image;
Fig. 12 is the water surface and puddle detection result.
Fig. 13 is a schematic of the one-dimensional factorized bottleneck layer;
Fig. 14 is a schematic of the downsampling layer.
In the figures: camera 1, compact processor 2, battery module 3, earphone module 4.
Specific embodiment
The invention relates to blind-assistance glasses for water surface and puddle detection. The method is based on a neural network model embedded in the compact processor, trained as follows:

A training dataset is obtained from a large-scale semantic segmentation dataset. It contains m color images Color and m corresponding label images Label, where m ≥ 10000. The correspondence is as follows: the pixel units in a label image Label correspond one-to-one with the pixel units in the color image Color, and each pixel unit in Label records the semantic label of the corresponding pixel unit in Color. A pixel unit is the set of all pixels belonging to the same object; objects of the same category share one semantic label. The m color images Color include pixel units of water surfaces and road surfaces.
The large-scale semantic segmentation dataset may be:
the ADE20K dataset: http://groups.csail.mit.edu/vision/datasets/ADE20K/;
or the Cityscapes dataset: https://www.cityscapes-dataset.com/;
or the Pascal-Context dataset: https://www.cs.stanford.edu/~roozbeh/pascal-context/;
or the COCO-Stuff dataset: https://github.com/nightrome/cocostuff;
or the Mapillary Vistas dataset: https://www.mapillary.com/dataset/vistas.
With the color images Color as input and the label images Label as output, the semantic segmentation model is trained to obtain the pre-trained neural network model. The semantic segmentation model is neural-network-based; its layers are listed in the following table.
Layer Type Output feature-map channels Output feature-map resolution
1 Downsampling layer 16 320×240
2 Downsampling layer 64 160×120
3-7 One-dimensional factorized bottleneck layer 64 160×120
8 Downsampling layer 128 80×60
9 One-dimensional factorized bottleneck layer (dilation rate 2) 128 80×60
10 One-dimensional factorized bottleneck layer (dilation rate 4) 128 80×60
11 One-dimensional factorized bottleneck layer (dilation rate 8) 128 80×60
12 One-dimensional factorized bottleneck layer (dilation rate 16) 128 80×60
13 One-dimensional factorized bottleneck layer (dilation rate 2) 128 80×60
14 One-dimensional factorized bottleneck layer (dilation rate 4) 128 80×60
15 One-dimensional factorized bottleneck layer (dilation rate 8) 128 80×60
16 One-dimensional factorized bottleneck layer (dilation rate 2) 128 80×60
17a Original feature map output by layer 16 128 80×60
17b Pooling and convolution of layer 16's feature map 32 80×60
17c Pooling and convolution of layer 16's feature map 32 40×30
17d Pooling and convolution of layer 16's feature map 32 20×15
17e Pooling and convolution of layer 16's feature map 32 10×8
17f Upsampling and concatenation of layers 17a-17e 256 80×60
18 Convolutional layer Number of terrain and object categories 80×60
19 Upsampling layer Number of terrain and object categories 640×480
The one-dimensional factorized bottleneck layer, shown in Fig. 13, alternates convolutions with a 3 × 1 kernel and a 1 × 3 kernel, uses the rectified linear unit (ReLU) as the activation function, and finally adds a residual connection, forming a single integral one-dimensional factorized bottleneck layer. Because the invention extracts feature maps with a stack of these layers, it minimizes the number of residual layers needed to reach a given accuracy and therefore supports highly real-time semantic segmentation and detection.
The convolutions in the one-dimensional factorized bottleneck layers 9 to 16 are all dilated convolutions, with dilation rates 2, 4, 8, 16, 2, 4, 8, and 2 respectively.
The downsampling layer, shown in Fig. 14, concatenates the feature map output by a 3 × 3 convolution kernel with the feature map produced by max pooling, and outputs the downsampled feature map.
The upsampling layer is implemented with bilinear interpolation.
After a color image Color to be detected is fed into the neural network model, the output feature map of layer 19 is the per-class probability map; applying the argmax function over the class axis yields the semantic segmentation image Semantics.
The invention is further described below, taking case 1 as an example.
(1) One color image is captured by each of the two polarizer-equipped color cameras, as shown in Fig. 3. The two color cameras are mounted at the same height with parallel optical axes, and the polarization directions of the two polarizers are perpendicular to each other.
(2) The left color image is fed into the pre-trained neural network model to obtain the semantic segmentation image Semantics, as shown in Fig. 4.
(3) The semantic segmentation image Semantics is processed to extract the segmented water regions and the traversable road area. For every pixel (u, v) in the traversable road area, the polarization difference value polarization is computed in the polarization difference image Polarization; if polarization exceeds the threshold PolarizationThreshold, the point is a puddle, as shown in Fig. 7.
The polarization difference value polarization is computed as follows:
(3.1) Binocular stereo matching is performed on the two color images to obtain a disparity image Disparity, as shown in Fig. 5.
(3.2) The corresponding point (u′, v) of pixel (u, v) is found in the other color image, satisfying u − u′ = disparity, where disparity is the disparity value of pixel (u, v) in the disparity image Disparity.
(3.3) The brightness values of pixels (u, v) and (u′, v) are computed as VL(u, v) and VR(u′, v); the polarization difference value polarization is |VL(u, v) − VR(u′, v)|. The polarization difference values form the difference image shown in Fig. 6.
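Step (3.1)'s disparity image can be illustrated with a minimal sum-of-absolute-differences (SAD) block matcher. The patent does not specify its matching algorithm, so this NumPy sketch is only an assumed stand-in; real-time systems use optimized matchers such as semi-global matching:

```python
import numpy as np

def sad_block_match(left, right, max_disp, radius=1):
    """Minimal SAD stereo matcher for rectified grayscale images.

    For each left-image pixel, returns the disparity d minimizing the
    SAD between a (2*radius+1)^2 patch in `left` and the patch shifted
    d columns leftward in `right` (so the match point is (u - d, v)).
    """
    H, W = left.shape
    disp = np.zeros((H, W), dtype=int)
    for v in range(radius, H - radius):
        for u in range(radius, W - radius):
            best, best_d = np.inf, 0
            for d in range(min(max_disp + 1, u - radius + 1)):
                lp = left[v - radius:v + radius + 1, u - radius:u + radius + 1]
                rp = right[v - radius:v + radius + 1, u - d - radius:u - d + radius + 1]
                sad = np.abs(lp.astype(float) - rp.astype(float)).sum()
                if sad < best:
                    best, best_d = sad, d
            disp[v, u] = best_d
    return disp
```

On a synthetic pair where the left image is the right image shifted one column, the matcher recovers a disparity of 1 on interior pixels, which is exactly the u − u′ = disparity relation used in step (3.2).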

Claims (6)

1. Blind-assistance glasses for water surface and puddle detection, characterized by comprising a spectacle body; a compact processor and a battery module glued inside one temple; two cameras fixed above the frame; and an earphone module placed at the tail of a temple; the two color cameras are mounted at the same height with parallel optical axes, each camera carries a polarizer on its front end, and the polarization directions of the two polarizers are perpendicular to each other; the compact processor stores a trained neural network; the cameras and a bone-conduction earphone are each connected to the compact processor, and the battery module is connected to the compact processor; the cameras capture color images of the surrounding scene in real time; the compact processor processes a color image Color with the neural network model to obtain a semantic segmentation image Semantics, extracts the segmented water regions and the traversable road area, and then detects puddles from the polarization difference values; the compact processor converts the detection result into a voice signal and transmits it to the earphone module to inform the user;
the neural network is trained as follows:
a training dataset is obtained from a large-scale semantic segmentation dataset, containing m color images Color and m corresponding label images Label, where m ≥ 10000; the correspondence is as follows: the pixel units in a label image Label correspond one-to-one with the pixel units in the color image Color, and each pixel unit in Label records the semantic label of the corresponding pixel unit in Color; a pixel unit is the set of all pixels belonging to the same object, and objects of the same category share one semantic label;
with the color images Color as input and the label images Label as output, the semantic segmentation model is trained to obtain the pre-trained neural network model; the semantic segmentation model is neural-network-based, and its layers are as listed in the table in the description;
after a color image Color to be detected is fed into the neural network model, the output feature map of layer 19 is the per-class probability map, and applying the argmax function yields the semantic segmentation image Semantics.
2. The blind-assistance glasses of claim 1, characterized in that the detection process is as follows:
(1) one color image is captured by each of the two polarizer-equipped color cameras;
(2) one of the color images is fed into the pre-trained neural network model to obtain the semantic segmentation image Semantics;
(3) the semantic segmentation image Semantics is processed to extract the segmented water regions and the traversable road area; for every pixel (u, v) in the traversable road area, the polarization difference value polarization is computed in the polarization difference image Polarization; if polarization exceeds the threshold PolarizationThreshold, the point is a puddle;
the polarization difference value polarization is computed as follows:
(3.1) binocular stereo matching is performed on the two color images to obtain a disparity image Disparity;
(3.2) the corresponding point (u′, v) of pixel (u, v) is found in the other color image, satisfying u − u′ = disparity, where disparity is the disparity value of pixel (u, v) in the disparity image Disparity;
(3.3) the brightness values of pixels (u, v) and (u′, v) are computed as VL(u, v) and VR(u′, v); the polarization difference value polarization is |VL(u, v) − VR(u′, v)|.
3. The blind-assistance glasses of claim 1, characterized in that the one-dimensional factorized bottleneck layer alternates convolutions with a 3 × 1 kernel and a 1 × 3 kernel, uses the rectified linear unit (ReLU) as the activation function, and finally adds a residual connection, forming a single integral one-dimensional factorized bottleneck layer.
4. The blind-assistance glasses of claim 1, characterized in that the convolutions in the one-dimensional factorized bottleneck layers 9 to 16 are all dilated convolutions, with dilation rates 2, 4, 8, 16, 2, 4, 8, and 2 respectively.
5. The blind-assistance glasses of claim 1, characterized in that the downsampling layer concatenates the feature map output by a 3 × 3 convolution kernel with the feature map produced by max pooling, and outputs the downsampled feature map.
6. The blind-assistance glasses of claim 1, characterized in that the upsampling layer is implemented with bilinear interpolation.
CN201810532878.8A 2018-05-29 2018-05-29 Blind-assistance glasses for water surface and puddle detection Active CN108761843B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810532878.8A CN108761843B (en) 2018-05-29 2018-05-29 Blind-assistance glasses for water surface and puddle detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810532878.8A CN108761843B (en) 2018-05-29 2018-05-29 Blind-assistance glasses for water surface and puddle detection

Publications (2)

Publication Number Publication Date
CN108761843A CN108761843A (en) 2018-11-06
CN108761843B (grant) 2019-11-22

Family

ID=64003719

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810532878.8A Active CN108761843B (en) 2018-05-29 2018-05-29 Blind-assistance glasses for water surface and puddle detection

Country Status (1)

Country Link
CN (1) CN108761843B (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050208457A1 (en) * 2004-01-05 2005-09-22 Wolfgang Fink Digital object recognition audio-assistant for the visually impaired
CN106389078A (en) * 2016-11-24 2017-02-15 贵州大学 Intelligent blind-guiding glasses system and blind-guiding method thereof
CN107444665B (en) * 2017-07-24 2020-06-09 长春草莓科技有限公司 Unmanned aerial vehicle autonomous landing method
CN107424159B (en) * 2017-07-28 2020-02-07 西安电子科技大学 Image semantic segmentation method based on super-pixel edge and full convolution network
CN107817614B (en) * 2017-08-31 2019-06-07 杭州视氪科技有限公司 Blind-assistance glasses for avoiding water surfaces and obstacles
CN108564587A (en) * 2018-03-07 2018-09-21 浙江大学 Large-scale remote sensing image semantic segmentation method based on fully convolutional neural networks

Also Published As

Publication number Publication date
CN108761843A (en) 2018-11-06

Similar Documents

Publication Publication Date Title
US20210125338A1 (en) Method and apparatus for computer vision
CN104036479B (en) Multi-focus image fusion method based on non-negative matrix factorization
Yang et al. Unifying terrain awareness through real-time semantic segmentation
CN108960287B Blind-assistance glasses for terrain and object detection
CN107397658B (en) Multi-scale full-convolution network and visual blind guiding method and device
Malūkas et al. Real time path finding for assisted living using deep learning
AU2021103300A4 (en) Unsupervised Monocular Depth Estimation Method Based On Multi- Scale Unification
Kumar et al. Artificial Intelligence Solutions for the Visually Impaired: A Review
Yang et al. Predicting polarization beyond semantics for wearable robotics
US20210124990A1 (en) Method and apparatus for computer vision
Wang et al. An environmental perception and navigational assistance system for visually impaired persons based on semantic stixels and sound interaction
Kaur et al. A scene perception system for visually impaired based on object detection and classification using multi-modal DCNN
Kaur et al. Scene perception system for visually impaired based on object detection and classification using multimodal deep convolutional neural network
CN108805882A Water surface and puddle detection method
Bhatt et al. A Real-Time Traffic Sign Detection and Recognition System on Hybrid Dataset using CNN
CN108761843B (en) Blind-assistance glasses for water surface and puddle detection
CN116258756B (en) Self-supervision monocular depth estimation method and system
Akbari et al. A vision-based zebra crossing detection method for people with visual impairments
Zhang et al. Perception framework through real-time semantic segmentation and scene recognition on a wearable system for the visually impaired
Zhou et al. Underwater occlusion object recognition with fusion of significant environmental features
Siddiqui et al. Multi-modal depth estimation using convolutional neural networks
Yang et al. Semantic perception of curbs beyond traversability for real-world navigation assistance systems
Zhang et al. Infrastructure 3D Target detection based on multi-mode fusion for intelligent and connected vehicles
CN113609993A (en) Attitude estimation method, device and equipment and computer readable storage medium
CN108764146A Terrain and object detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant