CN102090947B - Blind visual compensation method and system for implementing same - Google Patents


Info

Publication number
CN102090947B
CN102090947B (application CN201110033786A)
Authority
CN
China
Prior art keywords
blind
template
sign
image
periphery
Prior art date
Legal status
Active
Application number
CN 201110033786
Other languages
Chinese (zh)
Other versions
CN102090947A (en)
Inventor
朱珍民
唐熊
陈援非
何哲
叶剑
Current Assignee
Institute of Computing Technology of CAS
Original Assignee
Institute of Computing Technology of CAS
Priority date
Filing date
Publication date
Application filed by Institute of Computing Technology of CAS
Priority to CN201110033786
Publication of CN102090947A
Application granted
Publication of CN102090947B
Status: Active
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a visual compensation method for the blind and a system implementing it. The system comprises a template blind-use sign resource unit, a mobile grayscale image capture unit, a blind-use sign locating unit, a blind-use sign recognition unit, and a speech synthesis output unit. The template blind-use sign resource unit pre-stores feature data of different blind-use signs, each representing different information; the mobile grayscale image capture unit captures sign images with a handheld mobile device; the blind-use sign locating unit locates the blind-use signs in the captured images; the blind-use sign recognition unit recognizes the located signs; and the speech synthesis output unit reports, in voice form, the information carried by the recognized signs. The system maintains good real-time performance even on terminal platforms with weak computing capability.

Description

Visual compensation method for the blind and system for implementing the method
Technical field
The present invention relates to a visual compensation method for the blind and a system implementing the method, and in particular to a visual compensation system and method for the blind based on rapid image recognition, which provides blind users with anytime-anywhere scene recognition, analysis, and guidance services. It proposes an implementation in the scene-perception direction of the barrier-free information research field for people with disabilities.
Background technology
Image recognition technology can help blind people perceive everyday scenes, and can serve as an assistive guide especially in mobile computing environments. Traditional image recognition usually relies on data processing such as equalization, noise reduction, sharpening, and texture detection; the resulting computational load suits PC or DSP platforms. In mobile computing environments typified by mobile phones, applying traditional image recognition to assistive guidance is difficult to realize.
Patent application 200910053318.5, titled "DSP guideboard recognition blind-guide device and method thereof", discloses a system and method implementing a guidance function. That method preprocesses camera images with histogram equalization, binarization, point-noise removal, gradient sharpening, texture detection, and image rotation; locates the guideboard image region bearing an arrow feature; segments the guideboard region into characters; matches each segmented character against feature templates; and announces the guideboard characters by voice.
The above patented method only recognizes direction guideboards on roads, so its recognition target is singular; it uses digital signal processing algorithms such as histogram equalization, binarization, point-noise removal, gradient sharpening, texture detection, and image rotation, and must template-match every character, so the computational load is large and suits DSP or PC platforms.
Summary of the invention
The problem to be solved by the present invention is to provide a blind guidance system and method based on simple blind-use sign localization and recognition algorithms, which can recognize life scenes of multiple classes, is extensible, and still guarantees good real-time performance on terminal platforms with weak computing capability.
To achieve the object of the invention, a visual compensation system for the blind is provided, comprising:
a template blind-use sign resource unit, for pre-storing feature data of different template blind-use signs, where the signs correspond to the different objects used by a blind person in recognizing the environment;
a grayscale image capture unit, for the blind person to photograph the surrounding environment with a handheld mobile device and convert the captured images into grayscale images;
a peripheral blind-use sign locating unit, for locating all peripheral blind-use signs in the grayscale image, together with the relative direction and distance between each sign and the blind person;
a peripheral blind-use sign recognition unit, for comparing the located peripheral blind-use signs in the grayscale image against the template blind-use signs, so as to confirm which template signs with corresponding object information are present in the blind person's environment;
a speech synthesis output unit, for reporting to the blind person, in voice form, the object information provided by the locating unit and the recognition unit.
The template blind-use sign resource unit comprises:
multi-class template blind-use signs, covering a numeric class, a traffic sign class, a daily-life class, a safety instruction class, a sports facility class, and a public facility class;
a template blind-use sign marking tool, for generating feature data corresponding to the multi-class template blind-use signs;
a template blind-use sign feature database, for storing the feature data obtained by the marking tool, including eigenvalue, energy value, sign name, and class.
The mobile grayscale image capture unit comprises:
a raw image data acquisition module, for photographing the blind person's surrounding environment;
a grayscale image processing module, for converting the photographed environment image into a grayscale image.
The feature recognition algorithm of the peripheral blind-use sign recognition unit proceeds as follows: crop the bounding box of the peripheral blind-use sign region; shrink the cropped sign image by an interpolation algorithm and normalize it to 64x64 pixels; compute the energy of the normalized image after mean removal; compute the cross-correlation coefficient between the 64x64 image and each template in the feature database; and select the template blind-use sign with the maximum cross-correlation value as the recognition result.
The speech synthesis output unit comprises:
text to be synthesized, which corresponds to the feature information of a template blind-use sign;
a speech synthesis engine, which synthesizes the text into voice information;
a voice output device, which delivers the voice information to the blind person's ear as sound waves.
The present invention also provides a visual compensation method for the blind, comprising:
Step 1: using the template blind-use sign resource unit, pre-store the feature data of different template blind-use signs, where the signs correspond to the different object information used by a blind person in recognizing the environment;
Step 2: the blind person photographs the surrounding environment with a handheld mobile device, and the captured images are converted into grayscale images;
Step 3: using the peripheral blind-use sign locating unit, locate the peripheral blind-use signs in the grayscale image and the relative direction and distance between each sign and the blind person;
Step 4: using the peripheral blind-use sign recognition unit, compare the located peripheral blind-use signs in the grayscale image against the template blind-use signs, so as to confirm which template signs with corresponding object information are present in the blind person's environment;
Step 5: using the speech synthesis output unit, report to the blind person, in voice form, the object information provided by the locating unit and the recognition unit.
Step 1 comprises:
Step 1.1: design multi-class template blind-use signs covering the numeric, traffic sign, daily-life, safety instruction, sports facility, and public facility classes;
Step 1.2: use the template blind-use sign marking tool to generate feature data corresponding to the multi-class signs;
Step 1.3: use the template blind-use sign feature database to store the feature data generated by the marking tool, including eigenvalue, energy value, sign name, and class.
Step 2 comprises:
Step 2.1: use the raw image data acquisition module to photograph the blind person's surrounding environment;
Step 2.2: use the grayscale image processing module to convert the photographed environment image into a grayscale image.
Step 3 comprises:
Step 301: begin a row scan from the middle row of the black-and-white image;
Step 302: check for effective frame points; if two effective frame points are found, go to step 303, otherwise go to step 304;
Step 303: starting from the column midway between the two points, column-scan to locate the peripheral blind-use sign;
Step 304: check whether the scan has reached the image boundary; if so, go to step 305, otherwise perform the next round of row scanning and return to step 301, until the image boundary is reached;
Step 305: begin a column scan from the middle column of the black-and-white image;
Step 306: check for effective frame points; if two effective frame points are found, go to step 307, otherwise go to step 308;
Step 307: starting from the row midway between the two points, row-scan to locate the peripheral blind-use sign;
Step 308: check whether the scan has reached the image boundary; if not, perform the next round of column scanning and return to step 305, until the image boundary is reached; otherwise go to step 309;
Step 309: output the peripheral blind-use sign localization results.
Step 4 comprises:
Step 4.1: crop the bounding box of the peripheral blind-use sign region;
Step 4.2: shrink the cropped sign image by an interpolation algorithm and normalize it to 64x64 pixels;
Step 4.3: compute the energy of the normalized image after mean removal;
Step 4.4: compute the cross-correlation coefficient between the 64x64 image and each template eigenvalue image in the feature database, and select the template blind-use sign with the maximum cross-correlation value as the recognition result.
Step 5 comprises:
Step 5.1: use the speech synthesis engine to synthesize the text to be synthesized into voice information;
Step 5.2: use the voice output device to deliver the voice information to the blind person's ear as sound waves.
Description of drawings
Fig. 1 is a system diagram of the present invention;
Fig. 2 is the overall processing flow chart of the present invention;
Figs. 3-5 show different types of template blind-use signs in the present invention;
Fig. 6 shows further types of template blind-use signs in the present invention;
Fig. 7 is the flow chart for processing blind-use sign pictures in the present invention;
Fig. 8 shows template blind-use signs at different angles in the present invention;
Fig. 9 is the grayscale image capture flow chart in the present invention;
Fig. 10 is the interaction diagram between the main program and the dynamic link library in the present invention;
Fig. 11 is the localization algorithm flow in the present invention;
Fig. 12 is the peripheral blind-use sign nine-grid diagram in the present invention;
Fig. 13 is the speech synthesis output flow chart in the present invention.
The specific embodiment
To make the purpose, technical scheme, and advantages of the present invention clearer, the visual compensation method for the blind and the system implementing it are further elaborated below in conjunction with the drawings and embodiments. It should be appreciated that the specific embodiments described herein only explain the present invention and do not limit it.
By function, the whole system and method can be divided into five parts. Fig. 1 is the system diagram of the present invention, and Fig. 2 is the overall processing flow chart:
Establishing the template blind-use sign feature database 100
The template blind-use sign resource unit 1 identifies scenes through blind-use signs, as shown in Figs. 3, 4, and 5. Different template blind-use signs represent different scenes: for example, a man symbol can represent a men's lavatory, an aircraft an airport, a book a library, a phone a phone booth, and so on. In actual use, a template blind-use sign must appear in the scene it represents. With the blind-use sign marking tool SignSample, features of more template signs can be obtained and added to the feature database, so that more signs can be recognized by the system, as shown by the icons in Fig. 6. The marking tool takes a template blind-use sign picture as input (101) and produces and saves 64x64 template feature data (102); the usage flow is shown in Fig. 7. After processing, each set of feature data takes the form of the data structure:
#include <wchar.h>  /* for wchar_t */

struct SsignSample {
    wchar_t *name;       /* template blind-use sign name */
    wchar_t *class;      /* template blind-use sign class */
    short image[64*64];  /* 64x64 grayscale sign image */
    double energy;       /* mean-removed energy of the sign image */
};
name is the template blind-use sign name, class is its class, the image array holds the template sign's grayscale image, and energy is the mean-removed energy of the sign image. Each template blind-use sign has 4 sets of data structures, corresponding to the image features of the up, down, left, and right directions, so a template sign rotated by 90, 180, or 270 degrees can still be recognized, as shown in Fig. 8.
Grayscale image capture 200
The grayscale image capture unit 2 performs this process with a main program 210 and a dynamic link library 220. The main program contains the image processing module, the algorithm module, and the voice output module; the dynamic link library contains the image acquisition and camera control modules. The grayscale image capture flow is shown in Fig. 9: first the camera is driven (201), then video is acquired (202), and finally the image data is processed (203) to obtain the image data stream. The dynamic link library provides five interfaces to the main program: create a data session, destroy a data session, start streaming on a data session, pause streaming on a data session, and register the main program's callback function. The interaction between the main program and the dynamic link library is shown in Fig. 10: first the interface is initialized (211), then a data session is created (212), the callback function is registered (213), the data session is activated (214), and the image processing algorithm performs recognition (215). The image processing and recognition module obtains the image/video stream through the callback from the dynamic link library 220. The session creation step 212 and activation step 214 use the create-session and start-streaming commands provided by the library. If RGB data is acquired, grayscale image data must be derived from the RGB data before locating the blind-use sign regions: Y = 0.30R + 0.59G + 0.11B.
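As an illustrative sketch of the conversion above (the interleaved 8-bit RGB buffer layout and the function name are assumptions, not part of the patent):

```c
#include <stddef.h>

/* Convert interleaved 8-bit RGB pixels to grayscale using the
 * luma weights given in the text: Y = 0.30R + 0.59G + 0.11B. */
void rgb_to_gray(const unsigned char *rgb, unsigned char *gray, size_t npixels)
{
    for (size_t i = 0; i < npixels; i++) {
        const unsigned char r = rgb[3 * i];
        const unsigned char g = rgb[3 * i + 1];
        const unsigned char b = rgb[3 * i + 2];
        /* round to nearest integer gray level */
        gray[i] = (unsigned char)(0.30 * r + 0.59 * g + 0.11 * b + 0.5);
    }
}
```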
Locating peripheral blind-use signs 300
When performing this step, the peripheral blind-use sign locating unit 3 proceeds as follows: first, step 301 begins a row scan from the middle row of the black-and-white image; step 302 checks for effective frame points, and if two are found, step 303 column-scans from the column midway between the two points to locate the peripheral blind-use sign; otherwise step 304 checks whether the scan has reached the image boundary, going to step 305 if so and otherwise performing the next round of row scanning and returning to step 301, until the image boundary is reached. Step 305 then begins a column scan from the middle column of the black-and-white image; step 306 checks for effective frame points, and if two are found, step 307 row-scans from the row midway between the two points to locate the peripheral blind-use sign; otherwise step 308 checks whether the scan has reached the image boundary, performing the next round of column scanning and returning to step 305 if not, until the image boundary is reached. Finally, step 309 outputs the peripheral blind-use sign localization results.
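The frame-point search within a single row can be sketched as follows. The patent does not define the effective frame-point test, so the dark-pixel threshold used here is purely an illustrative assumption:

```c
#define FRAME_THRESHOLD 64  /* assumed: a frame point is a pixel darker than this */

/* Scan one row of a grayscale image (width w) for the first two "effective
 * frame points", assumed here to be dark sign-border pixels. Stores their
 * columns in *c1 and *c2 and returns 1 if two are found, else returns 0. */
int find_frame_points_in_row(const unsigned char *row, int w, int *c1, int *c2)
{
    int found = 0;
    for (int x = 0; x < w; x++) {
        if (row[x] < FRAME_THRESHOLD) {
            if (found == 0) {
                *c1 = x;
                found = 1;
            } else if (x > *c1 + 1) {  /* skip pixels adjacent to the first hit */
                *c2 = x;
                return 1;
            }
        }
    }
    return 0;
}
```

The same routine, transposed, serves for the column scan of steps 305-308.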
Suppose the image size is 800x600. Rows (300 ± n*5) (n = 0, 1, ..., 59) are scanned in turn to locate peripheral blind-use sign regions; whenever a region has not yet been handled by the recognition unit, the peripheral blind-use sign is cropped and normalized to a 64x64 image. To ensure that all signs in the image are located, a column scan is then performed: columns (400 ± n*5) (n = 0, 1, ..., 79) are scanned in turn, and any blind-use sign region not yet handled by the recognition unit is cropped by the recognition module and normalized to a 64x64 image. The mean-removed, normalized cross-correlation coefficient between each cropped sign and each template blind-use sign is computed, and the template with the maximum correlation coefficient is selected as the result. The localization algorithm flow is shown in Fig. 11. The region cropping and normalization algorithm is as follows:
source image region top-left corner (x1, y1);
source image region top-right corner (x2, y2);
source image region bottom-right corner (x3, y3);
source image region bottom-left corner (x4, y4);
target image point (x, y) (x = 0, 1, ..., 63; y = 0, 1, ..., 63);
compute the interpolation point on the top edge of the source region: (x1 + (x/64)(x2-x1), y1 + (x/64)(y2-y1));
compute the interpolation point on the bottom edge of the source region: (x4 + (x/64)(x3-x4), y4 + (x/64)(y3-y4));
compute the line L1 through the top and bottom interpolation points;
compute the interpolation point on the left edge of the source region: (x1 + (y/64)(x4-x1), y1 + (y/64)(y4-y1));
compute the interpolation point on the right edge of the source region: (x2 + (y/64)(x3-x2), y2 + (y/64)(y3-y2));
compute the line L2 through the left and right interpolation points;
compute the intersection (x', y') of L1 and L2 on the source image;
bilinearly interpolate the source image gray value at (x', y') and take it as the gray value of the target image at (x, y).
The azimuth and range of a blind-use sign region are calculated as follows:
blind-use sign region top-left corner (x1, y1);
blind-use sign region top-right corner (x2, y2);
blind-use sign region bottom-right corner (x3, y3);
blind-use sign region bottom-left corner (x4, y4);
The center point of the blind-use sign region is ((x1+x2)/2, (y1+y4)/2). The nine-grid (JiuGongTu) algorithm determines which cell of the grid the center point falls in, and that cell's direction is taken as the direction of the blind-use sign relative to the blind user. The nine-grid is shown in Fig. 12.
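An illustrative sketch of the nine-grid lookup follows. The partition of the frame into equal thirds and the direction labels are assumptions; the actual grid layout is defined by Fig. 12:

```c
/* Map a point to one of nine direction labels by splitting the image
 * into a 3x3 grid of equal cells (an assumed partition). */
const char *nine_grid_direction(double cx, double cy, int width, int height)
{
    static const char *labels[3][3] = {
        { "upper-left", "up",     "upper-right" },
        { "left",       "center", "right"       },
        { "lower-left", "down",   "lower-right" },
    };
    int col = (int)(3.0 * cx / width);
    int row = (int)(3.0 * cy / height);
    if (col > 2) col = 2;  /* clamp points on the right/bottom edges */
    if (row > 2) row = 2;
    return labels[row][col];
}
```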
Recognizing peripheral blind-use signs 400
The peripheral blind-use sign recognition unit 4 crops all peripheral blind-use sign regions from the image provided by the locating module and normalizes them to 64x64 images. The mean-removed energy MeanEnergy of an image is:
MeanEnergy = Σ_{i=1}^{n} (V_i - V̄)²,  where V̄ = (1/n) Σ_{i=1}^{n} V_i.
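The mean-removed energy formula translates directly into code; as a sketch:

```c
/* Mean-removed energy of an n-sample image, per the formula above:
 * the sum over i of (V_i - mean)^2. */
double mean_energy(const unsigned char *v, int n)
{
    double mean = 0.0;
    for (int i = 0; i < n; i++)
        mean += v[i];
    mean /= n;

    double energy = 0.0;
    for (int i = 0; i < n; i++) {
        double d = v[i] - mean;
        energy += d * d;
    }
    return energy;
}
```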
The mean-removed normalized cross-correlation coefficient NCC of two images is defined as follows.
Let the template image be T(x, y) (x = 0, 1, ..., 63; y = 0, 1, ..., 63)
and the cropped image be S(x, y) (x = 0, 1, ..., 63; y = 0, 1, ..., 63).
Then NCC(S, T) is:
NCC(S, T) = Σ_x Σ_y (T(x,y) - T')(S(x,y) - S') / sqrt( Σ_x Σ_y (T(x,y) - T')² · Σ_x Σ_y (S(x,y) - S')² )
where T' = Σ_x Σ_y T(x,y)/4096 and S' = Σ_x Σ_y S(x,y)/4096.
The template that yields the maximum cross-correlation coefficient NCC is taken as the blind-use sign recognition result.
Speech synthesis output 500
The speech synthesis output unit 5 sends the recognition result to the speech synthesis engine to generate speech data, thereby providing blind users with anytime-anywhere scene recognition, analysis, and guidance services. The speech synthesis output flow is shown in Fig. 13: the text to be synthesized (510) is input to the speech synthesis engine (520), and the synthesized voice information is finally sent to the output device (530).
From the above description of the specific embodiments in conjunction with the drawings, other aspects and features of the present invention will be apparent to those skilled in the art.
The specific embodiments of the present invention have been described and illustrated above; these embodiments should be considered exemplary, are not used to limit the invention, and the invention should be construed according to the appended claims.

Claims (9)

1. A visual compensation system for the blind, comprising:
a template blind-use sign resource unit, for pre-storing feature data of different template blind-use signs, where the template blind-use signs correspond to different object information in the blind person's recognized environment;
a grayscale image capture unit, for the blind person to photograph the surrounding environment with a handheld mobile device and convert the captured images into grayscale images;
a peripheral blind-use sign locating unit, for locating all peripheral blind-use signs in the grayscale image, together with the relative direction and distance between each sign and the blind person;
a peripheral blind-use sign recognition unit, for comparing the located peripheral blind-use signs in the grayscale image against the template blind-use signs, so as to confirm which blind-use signs are present in the blind person's environment;
a speech synthesis output unit, for reporting to the blind person, in voice form, the object information provided by the locating unit and the recognition unit;
wherein the template blind-use sign resource unit comprises:
multi-class template blind-use signs, covering a numeric class, a traffic sign class, a daily-life class, a safety instruction class, a sports facility class, and a public facility class;
a template blind-use sign marking tool, for generating feature data corresponding to the multi-class template blind-use signs; and
a template blind-use sign feature database, for storing the feature data obtained by the marking tool, including eigenvalue image, energy value, sign name, and class.
2. The visual compensation system for the blind according to claim 1, characterized in that the grayscale image capture unit comprises:
a raw image data acquisition module, for photographing the blind person's surrounding environment; and
a grayscale image processing module, for converting the photographed environment image into a grayscale image.
3. The visual compensation system for the blind according to claim 1, characterized in that the feature recognition algorithm of the peripheral blind-use sign recognition unit proceeds as follows: crop the bounding box of the peripheral blind-use sign region; shrink the cropped sign image by an interpolation algorithm and normalize it to a 64x64 pixel image; compute the energy of the normalized image after mean removal; compute the cross-correlation coefficient between the 64x64 pixel image and each template eigenvalue image in the feature database; and select the template blind-use sign with the maximum cross-correlation value as the recognition result.
4. The visual compensation system for the blind according to claim 1, characterized in that the speech synthesis output unit comprises:
text to be synthesized, which corresponds to the feature information of a template blind-use sign;
a speech synthesis engine, which synthesizes the text into voice information; and
a voice output device, which delivers the voice information to the blind person's ear as sound waves.
5. A visual compensation method for the blind, comprising:
Step 1: using a template blind-use sign resource unit, pre-store the feature data of different template blind-use signs, where the signs correspond to different object information in the blind person's recognized environment;
Step 2: the blind person photographs the surrounding environment with a handheld mobile device, and the captured images are converted into grayscale images;
Step 3: using a peripheral blind-use sign locating unit, locate the peripheral blind-use signs in the grayscale image and the relative direction and distance between each sign and the blind person;
Step 4: using a peripheral blind-use sign recognition unit, compare the located peripheral blind-use signs in the grayscale image against the template blind-use signs, so as to confirm which template signs with corresponding object information are present in the blind person's environment;
Step 5: using a speech synthesis output unit, report to the blind person, in voice form, the object information provided by the locating unit and the recognition unit;
wherein Step 1 comprises:
Step 1.1: design multi-class template blind-use signs covering the numeric, traffic sign, daily-life, safety instruction, sports facility, and public facility classes;
Step 1.2: use the template blind-use sign marking tool to generate feature data corresponding to the multi-class signs; and
Step 1.3: use the template blind-use sign feature database to store the feature data generated by the marking tool, including eigenvalue image, energy value, sign name, and class.
6. The visual compensation method for the blind according to claim 5, characterized in that Step 2 comprises:
Step 2.1: use the raw image data acquisition module to photograph the blind person's surrounding environment; and
Step 2.2: use the grayscale image processing module to convert the photographed environment image into a grayscale image.
7. The visual compensation method for the blind according to claim 5, characterized in that Step 3 comprises:
Step 301: begin a row scan from the middle row of the black-and-white image;
Step 302: check for effective frame points; if two effective frame points are found, go to step 303, otherwise go to step 304;
Step 303: starting from the column midway between the two points, column-scan to locate the peripheral blind-use sign;
Step 304: check whether the scan has reached the image boundary; if so, go to step 305, otherwise perform the next round of row scanning and return to step 301, until the image boundary is reached;
Step 305: begin a column scan from the middle column of the black-and-white image;
Step 306: check for effective frame points; if two effective frame points are found, go to step 307, otherwise go to step 308;
Step 307: starting from the row midway between the two points, row-scan to locate the peripheral blind-use sign;
Step 308: check whether the scan has reached the image boundary; if not, perform the next round of column scanning and return to step 305, until the image boundary is reached; otherwise go to step 309;
Step 309: output the peripheral blind-use sign localization results.
8. The visual compensation method for the blind according to claim 5, characterized in that Step 4 comprises:
Step 4.1: crop the bounding box of the peripheral blind-use sign region;
Step 4.2: shrink the cropped sign image by an interpolation algorithm and normalize it to a 64x64 pixel image;
Step 4.3: compute the energy of the normalized image after mean removal; and
Step 4.4: compute the cross-correlation coefficient between the 64x64 pixel image and each template eigenvalue image in the feature database, and select the template blind-use sign with the maximum cross-correlation value as the recognition result.
9. The visual compensation method for the blind according to claim 5, characterized in that Step 5 comprises:
Step 5.1: use the speech synthesis engine to synthesize the text to be synthesized into voice information; and
Step 5.2: use the voice output device to deliver the voice information to the blind person's ear as sound waves.
CN 201110033786 2011-01-31 2011-01-31 Blind visual compensation method and system for implementing same Active CN102090947B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110033786 CN102090947B (en) 2011-01-31 2011-01-31 Blind visual compensation method and system for implementing same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201110033786 CN102090947B (en) 2011-01-31 2011-01-31 Blind visual compensation method and system for implementing same

Publications (2)

Publication Number Publication Date
CN102090947A CN102090947A (en) 2011-06-15
CN102090947B true CN102090947B (en) 2013-09-11

Family

ID=44124328

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110033786 Active CN102090947B (en) 2011-01-31 2011-01-31 Blind visual compensation method and system for implementing same

Country Status (1)

Country Link
CN (1) CN102090947B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101040809A (en) * 2007-04-19 2007-09-26 上海交通大学 Method for vision substitution based on cognition and target identification
CN101227539A (en) * 2007-01-18 2008-07-23 联想移动通信科技有限公司 Blind guiding mobile phone and blind guiding method
CN101584624A (en) * 2009-06-18 2009-11-25 上海交通大学 Guideboard recognition blind-guide device and method thereof based on DSP

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101227539A (en) * 2007-01-18 2008-07-23 联想移动通信科技有限公司 Blind guiding mobile phone and blind guiding method
CN101040809A (en) * 2007-04-19 2007-09-26 上海交通大学 Method for vision substitution based on cognition and target identification
CN101584624A (en) * 2009-06-18 2009-11-25 上海交通大学 Guideboard recognition blind-guide device and method thereof based on DSP

Also Published As

Publication number Publication date
CN102090947A (en) 2011-06-15

Similar Documents

Publication Publication Date Title
CN102906810B (en) Augmented reality panorama supporting visually impaired individuals
JP6549898B2 (en) Object detection system, object detection method, POI information creation system, warning system, and guidance system
US8687887B2 (en) Image processing method, image processing apparatus, and image processing program
EP2704102A3 (en) Portable augmented reality device and method
Rajesh et al. Text recognition and face detection aid for visually impaired person using Raspberry PI
CN103283225A (en) Multi-resolution image display
AlSaid et al. Deep learning assisted smart glasses as educational aid for visually challenged students
CN110399810B (en) Auxiliary roll-call method and device
Tripathy et al. Voice for the mute
CN109670458A (en) License plate recognition method and device
CN116129129B (en) Character interaction detection model and detection method
CN114677644A (en) Student seating distribution identification method and system based on classroom monitoring video
CN111539408A (en) Intelligent point reading scheme based on photographing and object recognizing
CN104361357A (en) Photo set classification system and method based on picture content analysis
US8611698B2 (en) Method for image reframing
Reda et al. Svbicomm: sign-voice bidirectional communication system for normal,“deaf/dumb” and blind people based on machine learning
CN102090947B (en) Blind visual compensation method and system for implementing same
CN116630163A (en) Method for reconstructing super-resolution of self-adaptive endoscope image
CN111199050B (en) System for automatically desensitizing medical records and application
CN114220175B (en) Motion pattern recognition method and device, equipment, medium and product thereof
CN116862920A (en) Portrait segmentation method, device, equipment and medium
CN112784631A (en) Method for recognizing face emotion based on deep neural network
Muralidharan et al. Reading aid for visually impaired people
CN110602479A (en) Video conversion method and system
Shaikh et al. Identification and Recognition of Text and Face Using Image Processing for Visually Impaired

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant