CN107157651A - Visual image perception system and method based on acoustic stimulation - Google Patents

Visual image perception system and method based on acoustic stimulation

Info

Publication number
CN107157651A
CN107157651A CN201710441277.1A
Authority
CN
China
Prior art keywords
sound
signal
node
dimension
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710441277.1A
Other languages
Chinese (zh)
Inventor
王宁远
丁鼐
苏乃婓
孙晓安
黄穗
张晓薇
田春
李方波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Nurotron Neural Electronic Technology Co Ltd
Original Assignee
Zhejiang Nurotron Neural Electronic Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Nurotron Neural Electronic Technology Co Ltd filed Critical Zhejiang Nurotron Neural Electronic Technology Co Ltd
Priority to CN201710441277.1A priority Critical patent/CN107157651A/en
Publication of CN107157651A publication Critical patent/CN107157651A/en
Pending legal-status Critical Current

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61FFILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
    • A61F9/00Methods or devices for treatment of the eyes; Devices for putting-in contact lenses; Devices to correct squinting; Apparatus to guide the blind; Protective devices for the eyes, carried on the body or in the hand
    • A61F9/08Devices or methods enabling eye-patients to replace direct visual perception by another kind of perception
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/34Smoothing or thinning of the pattern; Morphological operations; Skeletonisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00Speech synthesis; Text to speech systems
    • G10L13/02Methods for producing synthetic speech; Speech synthesisers
    • G10L13/033Voice editing, e.g. manipulating the voice of the synthesiser
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00Speech synthesis; Text to speech systems
    • G10L13/08Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Vascular Medicine (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Biomedical Technology (AREA)
  • Ophthalmology & Optometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a visual image perception system and method based on acoustic stimulation. The system includes a spectacle frame, an external processor and earphones, with a camera mounted on the spectacle frame. The external processor includes an image processing module, a sound synthesis module and a sound playback module. The camera is connected to the image processing module of the external processor and captures two-dimensional or three-dimensional images; the image processing module performs depth detection, binarization, contour extraction and character recognition on the images and outputs the result to the sound synthesis module for acoustic processing, after which the sound signal is transmitted to the sound playback module and played through a binaural air-conduction or bone-conduction hearing device. The invention can help blind users perceive the contour, shape and even distance of objects through stereo sound, facilitating simple object recognition and bringing great convenience to their daily lives.

Description

Visual image perception system and method based on acoustic stimulation
Technical field
The invention belongs to the field of signal processing, and more particularly relates to a visual image perception system and method based on acoustic stimulation.
Background art
Few medical products on the market can assist totally blind patients in daily life. Visual-assistance devices currently certified by the U.S. FDA fall broadly into two classes. The first class comprises surgically implanted visual prostheses: a camera first captures the image information in front of the implantee, the information is encoded into electrical pulse signals, and an electrode array implanted on the patient's retina then stimulates the optic nerve, helping the patient recover some light perception. The main problems with such devices are the inherent surgical risk, the fact that some patients (for example those with optic nerve damage) are unsuitable for implantation, and the high price (over one hundred thousand US dollars per set), so that most patients cannot afford artificial-vision devices. The second class of devices conveys two-dimensional image information to the blind user through other perceptual channels; the user then "translates" the received information in the brain and thereby obtains some of the image content. For example, an electrode array can stimulate the skin or the tongue, so that two-dimensional image information is received by touch and converted into "visual perception". Such devices are risk-free and relatively cheap, but they are inconvenient and unsightly to use, since the electrode array must be attached to the scalp or held in the mouth.
Summary of the invention
In view of this, the object of the present invention is to provide a system that requires no surgical implantation, is low-cost and easy to use, and can still convey image information to blind users, thereby solving many difficulties in their basic daily life and benefiting society.
To achieve the above object, the invention provides a visual image perception system based on acoustic stimulation, including a spectacle frame, an external processor and earphones, wherein a camera is mounted on the spectacle frame; the external processor includes an image processing module, a sound synthesis module and a sound playback module.
The camera is connected to the image processing module of the external processor. The camera captures a two-dimensional or three-dimensional image; the image processing module performs depth detection, binarization, contour extraction and character recognition on the image and outputs the result to the sound synthesis module for acoustic processing, after which the sound signal is transmitted to the sound playback module and played through an air-conduction or bone-conduction hearing device.
Preferably, the image processing module includes at least a depth detection unit, a binarization unit, a contour extraction unit and a character recognition unit connected in sequence.
Preferably, the camera includes one or two autofocus (AF) lenses.
Preferably, the sound synthesis module includes a head-related transfer function (HRTF) filter unit.
Preferably, the sound synthesis module synthesizes the sound corresponding to an image according to the following formulas:
A(i) = S · H(i) · G(i)
A = A(1) → A(2) → A(3) → A(4) → … → A(n)
where S is the reference sound in the frequency domain; H(i) is the head-related transfer function corresponding to the i-th point in the plane; G(i) is the gain of the i-th sound, determined from the distance of the object; A(i) is the i-th sound in auditory space; and A is the sound signal that conveys the current two-dimensional or three-dimensional image, formed by playing all the corresponding sounds in the plane in succession.
Based on the above object, the invention also provides a visual image perception method based on acoustic stimulation using the above system, comprising the following steps:
the camera captures a two-dimensional or three-dimensional image, and image processing is performed to obtain a simplified two-dimensional or three-dimensional image;
sound synthesis processing is performed according to the simplified two-dimensional or three-dimensional image;
the processed sound is transmitted to the earphones and played.
Preferably, the image processing comprises the following steps:
preprocessing: depth detection is performed on the two-dimensional or three-dimensional image, followed by grayscale conversion, binarization and denoising;
contour extraction: image cropping, image thinning and image compression are performed in sequence;
character recognition: the characters or edge contours in the captured two-dimensional or three-dimensional image are output.
Preferably, the sound synthesis is performed according to head-related transfer functions.
Preferably, the sound synthesis processing comprises the following steps:
Step 1: assuming the plane has n rows and n columns, traversal begins at the top-left node, denoted ring 1, whose coordinate is (1, n); if the node carries a signal, proceed to step 2, otherwise proceed to step 4.
Step 2: when a signal node is reached, it is set as the current node and its sound signal is played first; the neighbouring node lying in the same orientation as the current traversal direction is then selected for traversal. If that node carries a signal, step 2 is repeated until the traversal is complete; if it carries no signal, proceed to step 3.
Step 3: since the node in the current traversal direction carries no signal, rotate clockwise around the current node from this node, traversing its neighbours in turn; if a signal node is encountered, return to step 2, otherwise proceed to step 4.
Step 4: if ring i has been traversed, continue with ring i+1, whose node coordinates are, in order, (i, n), (i, n-1), (i, n-2) … (i, n-i+1), (i-1, n-i+1), (i-2, n-i+1) … (1, n-i+1); if any of these nodes carries a signal, continue with step 2, otherwise traverse ring i+2 further out, until a signal node is encountered or all nodes of the plane have been traversed.
Preferably, the sound synthesis processing is performed according to the following formulas:
A(i) = S · H(i) · G(i)
A = A(1) → A(2) → A(3) → A(4) → … → A(n)
where S is the reference sound in the frequency domain; H(i) is the head-related transfer function corresponding to the i-th point in the plane; G(i) is the gain of the i-th sound, determined from the distance of the object; A(i) is the i-th sound in auditory space; and A is the sound signal that conveys the current two-dimensional or three-dimensional image, formed by playing all the corresponding sounds in the plane in succession.
The beneficial effects of the present invention are as follows. The invention uses head-related transfer functions, so any sound can be processed to appear to arrive from a specific direction and played to the patient through earphones. This means that for any simple two-dimensional image, this kind of successive sound processing can generate the same image in auditory space, where it is perceived; in effect, sound becomes a pen that sketches the desired figure. Moreover, because blind people rely on hearing in daily life over long periods, they are generally better than sighted people at localizing sound, so the invention can perform all the better in helping them perceive the contour, shape and even distance of objects through sound, facilitating simple object recognition and bringing great convenience to their lives.
Brief description of the drawings
To make the object, technical solution and beneficial effects of the present invention clearer, the invention provides the following drawings for explanation:
Fig. 1 is a structural diagram of a visual image perception system based on acoustic stimulation according to embodiment 1 of the invention;
Fig. 2 is a structural diagram of a visual image perception system based on acoustic stimulation according to embodiment 2 of the invention;
Fig. 3 is a flow chart of a visual image perception method based on acoustic stimulation according to embodiment 1 of the invention;
Fig. 4 is a flow chart of a visual image perception method based on acoustic stimulation according to embodiment 2 of the invention;
Fig. 5 is a visual image produced by the image processing module of a visual image perception system based on acoustic stimulation according to an embodiment of the invention.
Detailed description of the embodiments
The preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Embodiment 1
Referring to Fig. 1, a visual image perception system based on acoustic stimulation according to embodiment 1 of the invention includes a spectacle frame 10, an external processor 20 and earphones 30, wherein a camera is mounted on the spectacle frame 10; the external processor 20 includes an image processing module 210, a sound synthesis module 220 and a sound playback module 230.
The camera is connected to the image processing module 210 of the external processor 20. The camera captures a two-dimensional or three-dimensional image; the image processing module 210 performs depth detection, binarization, contour extraction and character recognition on the image and outputs the result to the sound synthesis module 220 for acoustic processing, after which the sound signal is transmitted to the sound playback module 230 and played through the earphones 30.
Embodiment 2
On the basis of embodiment 1 and referring to Fig. 2, in a visual image perception system based on acoustic stimulation according to embodiment 2 of the invention, the image processing module 210 includes at least a depth detection unit 214, a binarization unit 211, a contour extraction unit 212 and a character recognition unit 213 connected in sequence.
In a specific embodiment, one camera is needed to capture two-dimensional images and two cameras are needed to capture three-dimensional images; each camera includes an AF lens.
The sound synthesis module 220 includes a head-related transfer function filter unit. Because organs such as the head and the pinna act much like filters, sounds arriving from different directions are affected differently in the frequency domain; based on prior experience, the human brain can therefore automatically identify the direction a sound comes from by its frequency changes. In the specific acoustic processing, a standard sound source is selected first, such as a pure tone, a complex tone, white noise or a human voice; then, for each point at a different position on the two-dimensional plane, the sound is filtered with the corresponding head-related transfer function, producing the auditory sensation of the corresponding position in auditory space. All the sounds processed in this way are then played in rapid succession, giving the listener an acoustic sense of the contour.
The sound synthesis module 220 synthesizes the sound corresponding to an image according to the following formulas:
A(i) = S · H(i) · G(i)
A = A(1) → A(2) → A(3) → A(4) → … → A(n)
where S is the reference sound in the frequency domain; H(i) is the head-related transfer function corresponding to the i-th point in the plane; G(i) is the gain of the i-th sound, determined from the distance of the object; A(i) is the i-th sound in auditory space; and A is the sound signal that conveys the current two-dimensional or three-dimensional image, formed by playing all the corresponding sounds in the plane in succession.
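The synthesis formulas above can be sketched in a few lines of code. This is a minimal illustration, not the patented implementation: the flat transfer-function spectra and the fixed gains used here are placeholder assumptions standing in for measured HRTFs and distance estimates.

```python
import numpy as np

def synthesize(S, H, G):
    """Sketch of A(i) = S * H(i) * G(i), with A formed by playing
    the per-position sounds A(1) ... A(n) in succession.

    S : reference sound spectrum (rfft bins of one frame)
    H : per-position transfer-function spectra, shape (n, len(S));
        placeholder values here, standing in for measured HRTFs
    G : per-position gains, shape (n,), judged from object distance
    """
    segments = [np.fft.irfft(S * Hi * Gi)   # filter + gain, back to time domain
                for Hi, Gi in zip(H, G)]
    return np.concatenate(segments)         # A = A(1) -> A(2) -> ... -> A(n)

# Example: a 440 Hz reference tone at 8 kHz sampling, three plane positions
# with decreasing gain (i.e. increasing simulated distance).
fs, n_fft = 8000, 512
t = np.arange(n_fft) / fs
S = np.fft.rfft(np.sin(2 * np.pi * 440 * t))
H = np.ones((3, len(S)))                    # flat "HRTFs" (assumption)
G = np.array([1.0, 0.5, 0.25])
A = synthesize(S, H, G)
```

Each segment of A is simply the reference tone filtered and scaled for one position, so the nearer (higher-gain) positions come out louder.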
Corresponding to the above system, the invention also provides a visual image perception method based on acoustic stimulation; the flow of embodiment 1, referring to Fig. 3, comprises the following steps:
S10: the camera captures a two-dimensional or three-dimensional image, and image processing is performed to obtain a simplified two-dimensional or three-dimensional image;
S20: sound synthesis processing is performed according to the simplified two-dimensional or three-dimensional image;
S30: the processed sound is transmitted to the earphones and played.
In method embodiment 2, referring to Fig. 4, the image processing in S10 and the sound synthesis processing in S20 comprise the following steps:
S101, preprocessing: depth detection is performed on the two-dimensional or three-dimensional image, followed by grayscale conversion, binarization and denoising;
S102, contour extraction: image cropping, image thinning and image compression are performed in sequence;
S103, character recognition: the characters or edge contours in the captured two-dimensional or three-dimensional image are output.
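The binarization and denoising of step S101 can be sketched as follows. This toy version assumes a single 2-D RGB image (depth detection omitted) and uses a 3x3 majority filter as the denoiser; the patent does not specify which denoising method is used, so that choice is illustrative only.

```python
import numpy as np

def preprocess(rgb, threshold=128):
    """Grayscale -> binarize -> denoise, per step S101 (depth detection
    omitted; the majority-filter denoiser is an illustrative choice)."""
    gray = rgb.mean(axis=2)                        # grayscale conversion
    binary = (gray >= threshold).astype(np.uint8)  # binarization
    # 3x3 majority vote as a crude denoiser: keep a pixel white only if
    # at least 5 of the 9 pixels in its neighbourhood are white.
    padded = np.pad(binary, 1)
    h, w = binary.shape
    neigh = sum(padded[i:i + h, j:j + w] for i in range(3) for j in range(3))
    return (neigh >= 5).astype(np.uint8)

# A white square plus one isolated noise pixel: the square survives,
# the lone pixel is removed.
img = np.zeros((10, 10, 3), dtype=np.uint8)
img[2:6, 2:6] = 255      # 4x4 white square
img[8, 8] = 255          # salt noise
clean = preprocess(img)
```

The majority filter also slightly erodes the square's corners, which is a known trade-off of such simple denoisers.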
S201: assuming the plane of auditory space has n rows and n columns, traversal begins at the top-left node, denoted ring 1, whose coordinate is (1, n); if the node carries a signal, proceed to S202, otherwise proceed to S204.
S202: when a signal node is reached, it is set as the current node; its sound signal is played first and the node is then marked as having no signal to avoid repeated playback. The neighbouring node lying in the same orientation as the current traversal direction is then selected for traversal; if it carries a signal, S202 is repeated until the traversal is complete, and if it carries no signal, proceed to S203.
Here the current traversal direction refers to the direction of the line connecting the two most recently traversed adjacent signal nodes. For example, if the traversal moves from (1, n) to (2, n) and both nodes carry a signal, the current traversal direction is set to due right, meaning the adjacent node directly to the right of the current node should be traversed first, and so on; the traversal direction defaults to due right.
S203: since the node in the current traversal direction carries no signal, rotate clockwise around the current node from this node, traversing its neighbours in turn; if a signal node is encountered, return to S202, otherwise proceed to S204.
S204: if ring i has been traversed, continue with ring i+1, whose node coordinates are, in order, (i, n), (i, n-1), (i, n-2) … (i, n-i+1), (i-1, n-i+1), (i-2, n-i+1) … (1, n-i+1); if a signal node is encountered during this traversal, continue with S202, otherwise traverse ring i+2 further out, until a signal node is encountered or all nodes of the plane have been traversed.
In a specific embodiment, the sound synthesis processing in S20 is performed according to the following formulas:
A(i) = S · H(i) · G(i)
A = A(1) → A(2) → A(3) → A(4) → … → A(n)
where S is the reference sound in the frequency domain; H(i) is the head-related transfer function corresponding to the i-th point in the plane; G(i) is the gain of the i-th sound, determined from the distance of the object; A(i) is the i-th sound in auditory space; and A is the sound signal that conveys the current two-dimensional or three-dimensional image, formed by playing all the corresponding sounds in the plane in succession.
Fig. 5 shows a visual image produced by the image processing module of a visual image perception system based on acoustic stimulation according to an embodiment of the invention.
In this specific embodiment, the camera has captured an image of the character "8", whose contour image after image processing is shown in Fig. 5. Sounds can be played in sequence starting from the lower-left corner, quickly sweeping over the whole contour of the character "8", so that the outline of the entire character is generated in the listener's brain.
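As a rough end-to-end illustration of the "8" example, the sketch below replaces HRTF filtering with simple stereo amplitude panning, which is an assumption far cruder than real HRTFs: each contour point becomes a short tone whose left/right balance follows its horizontal position and whose pitch follows its vertical position.

```python
import numpy as np

def contour_to_stereo(points, n, fs=8000, dur=0.05):
    """Map contour points (row, col) on an n x n grid to a stereo signal.

    Stereo panning and a pitch mapping stand in for the patent's HRTF
    filtering; they only hint at the intended spatial percept.
    """
    t = np.arange(int(fs * dur)) / fs
    chunks = []
    for r, c in points:
        f = 300 + 900 * (n - 1 - r) / max(n - 1, 1)  # higher row -> higher pitch
        tone = np.sin(2 * np.pi * f * t)
        pan = c / max(n - 1, 1)                      # 0 = full left, 1 = full right
        chunks.append(np.stack([(1 - pan) * tone, pan * tone], axis=1))
    return np.concatenate(chunks)                    # shape (samples, 2)

# Sweep the four corners of a 5x5 grid, lower-left corner first.
sig = contour_to_stereo([(4, 0), (0, 0), (0, 4), (4, 4)], n=5)
```

Played through earphones, the tone sequence starts fully in the left ear and ends fully in the right, mimicking the left-to-right sweep described for the "8" contour.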
Finally, it should be noted that the above preferred embodiments merely illustrate the technical solution of the invention and are not restrictive. Although the invention has been described in detail through the above preferred embodiments, those skilled in the art will understand that various changes in form and detail may be made to it without departing from the scope defined by the claims of the invention.

Claims (10)

1. A visual image perception system based on acoustic stimulation, characterized by including a spectacle frame, an external processor and earphones, wherein a camera is mounted on the spectacle frame; the external processor includes an image processing module, a sound synthesis module and a sound playback module;
the camera is connected to the image processing module of the external processor; the camera captures a two-dimensional or three-dimensional image; the image processing module performs depth detection, binarization, contour extraction and character recognition on the image and outputs the result to the sound synthesis module for acoustic processing, after which the sound signal is transmitted to the sound playback module and played through a binaural air-conduction or bone-conduction hearing device.
2. The visual image perception system based on acoustic stimulation according to claim 1, characterized in that the image processing module includes at least a depth detection unit, a binarization unit, a contour extraction unit and a character recognition unit connected in sequence.
3. The visual image perception system based on acoustic stimulation according to claim 1, characterized in that the camera includes one or two AF lenses.
4. The visual image perception system based on acoustic stimulation according to claim 1, characterized in that the sound synthesis module includes a head-related transfer function filter unit.
5. The visual image perception system based on acoustic stimulation according to claim 1, characterized in that the sound synthesis module synthesizes the sound corresponding to an image according to the following formulas:
A(i) = S · H(i) · G(i)
A = A(1) → A(2) → A(3) → A(4) → … → A(n)
where S is the reference sound in the frequency domain; H(i) is the head-related transfer function corresponding to the i-th point in the plane; G(i) is the gain of the i-th sound, determined from the distance of the object; A(i) is the i-th sound in auditory space; and A is the sound signal that conveys the current two-dimensional or three-dimensional image, formed by playing all the corresponding sounds in the plane in succession.
6. A visual image perception method based on acoustic stimulation using the system of any one of claims 1 to 5, characterized by comprising the following steps:
the camera captures a two-dimensional or three-dimensional image, and image processing is performed to obtain a simplified two-dimensional or three-dimensional image;
sound synthesis processing is performed according to the simplified two-dimensional or three-dimensional image;
the processed sound is transmitted to the earphones and played.
7. The method according to claim 6, characterized in that the image processing comprises the following steps:
preprocessing: depth detection is performed on the two-dimensional or three-dimensional image, followed by grayscale conversion, binarization and denoising;
contour extraction: image cropping, image thinning and image compression are performed in sequence;
character recognition: the characters or edge contours in the captured two-dimensional or three-dimensional image are output.
8. The method according to claim 6, characterized in that the sound synthesis is performed according to head-related transfer functions.
9. The method according to claim 6, characterized in that the sound synthesis processing comprises the following steps:
step 1: assuming the plane of auditory space has n rows and n columns, traversal begins at the top-left node, denoted ring 1, whose coordinate is (1, n); if the node carries a signal, proceed to step 2, otherwise proceed to step 4;
step 2: when a signal node is reached, it is set as the current node; its sound signal is played first and the node is then marked as having no signal to avoid repeated playback; the neighbouring node lying in the same orientation as the current traversal direction is then selected for traversal; if it carries a signal, step 2 is repeated until the traversal is complete, and if it carries no signal, proceed to step 3;
step 3: since the node in the current traversal direction carries no signal, rotate clockwise around the current node from this node, traversing its neighbours in turn; if a signal node is encountered, return to step 2, otherwise proceed to step 4;
step 4: if ring i has been traversed, continue with ring i+1, whose node coordinates are, in order, (i, n), (i, n-1), (i, n-2) … (i, n-i+1), (i-1, n-i+1), (i-2, n-i+1) … (1, n-i+1); if a signal node is encountered during this traversal, continue with step 2, otherwise traverse ring i+2 further out, until a signal node is encountered or all nodes of the plane have been traversed.
10. The method according to claim 6, characterized in that the sound synthesis processing is performed according to the following formulas:
A(i) = S · H(i) · G(i)
A = A(1) → A(2) → A(3) → A(4) → … → A(n)
where S is the reference sound in the frequency domain; H(i) is the head-related transfer function corresponding to the i-th point in the plane; G(i) is the gain of the i-th sound, determined from the distance of the object; A(i) is the i-th sound in auditory space; and A is the sound signal that conveys the current two-dimensional or three-dimensional image, formed by playing all the corresponding sounds in the plane in succession.
CN201710441277.1A 2017-06-13 2017-06-13 Visual image perception system and method based on acoustic stimulation Pending CN107157651A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710441277.1A CN107157651A (en) 2017-06-13 2017-06-13 Visual image perception system and method based on acoustic stimulation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710441277.1A CN107157651A (en) 2017-06-13 2017-06-13 Visual image perception system and method based on acoustic stimulation

Publications (1)

Publication Number Publication Date
CN107157651A true CN107157651A (en) 2017-09-15

Family

ID=59825338

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710441277.1A Pending CN107157651A (en) Visual image perception system and method based on acoustic stimulation

Country Status (1)

Country Link
CN (1) CN107157651A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109157738A (en) * 2018-07-23 2019-01-08 浙江诺尔康神经电子科技股份有限公司 Artificial retina amplitude-frequency based on deep vision regulates and controls method and system

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003084784A (en) * 2001-09-12 2003-03-19 Kaisen Baitai Kenkyusho:Kk Shape transmission device
CN101040809A (en) * 2007-04-19 2007-09-26 上海交通大学 Method for replacing seeing based on the cognizing and target identification
JP2008023237A (en) * 2006-07-25 2008-02-07 Miki Yasuma Navigation system for visually handicapped person
CN101385677A (en) * 2008-10-16 2009-03-18 上海交通大学 Blind guiding method and device based on moving body track
CN101584624A (en) * 2009-06-18 2009-11-25 上海交通大学 Guideboard recognition blind-guide device and method thereof based on DSP
WO2011036288A1 (en) * 2009-09-28 2011-03-31 Siemens Aktiengesellschaft Device and method for assisting visually impaired individuals, using three-dimensionally resolved object identification
JP2011067479A (en) * 2009-09-28 2011-04-07 Masahiro Kuroda Image auralization apparatus
CN102688120A (en) * 2012-06-08 2012-09-26 綦峰 Colored audio and video guide method and colored audio and video guide device
CN103908365A (en) * 2014-04-09 2014-07-09 天津思博科科技发展有限公司 Electronic travel assisting device
CN105761235A (en) * 2014-12-19 2016-07-13 天津市巨海机电设备安装有限公司 Vision auxiliary method converting vision information to auditory information

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109157738A (en) * 2018-07-23 2019-01-08 Zhejiang Nurotron Neural Electronic Technology Co Ltd Artificial retina amplitude-frequency regulation method and system based on depth vision
CN109157738B (en) * 2018-07-23 2022-02-15 Zhejiang Nurotron Neural Electronic Technology Co Ltd Artificial retina amplitude modulation control method and system based on depth vision

Similar Documents

Publication Publication Date Title
Balakrishnan et al. Wearable real-time stereo vision for the visually impaired.
CN109585021B (en) Mental state evaluation method based on holographic projection technology
JP6771548B2 (en) A portable system that allows the blind or visually impaired to interpret the surrounding environment by voice or touch.
CN106137532A (en) The image processing apparatus of visual cortex prosthese and method
Macpherson Sensory substitution and augmentation: An introduction
CN103680231B (en) Multi information synchronous coding learning device and method
CN105761235A (en) Vision auxiliary method converting vision information to auditory information
Balakrishnan et al. A stereo image processing system for visually impaired
DE112021001516T5 (en) HEARING AID UNIT WITH INTELLIGENT AUDIO FOCUS CONTROL
EP3058926A1 (en) Method of transforming visual data into acoustic signals and aid device for visually impaired or blind persons
CN107157651A (en) A kind of visual pattern sensory perceptual system and method based on sonic stimulation
Hoang et al. Obstacle detection and warning for visually impaired people based on electrode matrix and mobile Kinect
CN203724273U (en) Cochlear implant
TWI398243B (en) Electrode simulating method and system thereof
CN102688120B (en) Colored audio and video guide method and colored audio and video guide device
CN113674593A (en) Head-wearing forehead machine system for touch display
CN103479449B (en) Acquired blindness human brain marine origin is as the system and method in the perception external world
Valencia et al. A computer-vision based sensory substitution device for the visually impaired (See ColOr).
CN205681580U (en) The perceived distance device of synthetic eye
Ye et al. A wearable vision-to-audio sensory substitution device for blind assistance and the correlated neural substrates
CN106888420B (en) A kind of audio collecting device
Tóth et al. Autoencoding sensory substitution
CN101866481B (en) Image processing method based on binocular stereoscopic psychological perception
CN101114336A (en) Artificial visible sensation image processing process based on wavelet transforming
CN113050917A (en) Intelligent blind-aiding glasses system capable of sensing environment three-dimensionally

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170915