CN112669679B - Social contact device and method for visually impaired people and mobile terminal - Google Patents


Info

Publication number
CN112669679B
CN112669679B (application CN202011344067.9A)
Authority
CN
China
Prior art keywords
instruction
unit
information
voice
visually impaired
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011344067.9A
Other languages
Chinese (zh)
Other versions
CN112669679A (en)
Inventor
任刚
孟卫东
王刚
洪鑫烨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen University of Technology
Original Assignee
Xiamen University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen University of Technology filed Critical Xiamen University of Technology
Priority to CN202011344067.9A
Publication of CN112669679A
Application granted
Publication of CN112669679B
Legal status: Active

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a social device, a social method, and a mobile terminal for visually impaired people. The method constructs an environment model of the visually impaired person's surroundings; determines the speaker's positioning information; maps that positioning information into the environment model; retrieves the speaker's personal information; plays and displays the personal information; and stores the environment model, the speaker's personal information, and the utterance content. By combining dual positioning with the constructed environment model, the invention locates the people and objects in the environment in which the visually impaired person is situated, so that the visually impaired person can maintain a normal 'gaze' during social interaction; connecting a mobile terminal to the social device makes the device intelligent and convenient to use.

Description

Social contact device and method for visually impaired people and mobile terminal
Technical Field
The invention relates to the technical field of image-voice processing, in particular to a social device and method for visually impaired people and a mobile terminal.
Background
In the prior art, visually impaired people often cannot face the person speaking to them during daily communication, and they also cannot effectively perceive their surroundings or 'gaze' at a particular direction or object. Existing assistive tools do not solve this in an intelligent way.
Therefore, it is necessary to provide a technical solution to solve the above technical problems.
Disclosure of Invention
In view of the above, the embodiment of the invention provides a social device and a social method for visually impaired people and a mobile terminal.
The first aspect of the embodiments of the invention provides a social device for visually impaired people, which comprises an environment model unit, a first positioning unit, a second positioning unit, a voice unit, a processor unit and a storage unit:
the environment model unit is used for constructing an environment model of the visually impaired person's surroundings;
the first positioning unit is used for performing first positioning on the visually impaired person; the second positioning unit performs second positioning on the speaker and transmits the first positioning information and the second positioning information to the processor unit;
the processor unit is configured to map the positioning information into the environment model and, at the same time, to retrieve the speaker's personal information for the voice unit;
the voice unit is used for playing the speaker's personal information, wherein the personal information comprises the speaker's name;
the storage unit is used for storing the environment model, the speaker's personal information and the utterance content.
Preferably, in the present invention, the social device further comprises an image acquisition unit;
the image acquisition unit acquires direction image positioning information of the speaker, color image information and object image information, and transmits the acquired information to the environment model unit.
Preferably, in the present invention, the social device further comprises a rotation unit,
the voice unit is also used for receiving voice instructions from the visually impaired person, wherein a voice instruction comprises at least one of a direction instruction, a speaker-name instruction and an object-position instruction;
the processor unit is used for parsing the voice instruction into a first rotation instruction and sending the first rotation instruction to the rotation unit;
the rotation unit is used for rotating the social device to face the specified direction according to the first rotation instruction sent by the processor unit.
Preferably, in the present invention, the social device further includes a first updating unit and a second updating unit;
the first updating unit is used for quickly constructing a corresponding model and merging it into the environment model when new people appear in the environment model or the people in it change;
the second updating unit is used for updating the environment model when the position of the visually impaired person changes.
Preferably, in the invention, the social device further comprises a conversion unit and a touch unit;
the conversion unit is used for converting the speaker's utterance content into braille information;
the touch unit is used for displaying the braille information so that the visually impaired person can read it by touch.
Preferably, in the present invention, the social device further comprises a wireless unit,
the wireless unit is used for receiving a second instruction sent by the mobile terminal, wherein the second instruction comprises at least one of a second rotation instruction, a recording instruction, a conversion instruction and a voice playing instruction;
and the processor unit parses the second instruction into the corresponding operation and controls the relevant units to execute it.
A second aspect of the embodiments of the present invention provides a social method for visually impaired people, executed by the social device for visually impaired people described above:
constructing an environment model of the visually impaired person's surroundings;
determining the speaker's positioning information;
mapping the positioning information into the environment model;
retrieving the speaker's personal information;
playing and displaying the personal information;
storing the environment model, the speaker's personal information and the utterance content.
Preferably, in the invention, the method further comprises: receiving a voice instruction from the visually impaired person, wherein the voice instruction comprises at least one of a direction instruction, a speaker-name instruction and an object-position instruction;
parsing the voice instruction into a first rotation instruction;
and rotating the social device to face the specified direction according to the first rotation instruction.
Preferably, in the present invention, the method further comprises: receiving a second instruction sent by the mobile terminal, wherein the second instruction includes at least one of a second rotation instruction, a recording instruction, a conversion instruction and a voice playing instruction;
and parsing the second instruction into the corresponding operation and executing it.
A third aspect of the embodiments of the present invention provides a mobile terminal, which includes a mobile voice module, a mobile operation module and a mobile wireless transmission module;
the mobile voice module is used for collecting the operator's spoken operation information;
the mobile operation module is used for collecting the operator's mobile operation information;
the mobile wireless transmission module is used for transmitting at least one of the spoken operation information and the mobile operation information to the social device for visually impaired people, and for receiving feedback information from the social device.
Compared with the prior art, the embodiments of the invention have the following beneficial effects:
by combining dual positioning with the constructed environment model, the invention locates the people and objects in the environment in which the visually impaired person is situated and ensures that the visually impaired person maintains a normal 'gaze' during social interaction; connecting a mobile terminal to the social device makes the device intelligent and convenient to use.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed for the embodiments or the description of the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and a person skilled in the art could obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of the overall structure of a social device for visually impaired people according to an embodiment of the present invention;
FIG. 2 is a flow chart of a social method for visually impaired people according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of a mobile terminal according to a third embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be further understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
In order to illustrate the technical scheme of the invention, the following description is made by specific examples.
Referring to fig. 1, which shows the overall structure of a social device for visually impaired people according to an embodiment of the present invention. As shown in fig. 1, the social device comprises an environment model unit, a first positioning unit, a second positioning unit, a voice unit, a processor unit and a storage unit.
the social device can be a common wearing type such as a helmet type, a glasses type and the like.
The environment model unit is used for constructing an environment model of the visually impaired person's surroundings. The social device can store environment models commonly used by multiple visually impaired people and upload them to the cloud as environment-model templates. When a visually impaired person needs one, the processor unit can promptly retrieve the template from the cloud server and update it with the first updating unit described later, so that the environment model is constructed quickly and accurately.
Further, the environment model is intended to capture, as soon as possible, the entire environment in which the visually impaired person is located, so that the device can be used immediately; the environment includes the people and the main objects in it.
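The patent does not specify how the environment model is represented. As a minimal sketch only, assuming a simple polar representation (direction and distance relative to the wearer), the model could be held as a small data structure; the names `Entity` and `EnvironmentModel` below are illustrative, not taken from the patent.

```python
# Illustrative sketch: a minimal environment model that stores people and
# objects with their positions expressed relative to the wearer.
from dataclasses import dataclass, field
from typing import Dict, Optional


@dataclass
class Entity:
    """A person or object in the environment, positioned relative to the wearer."""
    name: str
    kind: str           # "person" or "object"
    azimuth_deg: float  # direction relative to the wearer, 0 = straight ahead
    distance_m: float   # distance from the wearer in metres


@dataclass
class EnvironmentModel:
    """Everything the device currently knows about the surroundings."""
    entities: Dict[str, Entity] = field(default_factory=dict)

    def add(self, entity: Entity) -> None:
        self.entities[entity.name] = entity

    def remove(self, name: str) -> None:
        self.entities.pop(name, None)

    def find(self, name: str) -> Optional[Entity]:
        return self.entities.get(name)


# Example: a small model of a room with one speaker and one object.
model = EnvironmentModel()
model.add(Entity("Alice", "person", azimuth_deg=30.0, distance_m=2.0))
model.add(Entity("door", "object", azimuth_deg=-90.0, distance_m=3.5))
print(model.find("Alice"))
```

In this sketch, a cloud template would simply be a serialised `EnvironmentModel` that the processor unit loads and then updates.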
The first positioning unit is used for performing first positioning on the visually impaired person; the second positioning unit performs second positioning on the speaker and transmits the first positioning information and the second positioning information to the processor unit.
Within the environment model, when the first positioning unit has determined from sound and images the positional relationship, direction angle, distance and so on of the speaker's face relative to the visually impaired person and the social device, the positioning information is sent to the processor unit.
The processor unit is configured to map the positioning information into the environment model and, at the same time, to retrieve the speaker's personal information for the voice unit.
Here, mapping the positioning information into the environment model means using the positioning information to retrieve the relevant person and object information from the environment model, in particular the speaker's personal information, and passing it to the voice unit, so that the visually impaired person learns the speaker's basic information immediately.
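Continuing the `EnvironmentModel` sketch above, one hedged way to picture this mapping step is to match the measured speaker direction against the people already stored in the model; the nearest-direction rule and the `tolerance_deg` threshold are assumptions, not taken from the patent.

```python
# Illustrative only: match the measured speaker direction against known people
# in the EnvironmentModel sketched earlier and return the best candidate.
from typing import Optional


def map_speaker_to_model(model: "EnvironmentModel", azimuth_deg: float,
                         tolerance_deg: float = 15.0) -> Optional[str]:
    """Return the name of the stored person whose direction best matches the
    measured speaker direction, or None if nobody is close enough."""
    best_name, best_error = None, tolerance_deg
    for entity in model.entities.values():
        if entity.kind != "person":
            continue
        error = abs(entity.azimuth_deg - azimuth_deg)
        if error < best_error:
            best_name, best_error = entity.name, error
    return best_name


# Usage: the second positioning unit reports a voice at roughly 28 degrees.
print(map_speaker_to_model(model, azimuth_deg=28.0))  # -> "Alice"
```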
The voice unit is used for playing the speaker's personal information, which includes the speaker's name.
The voice unit plays the speaker's personal information mainly by voice; alternatively, the processor can present it in a readable form such as braille through the touch device described later.
The storage unit is used for storing the environment model, the speaker's personal information and the utterance content.
When the environment model unit builds the environment model for the first time, the person and object information corresponding to the model can be stored in the storage unit, so that the speaker's personal information can be matched immediately.
Through the arrangement of the environment model unit, the first positioning unit, the second positioning unit, the voice unit, the processor unit and the storage unit, visually impaired people can identify the speaker quickly and effectively, communicate with the speaker while facing them, and shift their 'gaze' to objects in the environment.
Preferably, in the present invention, the social device further comprises an image acquisition unit;
the image acquisition unit acquires direction image positioning information of the speaker, color image information and object image information, and transmits the acquired information to the environment model unit.
The social device is equipped with image acquisition units. Specifically, a first image acquisition unit can be placed at the centre of the social device to position the speaker and objects accurately, while a second and a third image acquisition unit are placed at the far left and far right of the social device to capture the speaker's exact bearing, so that fine positioning of the speaker from the left and right sides supplements the basic positioning.
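The patent does not give the camera geometry. Purely as an illustration of how two laterally mounted units could refine the speaker's bearing, the sketch below uses textbook stereo triangulation; the focal length in pixels and the baseline between the left and right cameras are made-up values.

```python
# Illustrative stereo sketch; focal_px and baseline_m are assumed values.
import math


def refine_azimuth(x_left_px: float, x_right_px: float,
                   focal_px: float = 800.0, baseline_m: float = 0.14):
    """Estimate (distance_m, azimuth_deg) of the speaker from the horizontal
    pixel offsets of their face in the left and right camera images.
    Azimuth 0 means straight ahead; positive means to the wearer's right."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("speaker must be in front of both cameras")
    distance_m = focal_px * baseline_m / disparity      # standard stereo depth
    x_centre = (x_left_px + x_right_px) / 2.0           # offset for a virtual centre camera
    azimuth_deg = math.degrees(math.atan2(x_centre, focal_px))
    return distance_m, azimuth_deg


print(refine_azimuth(x_left_px=60.0, x_right_px=20.0))  # ~(2.8 m, ~2.9 degrees)
```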
Preferably, in the present invention, the social device further comprises a rotation unit,
the voice unit is also used for receiving voice instructions from the visually impaired person, wherein a voice instruction comprises at least one of a direction instruction, a speaker-name instruction and an object-position instruction;
the processor unit is used for parsing the voice instruction into a first rotation instruction and sending the first rotation instruction to the rotation unit;
the rotation unit is used for rotating the social device to face the specified direction according to the first rotation instruction sent by the processor unit.
With the rotation unit, when the visually impaired person wants to actively 'gaze' at another speaker or object, the device can respond immediately and guide the wearer's head to turn accurately to the corresponding direction. Likewise, when the first positioning unit detects that the visually impaired person's position has changed, the processor unit immediately calculates the real-time rotation angle, so that the social device turns accordingly with the assistance of the rotation unit.
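To make the first rotation instruction concrete, here is a hedged sketch, reusing the `EnvironmentModel` from the earlier example, of how a recognised command naming a person or object might be resolved into a signed rotation angle; the command wording and the angle convention are assumptions.

```python
# Illustrative only: turn a recognised voice command into a first rotation
# instruction, i.e. the signed angle (degrees) the rotation unit should apply.


def parse_rotation_command(text: str, model: "EnvironmentModel",
                           current_heading_deg: float) -> float:
    """Resolve a command such as 'face Alice' or 'turn to the door' into a
    relative rotation angle, positive meaning turn to the right."""
    for entity in model.entities.values():
        if entity.name.lower() in text.lower():
            delta = entity.azimuth_deg - current_heading_deg
            # Normalise the angle so the device takes the shorter turn.
            return (delta + 180.0) % 360.0 - 180.0
    raise ValueError(f"no known person or object mentioned in: {text!r}")


# Usage: the wearer says "face Alice" while the device points straight ahead.
print(parse_rotation_command("face Alice", model, current_heading_deg=0.0))  # 30.0
```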
Preferably, in the present invention, the social device further includes a first updating unit and a second updating unit;
The first updating unit is used for quickly constructing a corresponding model and merging it into the environment model when new people appear in the environment model or the people in it change.
The second updating unit is used for updating the environment model when the position of the visually impaired person changes.
The first updating unit adds elements to, or removes them from, the environment model as it is updated; the second updating unit updates the relative positions in the environment model when the visually impaired person, as the primary viewpoint, moves.
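Building again on the earlier `EnvironmentModel` sketch, and under simplified assumptions, the two updating steps might look as follows: the first update adds or removes detected people, and the second re-expresses stored directions after the wearer turns by a known angle (translation of the wearer is left out for brevity).

```python
# Illustrative only: possible behaviour of the two updating units, building on
# the EnvironmentModel and Entity classes sketched earlier.


def first_update(model: "EnvironmentModel", arrivals=(), departures=()):
    """Add newly detected people or objects and drop those that have left."""
    for entity in arrivals:
        model.add(entity)
    for name in departures:
        model.remove(name)


def second_update(model: "EnvironmentModel", wearer_rotation_deg: float):
    """After the wearer turns by wearer_rotation_deg (positive = to the right),
    every stored azimuth shifts by the same amount in the opposite direction."""
    for entity in model.entities.values():
        entity.azimuth_deg = (entity.azimuth_deg - wearer_rotation_deg + 180.0) % 360.0 - 180.0


# Usage: Bob walks in at 60 degrees, then the wearer turns 30 degrees right.
first_update(model, arrivals=[Entity("Bob", "person", 60.0, 1.5)])
second_update(model, wearer_rotation_deg=30.0)
print(model.find("Bob").azimuth_deg)  # 30.0 -- Bob is now 30 degrees to the right
```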
Preferably, in the invention, the social device further comprises a conversion unit and a touch unit;
the conversion unit is used for converting the speaker's utterance content into braille information;
the touch unit is used for displaying the braille information so that the visually impaired person can read it by touch.
The conversion unit and the touch unit mainly convert the speaker's personal information, utterance content and so on, and the converted content allows the visually impaired person to 'read' it by touch.
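The patent does not describe the braille encoding, so the following is only a toy sketch: it maps lowercase letters to Unicode braille cells (Grade 1, letters only), which a refreshable touch surface could then render.

```python
# Toy Grade-1 sketch, not the patent's conversion unit: convert a short
# utterance into Unicode braille cells. Only lowercase ASCII letters and
# spaces are handled.

# Braille dot patterns for a-z as bit masks (dot 1 = bit 0, ..., dot 6 = bit 5).
_LETTER_DOTS = {
    "a": 0x01, "b": 0x03, "c": 0x09, "d": 0x19, "e": 0x11, "f": 0x0B,
    "g": 0x1B, "h": 0x13, "i": 0x0A, "j": 0x1A, "k": 0x05, "l": 0x07,
    "m": 0x0D, "n": 0x1D, "o": 0x15, "p": 0x0F, "q": 0x1F, "r": 0x17,
    "s": 0x0E, "t": 0x1E, "u": 0x25, "v": 0x27, "w": 0x3A, "x": 0x2D,
    "y": 0x3D, "z": 0x35,
}


def to_braille(text: str) -> str:
    """Map lowercase letters to Unicode braille patterns; spaces pass through."""
    cells = []
    for ch in text.lower():
        if ch == " ":
            cells.append(" ")
        elif ch in _LETTER_DOTS:
            cells.append(chr(0x2800 + _LETTER_DOTS[ch]))  # U+2800 block offset
    return "".join(cells)


print(to_braille("hello"))  # ⠓⠑⠇⠇⠕
```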
Preferably, in the present invention, the social device further comprises a wireless unit,
the wireless unit is used for receiving a second instruction sent by the mobile terminal, wherein the second instruction comprises at least one of a second rotation instruction, a recording instruction, a conversion instruction and a voice playing instruction;
and the processor unit parses the second instruction into the corresponding operation and controls the relevant units to execute it.
The wireless unit comprises a wireless receiving unit and a wireless transmitting unit, wherein the wireless receiving unit is used for receiving a second instruction transmitted by the mobile terminal; the wireless transmitting unit is used for transmitting the feedback information to the mobile terminal.
The second instruction comprises a second rotation instruction, a recording instruction, a conversion instruction and a voice playing instruction, allowing the mobile terminal to control the social device in a varied and intelligent way.
Further, the second rotation instruction is entered on the mobile terminal by voice, gesture or a similar operation, received by the wireless unit, parsed by the processor unit and sent to the rotation unit for execution, so that the operator's intention is carried out.
In the same way, the social device can also execute a recording instruction, a conversion instruction and a voice playing instruction. The recording instruction makes the social device record the speaker's utterance for later use; the conversion instruction makes the social device convert the recorded speech into braille so that the visually impaired person can 'read' it; and the voice playing instruction makes the social device give a spoken description of all the people and objects within the field of view directly in front of the device, so that the visually impaired person knows what is facing them.
Controlling the social device through a connected mobile terminal makes it intelligent to operate and convenient for visually impaired people to use.
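As a hedged illustration of how the processor unit might route a received second instruction to the relevant sub-unit, here is a small dispatch sketch; the message format (a dict with a "type" field) and the handler actions are assumptions, not the patent's protocol.

```python
# Illustrative only: dispatch a decoded second instruction to the handler for
# the relevant sub-unit.


def handle_second_instruction(instruction: dict) -> str:
    """Dispatch an instruction dict, e.g. {"type": "rotate", "angle": 45}."""
    handlers = {
        "rotate": lambda p: f"rotation unit: turn {p['angle']} degrees",
        "record": lambda p: "voice unit: start recording the speaker",
        "convert": lambda p: "conversion unit: convert recording to braille",
        "play": lambda p: "voice unit: describe people/objects straight ahead",
    }
    try:
        return handlers[instruction["type"]](instruction)
    except KeyError:
        return f"unknown or malformed instruction: {instruction!r}"


# Usage: the mobile terminal sends a second rotation instruction.
print(handle_second_instruction({"type": "rotate", "angle": 45}))
```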
A second aspect of the embodiments of the present invention provides a social method for visually impaired people, executed by the social device for visually impaired people described above:
constructing an environment model of the visually impaired person's surroundings;
determining first positioning information of the visually impaired person and second positioning information of the speaker;
mapping the positioning information into the environment model;
retrieving the speaker's personal information;
playing and displaying the personal information;
storing the environment model, the speaker's personal information and the utterance content.
Preferably, in the invention, the method further comprises: receiving a voice instruction from the visually impaired person, wherein the voice instruction comprises at least one of a direction instruction, a speaker-name instruction and an object-position instruction;
parsing the voice instruction into a first rotation instruction;
and rotating the social device to face the specified direction according to the first rotation instruction.
The rotation is provided so that when the visually impaired person wants to actively gaze at another speaker or object, the device can react immediately and guide the wearer's head to turn accurately to the corresponding direction. Likewise, when the first positioning unit detects that the visually impaired person's position has changed, the processor unit immediately calculates the real-time rotation angle, so that the social device turns accordingly with the assistance of the rotation unit.
Preferably, in the present invention, the method further comprises: receiving a second instruction sent by the mobile terminal, wherein the second instruction includes at least one of a second rotation instruction, a recording instruction, a conversion instruction and a voice playing instruction;
and parsing the second instruction into the corresponding operation and executing it.
The second instruction comprises a second rotation instruction, a recording instruction, a conversion instruction and a voice playing instruction, allowing the mobile terminal to control the social device in a varied and intelligent way.
Further, the second rotation instruction is entered on the mobile terminal by voice, gesture or a similar operation, received by the wireless unit, parsed by the processor unit and sent to the rotation unit for execution, so that the operator's intention is carried out.
In the same way, the social device can also execute a recording instruction, a conversion instruction and a voice playing instruction. The recording instruction makes the social device record the speaker's utterance for later use; the conversion instruction makes the social device convert the recorded speech into braille so that the visually impaired person can 'read' it; and the voice playing instruction makes the social device give a spoken description of all the people and objects within the field of view directly in front of the device, so that the visually impaired person knows what is facing them.
A third aspect of the embodiments of the present invention provides a mobile terminal, which includes a mobile voice module, a mobile operation module and a mobile wireless transmission module;
the mobile voice module is used for collecting the operator's spoken operation information;
the mobile operation module is used for collecting the operator's mobile operation information;
the mobile wireless transmission module is used for transmitting at least one of the spoken operation information and the mobile operation information to the social device for visually impaired people, and for receiving feedback information from the social device.
The social device can be controlled by a mobile terminal; the user of the mobile terminal can be the visually impaired person or another person with normal vision. The mobile terminal can be a dedicated terminal, a mobile phone, a tablet or another common communication device. It has its own voice module, operation module and wireless transmitting and receiving module, and can exchange instructions and information with the wireless unit in the social device.
Further, operation input such as voice or gestures is converted into electronic information and sent to the social device, where the processor unit parses it and forwards the parsed result to the corresponding unit for control.
For example, when the mobile voice module receives a rotation voice instruction from the operator, such as the name of the person the operator wants the device to face, the wireless sending module forwards the name to the wireless unit of the social device. After receiving the name instruction, the processor unit works out that person's bearing relative to the visually impaired person and controls the rotation unit to turn the front of the social device, so that the visually impaired person can 'gaze' at that person.
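Under the same assumptions as the earlier sketches (the `EnvironmentModel` and the dict-style instruction format), the end-to-end handling of a forwarded name might look like this; none of the function names come from the patent.

```python
# Illustrative only: resolve a person's name forwarded by the mobile terminal
# into the rotation the device should perform, using the earlier sketches.


def face_person_via_terminal(name: str, model: "EnvironmentModel",
                             current_heading_deg: float) -> dict:
    """Build the instruction the processor unit would pass to the rotation
    unit (or to the voice unit if the person is not in the model)."""
    target = model.find(name)
    if target is None or target.kind != "person":
        return {"type": "play", "text": f"{name} is not in the current environment"}
    delta = (target.azimuth_deg - current_heading_deg + 180.0) % 360.0 - 180.0
    return {"type": "rotate", "angle": round(delta, 1)}


# Usage: the operator says "Alice" into the mobile terminal.
print(face_person_via_terminal("Alice", model, current_heading_deg=0.0))
```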
Besides controlling the social device in this way, the mobile terminal can also trigger operations such as constructing an environment model, realizing intelligent control of the social device.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.

Claims (9)

1. A social device for visually impaired people, characterized by comprising an environment model unit, a first positioning unit, a second positioning unit, a voice unit, a processor unit, a storage unit and a rotation unit:
the environment model unit is used for constructing an environment model of the visually impaired person's surroundings;
the first positioning unit is used for performing first positioning on the visually impaired person; the second positioning unit performs second positioning on the speaker and transmits the first positioning and the second positioning to the processor unit;
the processor unit is configured to map the first positioning and the second positioning into the environment model and, at the same time, to retrieve the speaker's personal information for the voice unit;
the voice unit is used for playing the speaker's personal information, wherein the personal information comprises the speaker's name;
the voice unit is also used for receiving voice instructions from the visually impaired person, wherein a voice instruction comprises at least one of a direction instruction, a speaker-name instruction and an object-position instruction;
the processor unit is used for parsing the voice instruction into a first rotation instruction and sending the first rotation instruction to the rotation unit;
the rotation unit is used for rotating the social device to face the specified direction according to the first rotation instruction sent by the processor unit;
the storage unit is used for storing the environment model, the speaker's personal information and the utterance content;
the dual positioning, combined with the constructed environment model, locates the environment in which the visually impaired person is situated and ensures that the visually impaired person maintains a normal 'gaze' during social interaction; and the connection of a mobile terminal to the social device makes the social device intelligent and convenient to use.
2. The social device for visually impaired people of claim 1, wherein the social device further comprises an image acquisition unit, the image acquisition unit acquires direction image positioning information of the speaker, color image information and object image information, and transmits the acquired information to the environment model unit.
3. The social device for visually impaired people of claim 1, wherein the social device further comprises a first updating unit and a second updating unit;
the first updating unit is used for quickly constructing a corresponding model and merging it into the environment model when new people appear in the environment model or the people in it change;
the second updating unit is used for updating the environment model when the position of the visually impaired person changes.
4. The social device for visually impaired people of claim 1, wherein the social device further comprises a conversion unit and a touch unit;
the conversion unit is used for converting the speaker's utterance content into braille information;
the touch unit is used for displaying the braille information so that the visually impaired person can read it by touch.
5. The social device for visually impaired people of claim 1, wherein the social device further comprises a wireless unit, the wireless unit is used for receiving a second instruction sent by the mobile terminal, and the second instruction comprises at least one of a second rotation instruction, a recording instruction, a conversion instruction and a voice playing instruction;
the processor unit parses the second instruction into the corresponding operation and controls the relevant units to execute it.
6. A social method for visually impaired people, characterized in that the social device for visually impaired people of any one of claims 1-5 executes the following method:
constructing an environment model of the visually impaired person's surroundings;
determining first positioning information of the visually impaired person and second positioning information of the speaker;
mapping the positioning information into the environment model;
retrieving the speaker's personal information;
playing and displaying the personal information;
storing the environment model, the speaker's personal information and the utterance content.
7. The social method for visually impaired people of claim 6, further comprising:
receiving a voice instruction from the visually impaired person, wherein the voice instruction comprises at least one of a direction instruction, a speaker-name instruction and an object-position instruction;
parsing the voice instruction into a first rotation instruction;
and rotating the social device to face the specified direction according to the first rotation instruction.
8. The social method for visually impaired people of claim 7, further comprising:
receiving a second instruction sent by the mobile terminal, wherein the second instruction comprises at least one of a second rotation instruction, a recording instruction, a conversion instruction and a voice playing instruction;
and parsing the second instruction into the corresponding operation and executing it.
9. A mobile terminal, characterized in that it is used with the social device for visually impaired people of any one of claims 1-5 and comprises a mobile voice module, a mobile operation module and a mobile wireless transmission module;
the mobile voice module is used for collecting an operator's spoken operation instruction, wherein the spoken instruction comprises at least one of a direction instruction, a speaker-name instruction and an object-position instruction;
the mobile operation module is used for collecting the operator's mobile operation information;
the mobile wireless transmission module is used for transmitting at least one of the spoken operation instruction and the mobile operation information to the social device for visually impaired people, and for receiving feedback information from the social device.
CN202011344067.9A 2020-11-26 2020-11-26 Social contact device and method for visually impaired people and mobile terminal Active CN112669679B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011344067.9A CN112669679B (en) 2020-11-26 2020-11-26 Social contact device and method for visually impaired people and mobile terminal


Publications (2)

Publication Number Publication Date
CN112669679A CN112669679A (en) 2021-04-16
CN112669679B (en) 2023-08-15

Family

ID=75403652

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011344067.9A Active CN112669679B (en) 2020-11-26 2020-11-26 Social contact device and method for visually impaired people and mobile terminal

Country Status (1)

Country Link
CN (1) CN112669679B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113274257A (en) * 2021-05-18 2021-08-20 北京明略软件系统有限公司 Intelligent visual impairment guiding method and system, electronic equipment and storage medium


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050098147A (en) * 2004-04-06 2005-10-11 에스케이 텔레콤주식회사 Road guidance apparatus for the blind
CN104427960A (en) * 2011-11-04 2015-03-18 马萨诸塞眼科耳科诊所 Adaptive visual assistive device
US9429446B1 (en) * 2015-03-16 2016-08-30 Conley Searle Navigation device for the visually-impaired
CA2898387A1 (en) * 2015-07-27 2017-01-27 Alexander M. Deans Hand-held portable navigation system for visually impaired persons
CN109764889A (en) * 2018-12-06 2019-05-17 深圳前海达闼云端智能科技有限公司 Blind guiding method and device, storage medium and electronic equipment
FR3089785A1 (en) * 2018-12-17 2020-06-19 Pierre Briand Medical device to aid in the perception of the environment for blind or visually impaired users
WO2020128173A1 (en) * 2018-12-17 2020-06-25 Pierre Briand Medical device for improving environmental perception for blind or visually impaired users
CN110478206A (en) * 2019-09-10 2019-11-22 李少阳 A kind of intelligent blind guiding system and equipment
CN111144287A (en) * 2019-12-25 2020-05-12 Oppo广东移动通信有限公司 Audio-visual auxiliary communication method, device and readable storage medium

Also Published As

Publication number Publication date
CN112669679A (en) 2021-04-16


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant