CN201440733U - Mobile speech communication terminal suitable for person with language barrier - Google Patents

Mobile speech communication terminal suitable for person with language barrier

Info

Publication number
CN201440733U
CN201440733U (application numbers CN2009201663228U / CN200920166322U)
Authority
CN
China
Prior art keywords
text message
sign language
chip
error correction
call terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2009201663228U
Other languages
Chinese (zh)
Inventor
陈绍君
龚云云
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Coship Electronics Co Ltd
Original Assignee
Shenzhen Coship Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Coship Electronics Co Ltd filed Critical Shenzhen Coship Electronics Co Ltd
Priority to CN2009201663228U priority Critical patent/CN201440733U/en
Application granted granted Critical
Publication of CN201440733U publication Critical patent/CN201440733U/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Links

Images

Landscapes

  • Mobile Radio Communication Systems (AREA)

Abstract

The utility model provides a mobile speech communication terminal suitable for a person with a language barrier. A camera captures sign language video; a sign language parsing chipset parses the video into graphic tracks and matches them against the graphic tracks of standard sign language stored in a memory to obtain the fuzzy text information corresponding to the matched tracks; a text information error-correction chipset corrects the fuzzy text information using the grammar and word-composition parameters stored in the memory; a central processing unit (CPU) converts the corrected text information into an audio signal and sends it to the peer mobile speech communication terminal, and also receives the audio signal sent by the peer terminal, converts it into text information, and outputs it to a display screen. With this mobile speech communication terminal, a person with a language barrier can use sign language to carry on a synchronous and accurate spoken conversation with the user at the other end of the call.

Description

Mobile call terminal suitable for a person with a language barrier
Technical field
The utility model relates to mobile communication technology, and in particular to a mobile call terminal suitable for a person with a language barrier.
Background technology
With the rapid development of mobile communication technology, the mobile call terminal has become the main tool for communication in people's daily lives.
The mobile call terminals in widespread use today pick up the local user's voice signal, convert it into a signal suitable for transmission over the mobile communication network, and transmit it to the peer terminal; the peer mobile call terminal then performs the reverse conversion to regenerate the voice signal and plays it to the peer user. Such terminals are clearly unsuitable for a person with a language barrier.
Chinese patent application No. 200610121174.9 proposes one solution: the person with a language barrier enters the conversation content into the local mobile call terminal as a short message; on one path the local terminal sends the content to the peer terminal as a short message, and on another path a short-message-to-audio converter in the local terminal converts the content into an audio signal that is sent to the peer terminal. If the peer user also has a language barrier, the conversation can be understood by reading the short messages; if the peer user has no impairment, it can be understood by listening to the audio. Although this scheme lets a person with a language barrier hold a call on a mobile call terminal, the text still has to be typed word by word during the call, which is very cumbersome, especially when there is a lot to say.
Chinese patent application No. 200510047240.8 proposes a device that performs data conversion and transmission for persons with a language barrier. The device is built into a television receiver and stores the text information and voice information corresponding to sign language video. When a camera component captures the user's sign language video, the device converts the video into text information or voice information according to the user's request, and can further transmit the converted text or voice to a terminal such as a mobile call terminal or a computer for display or playback. Although converting sign language video into text or voice is more convenient than typing the text directly, the device resides in a television receiver: if the two parties to the call are in different places, both must have a television receiver with this device built in as well as a separate terminal to receive the information, which is still inconvenient, and the exchange is not synchronous. Moreover, the device converts sign language video into text or voice using only a pre-stored correspondence; if the user's signing is non-standard or a few words are signed in the wrong order, the resulting text or voice information is simply wrong, so the conversion accuracy is low.
The utility model content
The utility model provides a mobile call terminal suitable for a person with a language barrier, so that the person can use sign language to carry on a synchronous and accurate conversation with the user at the other end of the call.
The technical solution of the utility model is achieved as follows:
A mobile call terminal suitable for a person with a language barrier, characterized in that the mobile call terminal comprises: a central processing unit, a display screen, a memory, a camera, a sign language parsing chipset, and a text information error-correction chipset;
the camera is connected to the sign language parsing chipset and transmits the captured sign language video to the sign language parsing chipset;
the sign language parsing chipset is connected to the memory and to the text information error-correction chipset respectively; it parses the sign language video captured by the camera into graphic tracks, matches them against the graphic tracks of standard sign language pre-stored in the memory, obtains the fuzzy text information corresponding to the matched graphic tracks, and transmits it to the text information error-correction chipset;
the text information error-correction chipset is connected to the memory and to the central processing unit respectively; it corrects the fuzzy text information using the grammar and word-composition parameters pre-stored in the memory, obtains accurate text information, and transmits it to the central processing unit;
the central processing unit is connected to the camera and to the display screen respectively; after the call connection is established it sends a trigger signal to the camera, converts the accurate text information into an audio signal and sends it to the peer mobile call terminal, and receives the audio signal sent by the peer mobile call terminal, converts it into text information, and outputs it to the display screen.
Preferably, the sign language parsing chipset comprises: a parsing chip and a matching chip;
the parsing chip is connected to the camera and to the matching chip respectively; it parses the sign language video captured by the camera into a plurality of graphic tracks, each comprising a hand-shape outline and a movement trend, and transmits them to the matching chip in order;
the matching chip is connected to the memory and to the text information error-correction chipset respectively; it finds, among the graphic tracks pre-stored in the memory, the graphic tracks that match the plurality of graphic tracks produced by the parsing chip, reads the fuzzy text information corresponding to the matched graphic tracks from the memory, and transmits it to the text information error-correction chipset.
Preferably, the text information error-correction chipset comprises: a parameter reading chip and an error-correction execution chip;
the parameter reading chip is connected to the memory and to the error-correction execution chip respectively; it reads the pre-stored grammar and word-composition parameters from the memory and transmits them to the error-correction execution chip;
the error-correction execution chip is connected to the sign language parsing chipset and to the central processing unit respectively; it corrects the fuzzy text information using the grammar and word-composition parameters, obtains accurate text information, and transmits it to the central processing unit.
Preferably, the mobile call terminal further comprises: a microphone, an audio processor, and a loudspeaker;
the central processing unit is further connected to the microphone and to the audio processor respectively; it receives a mode selection signal input from the display screen; when the mode selection signal indicates that the current call is in the language-barrier operating mode, it continues with the operation of sending the trigger signal to the camera; when the mode selection signal indicates that the current call is in the normal operating mode, it sends a trigger signal to the microphone and the audio processor;
the microphone is connected to the audio processor; it picks up the external voice signal and transmits it to the audio processor;
the audio processor is connected to the loudspeaker; it performs analog-to-digital conversion on the voice signal picked up by the microphone and transmits the result to the peer mobile call terminal, and it receives the audio signal sent by the peer mobile call terminal, performs digital-to-analog conversion, and transmits the result to the loudspeaker for playback.
It can be seen that if the originator of the call is a person with a language barrier, the originating mobile call terminal converts that person's sign language video into text information and then into an audio signal that is transmitted to the peer mobile call terminal; if the recipient of the call is a person with a language barrier, the receiving mobile call terminal converts the received audio signal into text information and displays it; if the originator has no impairment, the originating terminal simply converts the voice signal and transmits it to the peer terminal; and if the recipient has no impairment, the receiving terminal simply converts the audio signal into sound and plays it. The mobile call terminal of the utility model therefore lets a person with a language barrier use sign language to converse synchronously with the peer user. In addition, when converting the sign language video, the terminal not only uses the pre-stored fuzzy text information corresponding to standard sign language but also corrects that fuzzy text information with the pre-stored grammar and word-composition parameters, so the content exchanged by the two parties is more accurate.
Description of drawings
Fig. 1 is a schematic structural diagram of the mobile call terminal suitable for a person with a language barrier according to the utility model.
Embodiment
To make the purpose and advantages of the utility model clearer, the utility model is described in further detail below with reference to the drawings and embodiments.
Fig. 1 is a schematic structural diagram of the mobile call terminal suitable for a person with a language barrier according to the utility model. The mobile call terminal comprises: a memory 11, a camera 12, a sign language parsing chipset 13, a text information error-correction chipset 14, a central processing unit 15, and a display screen 16.
The memory 11 stores the grammar and word-composition parameters as well as the graphic tracks of standard sign language and their corresponding fuzzy text information. Each graphic track of standard sign language comprises a hand-shape outline and a movement trend, and each common word (for example, "we") is stored as one picture; this is, of course, only one concrete example of the graphic tracks and their storage format, and other formats may be adopted in other application scenarios. The grammar and word-composition parameters comprise common rules of grammar and word composition, for example: the word order of a declarative sentence is subject, predicate, object; a lexicon of common words; and so on.
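Purely as an illustration (the utility model does not prescribe any data format), the contents stored in the memory 11 could be organized roughly as follows; the field names and the feature-vector encoding of a graphic track are assumptions introduced here, not part of the utility model.

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class Memory:
    """Hypothetical layout of the contents pre-stored in memory 11."""
    # graphic track of each common word, encoded here as a feature vector
    # combining the hand-shape outline and the movement trend
    standard_tracks: dict[str, np.ndarray] = field(default_factory=dict)
    # fuzzy text information corresponding to each standard graphic track
    fuzzy_text: dict[str, str] = field(default_factory=dict)
    # grammar rule: the word order of a declarative sentence
    declarative_order: tuple[str, str, str] = ("subject", "predicate", "object")
    # lexicon of common words used by the word-composition rules
    lexicon: set[str] = field(default_factory=set)
```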
The camera 12 is triggered by the central processing unit 15, captures sign language video, and transmits it to the sign language parsing chipset 13.
The sign language parsing chipset 13 parses the sign language video captured by the camera 12 into graphic tracks, matches them against the graphic tracks stored in the memory 11, obtains the fuzzy text information corresponding to the matched graphic tracks, and transmits it to the text information error-correction chipset 14. The sign language parsing chipset 13 comprises: a parsing chip 131 and a matching chip 132.
The parsing chip 131 parses the sign language video captured by the camera 12 into a plurality of graphic tracks, each comprising a hand-shape outline and a movement trend. Because the user pauses briefly between every two common words when signing, the sign language video captured by the camera 12 and transmitted to the parsing chip 131 also contains a pause between every two common words, and the parsing can be performed at these pause points. The parsing chip 131 then transmits the decomposed graphic tracks to the matching chip 132 in order.
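For illustration only, the pause-based segmentation could work roughly as sketched below; the grayscale frame representation, the motion-energy measure, and the thresholds are assumptions, not part of the utility model.

```python
import numpy as np

def split_at_pauses(frames: np.ndarray, motion_threshold: float = 0.01,
                    min_pause_frames: int = 5) -> list[np.ndarray]:
    """Split a sign-language clip into per-word segments at pause points.

    frames: array of shape (num_frames, height, width), grayscale video.
    A 'pause' is a run of frames whose inter-frame motion energy stays
    below motion_threshold for at least min_pause_frames frames.
    """
    # mean absolute inter-frame difference as a crude motion-energy measure
    motion = np.abs(np.diff(frames.astype(np.float32), axis=0)).mean(axis=(1, 2))
    is_still = motion < motion_threshold

    segments, start, still_run = [], 0, 0
    for i, still in enumerate(is_still):
        still_run = still_run + 1 if still else 0
        if still_run == min_pause_frames:          # a pause ends the current word
            if i + 1 - min_pause_frames > start:
                segments.append(frames[start:i + 1 - min_pause_frames])
            start = i + 1
    if start < len(frames):
        segments.append(frames[start:])            # last word runs to the end
    return segments
```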
The matching chip 132 finds, among the graphic tracks stored in the memory 11, the graphic track that matches each of the graphic tracks produced by the parsing chip 131, then reads the fuzzy text information corresponding to the matched graphic tracks from the memory 11 and transmits it to the text information error-correction chipset 14. When determining which graphic track in the memory 11 matches a graphic track decomposed by the parsing chip 131, the matching chip 132 first applies the principle of exact matching; if no exact match is found, it applies the principle of nearest matching and takes the closest stored graphic track as the match.
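Again for illustration only, the exact-match-then-nearest-match rule could look like the following sketch; representing each graphic track as a fixed-length feature vector and using Euclidean distance are assumptions.

```python
import numpy as np

def match_track(track: np.ndarray, stored: dict[str, np.ndarray]) -> str:
    """Return the common word whose stored graphic track matches `track`.

    Exact matching is tried first; if it fails, the nearest stored track
    (smallest Euclidean distance) is taken as the match.
    """
    for word, ref in stored.items():
        if np.array_equal(track, ref):        # principle of exact matching
            return word
    # principle of nearest matching
    return min(stored, key=lambda w: float(np.linalg.norm(track - stored[w])))
```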
The text information error-correction chipset 14 corrects the fuzzy text information using the grammar and word-composition parameters stored in the memory 11, obtains accurate text information, and transmits it to the central processing unit 15. The text information error-correction chipset 14 comprises: a parameter reading chip 141 and an error-correction execution chip 142.
The parameter reading chip 141 reads the grammar and word-composition parameters from the memory 11.
The error-correction execution chip 142 uses the grammar and word-composition parameters read by the parameter reading chip 141 to correct the fuzzy text information, obtains accurate text information, and transmits it to the central processing unit 15. For example, if the user swaps the positions of the object and the predicate when signing, the error-correction execution chip 142 can restore the correct positions of the two words according to the grammar rules; if the user additionally signs a word in a non-standard way, so that the sign language parsing chipset 13 resolves it into a different word that makes the whole sentence incoherent, the error-correction execution chip 142 can use word association over the lexicon of common words to pick out the correct word, thereby correcting the errors in the fuzzy text information.
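A minimal sketch of such rule-based correction follows; the tagged word sequence, the use of edit distance to stand in for word association, and the small lexicon are all assumptions made for illustration.

```python
def correct(words: list[str], tags: list[str], lexicon: set[str]) -> list[str]:
    """Correct a fuzzy word sequence with grammar and word-composition rules.

    words: words produced by the sign-language parsing chipset.
    tags:  grammatical role of each word ("subject" / "predicate" / "object").
    Grammar rule: a declarative sentence is ordered subject, predicate, object.
    Word-composition rule: a word outside the lexicon of common words is
    replaced by the closest common word.
    """
    order = {"subject": 0, "predicate": 1, "object": 2}
    # reorder according to the declarative word-order rule (stable sort)
    reordered = [w for w, _ in sorted(zip(words, tags),
                                      key=lambda p: order.get(p[1], 3))]

    def distance(a: str, b: str) -> int:      # plain Levenshtein distance
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                               prev[j - 1] + (ca != cb)))
            prev = cur
        return prev[-1]

    # replace any out-of-lexicon word by its nearest common word
    return [w if w in lexicon else min(lexicon, key=lambda c: distance(w, c))
            for w in reordered]
```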
After the call connection is established, the central processing unit 15 triggers the camera 12; it converts the accurate text information into an audio signal and sends it to the peer mobile call terminal, and it receives the audio signal sent by the peer mobile call terminal, converts it into text information, and outputs it to the display screen 16.
So that the mobile call terminal of the utility model is not only suitable for a person with a language barrier but also retains the ordinary call function, it further comprises the microphone 17, audio processor 18, and loudspeaker 19 found in a common mobile call terminal. The microphone 17 picks up the external voice signal and transmits it to the audio processor 18; the audio processor 18 performs analog-to-digital conversion on the voice signal picked up by the microphone and transmits it to the peer mobile call terminal, and performs digital-to-analog conversion on the audio signal sent by the peer mobile call terminal and transmits it to the loudspeaker 19 for playback. The terminal therefore has two operating modes, a normal operating mode and a language-barrier operating mode, and the display screen 16 can present a mode selection interface to the user. After receiving the mode selection signal input by the user, the central processing unit 15 continues with the operation of triggering the camera 12 if the signal indicates the language-barrier operating mode, or triggers the microphone 17 and the audio processor 18 if the signal indicates the normal operating mode.
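The mode dispatch performed by the central processing unit 15 can be pictured roughly as follows; the signal names and the callable used to stand in for the trigger lines are hypothetical.

```python
from enum import Enum, auto

class Mode(Enum):
    LANGUAGE_BARRIER = auto()   # converse by sign language
    NORMAL = auto()             # converse by voice

def on_call_established(mode: Mode, trigger) -> None:
    """Dispatch the trigger signal according to the selected operating mode.

    `trigger` is a callable standing in for the CPU's trigger lines,
    e.g. trigger("camera") or trigger("microphone").
    """
    if mode is Mode.LANGUAGE_BARRIER:
        trigger("camera")              # camera 12 starts capturing sign language
    else:
        trigger("microphone")          # microphone 17 picks up the voice signal
        trigger("audio_processor")     # audio processor 18 performs A/D conversion
```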
The sign language parsing chipset 13 and the text information error-correction chipset 14 of the utility model can be implemented with common physical components such as integrated chips; once the pre-programmed algorithm routines are burned into these components, they can operate in the manner described in the utility model. Unlike a common mobile call terminal, the mainboard of the mobile call terminal of the utility model reserves mounting positions for the sign language parsing chipset 13 and the text information error-correction chipset 14 at design time, and their connections to the other components are completed by the wiring design.
The working process of the mobile call terminal provided by the utility model is described below, taking a call in which both parties have a language barrier as an example; both parties therefore set the operating mode of their mobile call terminals to the language-barrier operating mode. For brevity, the side that originates the call is called the originator's terminal and the side that receives the call is called the receiving terminal. The working process comprises the following steps (an illustrative sketch of the flow follows the list):
1) The camera of the originator's terminal captures the sign language video of the person with a language barrier;
2) The sign language parsing chipset of the originator's terminal parses the sign language video into graphic tracks and obtains the fuzzy text information corresponding to the pre-stored graphic tracks in the memory that match them;
3) The text information error-correction chipset of the originator's terminal corrects the fuzzy text information to obtain accurate text information;
4) The central processing unit of the originator's terminal converts the accurate text information into an audio signal and sends it to the receiving terminal;
5) The central processing unit of the receiving terminal converts the audio signal into text information and displays it on the display screen of the receiving terminal.
If one party to the call has a language barrier and the other does not, the party without the impairment sets the operating mode of the mobile call terminal to the normal operating mode, while the party with the language barrier sets it to the language-barrier operating mode.
When the originator is the party with a language barrier, the working process is the same as above except for step 5): the central processing unit of the receiving terminal triggers the audio processor in the receiving terminal to perform digital-to-analog conversion on the received audio signal, which is then played to the unimpaired user through the loudspeaker.
When the originator is the party without an impairment, the working process comprises the following steps:
1) The central processing unit of the originator's terminal triggers the microphone to pick up the voice signal of the unimpaired user's conversation;
2) The audio processor of the originator's terminal performs analog-to-digital conversion on the voice signal picked up by the microphone and sends it to the receiving terminal;
3) The central processing unit of the receiving terminal converts the audio signal sent by the originator's terminal into text information and displays it on the display screen for the person with a language barrier.
In summary, the above are only preferred embodiments of the utility model and are not intended to limit its scope of protection. Any modification, equivalent replacement, or improvement made within the spirit and principles of the utility model shall be included within its scope of protection.

Claims (4)

1. A mobile call terminal suitable for a person with a language barrier, characterized in that the mobile call terminal comprises: a central processing unit, a display screen, a memory, a camera, a sign language parsing chipset, and a text information error-correction chipset;
the camera is connected to the sign language parsing chipset and transmits the captured sign language video to the sign language parsing chipset;
the sign language parsing chipset is connected to the memory and to the text information error-correction chipset respectively; it parses the sign language video captured by the camera into graphic tracks, matches them against the graphic tracks of standard sign language pre-stored in the memory, obtains the fuzzy text information corresponding to the matched graphic tracks, and transmits it to the text information error-correction chipset;
the text information error-correction chipset is connected to the memory and to the central processing unit respectively; it corrects the fuzzy text information using the grammar and word-composition parameters pre-stored in the memory, obtains accurate text information, and transmits it to the central processing unit;
the central processing unit is connected to the camera and to the display screen respectively; after the call connection is established it sends a trigger signal to the camera, converts the accurate text information into an audio signal and sends it to the peer mobile call terminal, and receives the audio signal sent by the peer mobile call terminal, converts it into text information, and outputs it to the display screen.
2. The mobile call terminal as claimed in claim 1, characterized in that the sign language parsing chipset comprises: a parsing chip and a matching chip;
the parsing chip is connected to the camera and to the matching chip respectively; it parses the sign language video captured by the camera into a plurality of graphic tracks comprising hand-shape outlines and movement trends and transmits them to the matching chip in order;
the matching chip is connected to the memory and to the text information error-correction chipset respectively; it finds, among the graphic tracks pre-stored in the memory, the graphic tracks that match the plurality of graphic tracks produced by the parsing chip, reads the fuzzy text information corresponding to the matched graphic tracks from the memory, and transmits it to the text information error-correction chipset.
3. The mobile call terminal as claimed in claim 1, characterized in that the text information error-correction chipset comprises: a parameter reading chip and an error-correction execution chip;
the parameter reading chip is connected to the memory and to the error-correction execution chip respectively; it reads the pre-stored grammar and word-composition parameters from the memory and transmits them to the error-correction execution chip;
the error-correction execution chip is connected to the sign language parsing chipset and to the central processing unit respectively; it corrects the fuzzy text information using the grammar and word-composition parameters, obtains accurate text information, and transmits it to the central processing unit.
4. The mobile call terminal as claimed in claim 1, characterized in that the mobile call terminal further comprises: a microphone, an audio processor, and a loudspeaker;
the central processing unit is further connected to the microphone and to the audio processor respectively; it receives the mode selection signal input from the display screen; when the mode selection signal indicates that the current call is in the language-barrier operating mode, it continues with the operation of sending the trigger signal to the camera; when the mode selection signal indicates that the current call is in the normal operating mode, it sends a trigger signal to the microphone and the audio processor;
the microphone is connected to the audio processor; it picks up the external voice signal and transmits it to the audio processor;
the audio processor is connected to the loudspeaker; it performs analog-to-digital conversion on the voice signal picked up by the microphone and transmits the result to the peer mobile call terminal, and it receives the audio signal sent by the peer mobile call terminal, performs digital-to-analog conversion on it, and transmits the result to the loudspeaker for playback.
CN2009201663228U 2009-07-31 2009-07-31 Mobile speech communication terminal suitable for person with language barrier Expired - Fee Related CN201440733U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009201663228U CN201440733U (en) 2009-07-31 2009-07-31 Mobile speech communication terminal suitable for person with language barrier

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2009201663228U CN201440733U (en) 2009-07-31 2009-07-31 Mobile speech communication terminal suitable for person with language barrier

Publications (1)

Publication Number Publication Date
CN201440733U true CN201440733U (en) 2010-04-21

Family

ID=42545388

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009201663228U Expired - Fee Related CN201440733U (en) 2009-07-31 2009-07-31 Mobile speech communication terminal suitable for person with language barrier

Country Status (1)

Country Link
CN (1) CN201440733U (en)

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9712666B2 (en) 2013-08-29 2017-07-18 Unify Gmbh & Co. Kg Maintaining audio communication in a congested communication channel
US10069965B2 (en) 2013-08-29 2018-09-04 Unify Gmbh & Co. Kg Maintaining audio communication in a congested communication channel
US10248856B2 (en) 2014-01-14 2019-04-02 Toyota Motor Engineering & Manufacturing North America, Inc. Smart necklace with stereo vision and onboard processing
US9578307B2 (en) 2014-01-14 2017-02-21 Toyota Motor Engineering & Manufacturing North America, Inc. Smart necklace with stereo vision and onboard processing
US10024679B2 (en) 2014-01-14 2018-07-17 Toyota Motor Engineering & Manufacturing North America, Inc. Smart necklace with stereo vision and onboard processing
US10360907B2 (en) 2014-01-14 2019-07-23 Toyota Motor Engineering & Manufacturing North America, Inc. Smart necklace with stereo vision and onboard processing
US9915545B2 (en) 2014-01-14 2018-03-13 Toyota Motor Engineering & Manufacturing North America, Inc. Smart necklace with stereo vision and onboard processing
US10024667B2 (en) 2014-08-01 2018-07-17 Toyota Motor Engineering & Manufacturing North America, Inc. Wearable earpiece for providing social and environmental awareness
US10024678B2 (en) 2014-09-17 2018-07-17 Toyota Motor Engineering & Manufacturing North America, Inc. Wearable clip for providing social and environmental awareness
US9922236B2 (en) 2014-09-17 2018-03-20 Toyota Motor Engineering & Manufacturing North America, Inc. Wearable eyeglasses for providing social and environmental awareness
US9576460B2 (en) 2015-01-21 2017-02-21 Toyota Motor Engineering & Manufacturing North America, Inc. Wearable smart device for hazard detection and warning based on image and audio data
US10490102B2 (en) 2015-02-10 2019-11-26 Toyota Motor Engineering & Manufacturing North America, Inc. System and method for braille assistance
US10391631B2 (en) 2015-02-27 2019-08-27 Toyota Motor Engineering & Manufacturing North America, Inc. Modular robot with smart device
US9586318B2 (en) 2015-02-27 2017-03-07 Toyota Motor Engineering & Manufacturing North America, Inc. Modular robot with smart device
US9811752B2 (en) 2015-03-10 2017-11-07 Toyota Motor Engineering & Manufacturing North America, Inc. Wearable smart device and method for redundant object identification
US9677901B2 (en) 2015-03-10 2017-06-13 Toyota Motor Engineering & Manufacturing North America, Inc. System and method for providing navigation instructions at optimal times
US9972216B2 (en) 2015-03-20 2018-05-15 Toyota Motor Engineering & Manufacturing North America, Inc. System and method for storing and playback of information for blind users
US10395555B2 (en) * 2015-03-30 2019-08-27 Toyota Motor Engineering & Manufacturing North America, Inc. System and method for providing optimal braille output based on spoken and sign language
US9898039B2 (en) 2015-08-03 2018-02-20 Toyota Motor Engineering & Manufacturing North America, Inc. Modular smart necklace
US10024680B2 (en) 2016-03-11 2018-07-17 Toyota Motor Engineering & Manufacturing North America, Inc. Step based guidance system
US9958275B2 (en) 2016-05-31 2018-05-01 Toyota Motor Engineering & Manufacturing North America, Inc. System and method for wearable smart device communications
US10561519B2 (en) 2016-07-20 2020-02-18 Toyota Motor Engineering & Manufacturing North America, Inc. Wearable computing device having a curved back to reduce pressure on vertebrae
US10432851B2 (en) 2016-10-28 2019-10-01 Toyota Motor Engineering & Manufacturing North America, Inc. Wearable computing device for detecting photography
US10012505B2 (en) 2016-11-11 2018-07-03 Toyota Motor Engineering & Manufacturing North America, Inc. Wearable system for providing walking directions
US10521669B2 (en) 2016-11-14 2019-12-31 Toyota Motor Engineering & Manufacturing North America, Inc. System and method for providing guidance or feedback to a user
CN107231289A (en) * 2017-04-19 2017-10-03 王宏飞 Information interchange device, information exchanging system and method

Similar Documents

Publication Publication Date Title
CN201440733U (en) Mobile speech communication terminal suitable for person with language barrier
US9111545B2 (en) Hand-held communication aid for individuals with auditory, speech and visual impairments
US20150363165A1 (en) Method For Quickly Starting Application Service, and Terminal
CN102231865B (en) Bluetooth earphone
CN110111787A (en) A kind of semanteme analytic method and server
KR20070060730A (en) Method for transmitting and receipt message in mobile communication terminal
CN201860365U (en) Mobile phone device for deaf-mute
US8036883B2 (en) Electronic device and a system using the same
CN105224601B (en) A kind of method and apparatus of extracting time information
KR20100061582A (en) Apparatus and method for providing emotion expression service in mobile communication terminal
CN103414500A (en) Interactive method between Bluetooth earphone and instant messaging software of terminal and Bluetooth earphone
CN104462058B (en) Character string identification method and device
CN108960158A (en) A kind of system and method for intelligent sign language translation
JP2021150946A (en) Wireless earphone device and method for using the same
CN105842879A (en) Multifunctional blindman's glasses
KR20150055926A (en) Portable terminal and method for determining user emotion status thereof
US20060241935A1 (en) Method for setting main language in mobile terminal and mobile terminal implementing the same
CN108124061A (en) The storage method and device of voice data
CN108920224A (en) Dialog information processing method, device, mobile terminal and storage medium
KR100406901B1 (en) An interpreter using mobile phone
KR101400754B1 (en) System for providing wireless captioned conversation service
CN204795612U (en) Talkback bluetooth headset communication system of function of utensil group
KR100747689B1 (en) Voice-Recognition Word Conversion System
CN105376513A (en) Information transmission method and device
CN112700779A (en) Voice interaction method, system, browser and storage medium

Legal Events

Date Code Title Description
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20100421

Termination date: 20150731

EXPY Termination of patent right or utility model