CN112397070B - Sliding translation AR glasses - Google Patents
- Publication number
- CN112397070B (application CN202110071548.5A)
- Authority
- CN
- China
- Prior art keywords
- sliding
- glasses
- translation
- control unit
- display module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/017—Head mounted
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
- G06F3/0414—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means using force sensing means to determine a position
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04847—Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/40—Processing or translation of natural language
- G06F40/58—Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/02—Casings; Cabinets ; Supports therefor; Mountings therein
- H04R1/025—Arrangements for fixing loudspeaker transducers, e.g. in a box, furniture
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/08—Mouthpieces; Microphones; Attachments therefor
- H04R1/083—Special constructions of mouthpieces
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/225—Feedback of the input speech
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2499/00—Aspects covered by H04R or H04S not otherwise provided for in their subgroups
- H04R2499/10—General applications
- H04R2499/15—Transducers incorporated in visual displaying devices, e.g. televisions, computer displays, laptops
Abstract
The invention provides sliding translation AR glasses comprising a lens frame, a temple frame arranged on the lens frame, an AR near-to-eye display module, a sliding key, a microphone, a loudspeaker and a main control unit. The AR near-to-eye display module covers the lens frame, and the sliding key is arranged on the temple frame; the main control unit is in signal connection with the AR near-to-eye display module, the sliding key, the microphone and the loudspeaker. The glasses solve the technical problem that AR glasses in the related art can neither detect a translation error in time nor cancel a wrong translation.
Description
Technical Field
The invention relates to the field of AR (augmented reality) glasses, in particular to sliding translation AR glasses.
Background
AR glasses are a science-fiction-flavored product first introduced by Google and built on cutting-edge technology; putting them on gives an immersive, movie-like experience. AR glasses can also be regarded as a miniature mobile phone: by tracking the wearer's gaze they judge the user's current state and can launch the corresponding function, and if the user needs to make a call or send a short message, the user only needs to activate Google Voice input.
In the related art, AR glasses offer a language translation function, but that function depends entirely on autonomous translation by the program, and a slip in the speaker's wording or accent easily produces a translation error. The AR glasses cannot detect such an error in time or cancel the wrong translation, which makes it inconvenient for the user to correct it.
Therefore, there is a need to provide new sliding translation AR glasses to solve the above technical problems.
Disclosure of Invention
The invention provides sliding translation AR glasses, which solve the technical problem that AR glasses in the related art can neither detect a translation error in time nor cancel a wrong translation.
In order to solve this technical problem, the sliding translation AR glasses provided by the invention comprise a lens frame, a temple frame arranged on the lens frame, an AR near-to-eye display module, a sliding key, a microphone, a loudspeaker and a main control unit, wherein the AR near-to-eye display module covers the lens frame and the sliding key is arranged on the temple frame; the main control unit is in signal connection with the AR near-to-eye display module, the sliding key, the microphone and the loudspeaker;
the main control unit comprises a memory and a processor, wherein the memory is stored with a computer program, and the computer program realizes the following steps when being executed by the processor:
responding to a touch operation of the sliding key, converting a first voice signal received by the microphone into a text signal and transmitting the text signal to the AR near-to-eye display module for display, so that the user can read the displayed text;
when the user judges that the displayed text is expressed wrongly, responding to a sliding operation of the sliding key, deleting the text signal and muting the loudspeaker;
and when the user judges that the displayed text is expressed accurately, responding to a release operation of the sliding key, translating the text signal into a second voice signal and activating the loudspeaker to play the second voice signal.
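The three responses above amount to a small event-driven state machine. A minimal Python sketch is given below; the class name, the event names, and the `recognize`/`translate` callbacks are illustrative assumptions, not part of the claimed design:

```python
from enum import Enum, auto

class KeyEvent(Enum):
    TOUCH = auto()    # finger touches the sliding key: recognize and display
    SLIDE = auto()    # finger slides backwards: cancel the text, mute output
    RELEASE = auto()  # finger releases the key: translate and speak

class SlideTranslator:
    """Hypothetical controller for the three slide-key operations."""

    def __init__(self, recognize, translate):
        self.recognize = recognize    # speech-to-text callback
        self.translate = translate    # text-to-foreign-speech callback
        self.displayed_text = None    # text shown on the AR display
        self.spoken_output = None     # audio sent to the loudspeaker
        self.speaker_muted = False

    def handle(self, event, audio=None):
        if event is KeyEvent.TOUCH:
            # Convert the first voice signal into a text signal and display it.
            self.displayed_text = self.recognize(audio)
            self.speaker_muted = False
        elif event is KeyEvent.SLIDE:
            # The user judged the text wrong: delete it and mute the speaker.
            self.displayed_text = None
            self.speaker_muted = True
        elif event is KeyEvent.RELEASE:
            # The user confirmed the text: translate it and play it aloud.
            if self.displayed_text is not None and not self.speaker_muted:
                self.spoken_output = self.translate(self.displayed_text)
```

With stub callbacks, a touch followed by a release yields translated speech, while a touch followed by a backward slide suppresses any output.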
Preferably, the computer program when executed by the processor further implements the steps of:
and responding to the touch operation of the sliding key, converting the second voice signal received by the microphone into a text signal, and transmitting the text signal to the AR near-to-eye display module for display.
Preferably, the text signal comprises at least one text message.
Preferably, the microphone is provided on the lens frame.
Preferably, there are a plurality of microphones, distributed in an array on the lens frame.
Preferably, the main control unit further comprises a housing, and the speaker is disposed on the housing.
Preferably, the main control unit is in wired signal connection with the AR near-eye display module, the sliding key, the microphone and the speaker.
Preferably, the main control unit is in wireless signal connection with the AR near-to-eye display module, the sliding key, the microphone and the loudspeaker.
Preferably, the main control unit is a mobile device.
Preferably, the main control unit is in signal connection with an external mobile device.
Compared with the related art, the sliding translation AR glasses provided by the invention have the following beneficial effects:
in the sliding translation AR glasses provided by the invention, in response to a touch operation of the sliding key, a first voice signal received by the microphone is converted into a text signal, and the text signal is transmitted to the AR near-to-eye display module for display; in response to a sliding operation of the sliding key, the text signal is deleted and the loudspeaker is muted.
The first voice signal is the voice signal uttered by the user, so the user can read the corresponding text and obtain the translation result in real time. When the user judges that the text translation is wrong, the text-to-speech translation can be cancelled conveniently; the user can then correct his or her accent and wording and utter the corrected speech again.
In response to a release operation of the sliding key, the text signal is translated into a second voice signal and the loudspeaker is activated to play it.
When the user judges that the text translation is correct, it can be conveniently converted into a spoken translation, which improves translation accuracy and allows translation errors to be noticed and corrected in real time.
Drawings
FIG. 1 is a schematic structural diagram of a sliding translation AR glasses according to a first embodiment of the present invention;
FIG. 2 is an assembly view of the frame and the slide button shown in FIG. 1;
FIG. 3 is a design architecture diagram of the master control unit shown in FIG. 1;
fig. 4 is a schematic structural diagram of sliding translation AR glasses according to a second embodiment of the present invention.
Reference numbers in the figures:
1 - lens frame; 2 - temple frame; 3 - AR near-to-eye display module; 4 - sliding key; 5 - microphone; 6 - loudspeaker; 7 - main control unit;
71 - memory; 72 - processor.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
The invention is further described with reference to the following figures and embodiments.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention provides sliding translation AR glasses.
Referring to figs. 1-3 in combination, in an embodiment of the invention the sliding translation AR glasses comprise a lens frame 1, a temple frame 2 arranged on the lens frame 1, an AR near-to-eye display module 3, a sliding key 4, a microphone 5, a loudspeaker 6 and a main control unit 7, wherein the AR near-to-eye display module 3 covers the lens frame 1 and the sliding key 4 is arranged on the temple frame 2; the main control unit 7 is in signal connection with the AR near-to-eye display module 3, the sliding key 4, the microphone 5 and the loudspeaker 6;
the main control unit 7 comprises a memory 71 and a processor 72, wherein the memory 71 stores a computer program, and the computer program realizes the following steps when executed by the processor 72:
responding to a touch operation of the sliding key 4, converting a first voice signal received by the microphone 5 into a text signal and transmitting the text signal to the AR near-to-eye display module 3 for display, so that the user can read the displayed text;
the first voice signal is the voice signal uttered by the user and may be a voice signal in the user's native language, such as Chinese; the text signal may then include text information in Chinese.
When the user judges that the displayed text is expressed wrongly, responding to a sliding operation of the sliding key 4, deleting the text signal and muting the loudspeaker 6;
and when the user judges that the displayed text is expressed accurately, responding to a release operation of the sliding key 4, translating the text signal into a second voice signal and activating the loudspeaker 6 to play the second voice signal.
The second voice signal is in a language preset by the user, for example English.
It is understood that the first voice signal and the second voice signal are sound signals in two different languages, which may be chosen from languages such as Chinese, English, French, Russian, German and Spanish.
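As an illustration of this constraint, a translation mode can be represented as an ordered pair of distinct language codes drawn from a supported set. The codes and function below are illustrative assumptions only, not part of the patent:

```python
# Hypothetical set of supported language codes (ISO 639-1 style).
SUPPORTED_LANGUAGES = {"zh", "en", "fr", "ru", "de", "es"}

def select_translation_mode(first_lang: str, second_lang: str) -> tuple:
    """Return the (first, second) voice-signal language pair, enforcing
    that the two signals are in different supported languages."""
    for lang in (first_lang, second_lang):
        if lang not in SUPPORTED_LANGUAGES:
            raise ValueError(f"unsupported language: {lang}")
    if first_lang == second_lang:
        raise ValueError("the two voice signals must be in different languages")
    return (first_lang, second_lang)
```

For example, `select_translation_mode("zh", "en")` models a Chinese-speaking user conversing with an English speaker, while a repeated or unknown code is rejected.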
In the sliding translation AR glasses provided by the invention, in response to a touch operation of the sliding key 4, a first voice signal received by the microphone 5 is converted into a text signal, and the text signal is transmitted to the AR near-to-eye display module 3 for display; in response to a sliding operation of the sliding key 4, the text signal is deleted and the loudspeaker 6 is muted.
The first voice signal is the voice signal uttered by the user, so the user can read the corresponding text and obtain the translation result in real time. When the user judges that the text translation is wrong, the text-to-speech translation can be cancelled conveniently; the user can then correct his or her accent and wording and utter the corrected speech again.
In response to a release operation of the sliding key 4, the text signal is translated into a second voice signal and the loudspeaker 6 is activated to play it.
When the user judges that the text translation is correct, it can be conveniently converted into a spoken translation, which improves translation accuracy and allows translation errors to be noticed and corrected in real time.
The computer program when executed by the processor 72 further performs the steps of:
responding to the touch operation of the sliding key 4, converting the second voice signal received by the microphone 5 into a text signal, and transmitting the text signal to the AR near-to-eye display module 3 for display.
In this embodiment, the second voice signal may be an English sound signal and the text signal may include text information in Chinese; the English sound signal is thereby translated into Chinese text.
The text signal comprises at least one text message. It may include text information in Chinese only, or text information in both Chinese and English, which enriches the translation function of the AR glasses.
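The idea that one text signal carries one or several text messages (for instance the Chinese text alone, or Chinese together with its English counterpart) can be sketched as a simple container. The representation below is an illustrative assumption, not the patent's data format:

```python
from dataclasses import dataclass

@dataclass
class TextSignal:
    """A text signal carrying at least one text message, keyed by language."""
    messages: dict  # e.g. {"zh": "你好", "en": "hello"}

    def __post_init__(self):
        if not self.messages:
            raise ValueError("a text signal must carry at least one text message")

    def display_lines(self):
        # One line per language, as sent to the AR near-to-eye display.
        return [f"[{lang}] {text}" for lang, text in self.messages.items()]
```

A monolingual signal yields a single display line, while a bilingual signal yields one line per language.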
Referring again to fig. 1, in this embodiment the microphone 5 is arranged on the lens frame 1. It will be appreciated that in other embodiments the microphone 5 may be located at any other position on the temple frame 2 suitable for picking up sound.
There may be a plurality of microphones 5, distributed in an array on the lens frame 1.
The main control unit 7 further comprises a shell, and the loudspeaker 6 is arranged on the shell.
In this embodiment, the main control unit 7 is connected to the AR near-eye display module 3, the slide button 4, the microphone 5, and the speaker 6 by wired signals.
It is understood that, in other embodiments, the main control unit 7 is in wireless signal connection with the AR near-eye display module 3, the slide key 4, the microphone 5 and the speaker 6.
Referring to fig. 4, in another embodiment the main control unit 7 is in signal connection with an external mobile device, through which the translation modes of the main control unit 7 can be switched and adjusted.
In yet another embodiment, the main control unit 7 itself is a mobile device, such as a portable electronic device like a mobile phone or tablet computer.
It will be understood that the main control unit 7 and the lens frame 1 or temple frame 2 are provided with corresponding batteries to ensure a stable power supply for the various elements.
The working principle of the sliding translation AR glasses provided by the invention is as follows:
When in use, the user wears the glasses and selects a matching translation mode on the main control unit 7 according to the user's native language and the language to be translated.
When the foreign-language speaker begins speaking to the user, that speaker utters a second voice signal;
in response to a touch operation of the sliding key 4, the second voice signal received by the microphone 5 is converted into a text signal and transmitted to the AR near-to-eye display module 3 for display.
In this process the main control unit 7 converts the second voice signal into text in the user's native language and transmits it to the AR near-to-eye display module 3, which displays the text so that the user can read it.
After understanding the text, the user starts to reply to the speaker, uttering a first voice signal;
in response to a touch operation of the sliding key 4, the first voice signal received by the microphone 5 is converted into a text signal and transmitted to the AR near-to-eye display module 3 for display;
the first voice signal is the voice signal uttered by the user and may be in the user's native language, such as Chinese, in which case the text signal may include text information in Chinese.
In response to a sliding operation of the sliding key 4, the text signal is deleted and the loudspeaker 6 is muted;
in response to a release operation of the sliding key 4, the text signal is translated into a second voice signal and the loudspeaker 6 is activated to play it.
In this process the main control unit 7 converts the first voice signal into text and transmits it to the AR near-to-eye display module 3, which displays it; the user reads the text and judges whether it expresses the intended meaning accurately.
If the expression is accurate, the user releases the sliding key 4; the main control unit 7 then translates the text into the foreign language and transmits it to the loudspeaker 6, which plays the speech so that the foreign-language listener can hear it.
If the text is expressed wrongly, the user slides a finger backwards on the sliding key 4 to cancel the text display, and then repeats the speech with corrected wording and delivery until the displayed text meets the user's requirements.
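One reply turn of the working principle above — speak, read the displayed text, then confirm or cancel — can be simulated as a single function. The callback names and the return convention are illustrative assumptions:

```python
def user_turn(recognize, translate, user_confirms, audio):
    """Simulate one reply turn: the user speaks (first voice signal), the
    recognized text is displayed, and the user either confirms it (release:
    translated speech is played) or cancels it (backward slide: no output)."""
    displayed = recognize(audio)                  # touch: speech -> displayed text
    if user_confirms(displayed):                  # user judges the displayed text
        return ("play", translate(displayed))     # release: translate and speak
    return ("cancelled", None)                    # slide: delete text, mute speaker

# One run where the recognized text is accepted, one where it is rejected.
accepted = user_turn(lambda a: "where is the station", lambda t: "audio:" + t,
                     lambda text: True, b"pcm")
rejected = user_turn(lambda a: "where is the satiation", lambda t: "audio:" + t,
                     lambda text: False, b"pcm")
```

In the rejected case nothing reaches the loudspeaker, matching the requirement that a mis-recognized utterance can be withdrawn before any foreign-language speech is played.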
The above description presents only embodiments of the present invention and is not intended to limit its scope; all equivalent structural or process modifications made using the contents of this specification and the accompanying drawings, whether applied directly or indirectly in other related technical fields, fall within the protection scope of the present invention.
Claims (10)
1. Sliding translation AR glasses comprising a lens frame and a temple frame arranged on the lens frame, characterized by further comprising an AR near-to-eye display module, a sliding key, a microphone, a loudspeaker and a main control unit, wherein the AR near-to-eye display module covers the lens frame and the sliding key is arranged on the temple frame; the main control unit is in signal connection with the AR near-to-eye display module, the sliding key, the microphone and the loudspeaker;
the main control unit comprises a memory and a processor, wherein a computer program is stored in the memory, and when the computer program is executed by the processor, the following steps are realized:
responding to a touch operation of the sliding key, converting a first voice signal received by the microphone into a text signal and transmitting the text signal to the AR near-to-eye display module for display, so that the user can read the displayed text;
when the user judges that the displayed text is expressed wrongly, responding to a sliding operation of the sliding key, deleting the text signal and muting the loudspeaker;
and when the user judges that the displayed text is expressed accurately, responding to a release operation of the sliding key, translating the text signal into a second voice signal and activating the loudspeaker to play the second voice signal.
2. The sliding translation AR glasses according to claim 1, wherein the computer program when executed by said processor further performs the steps of:
and responding to the touch operation of the sliding key, converting the second voice signal received by the microphone into a text signal, and transmitting the text signal to the AR near-to-eye display module for display.
3. The sliding translation AR glasses of claim 2, wherein said text signal comprises at least one text message.
4. The sliding translation AR glasses of claim 1, wherein said microphone is disposed on the lens frame.
5. The sliding translation AR glasses according to claim 4, wherein there are a plurality of microphones, distributed in an array on the lens frame.
6. The sliding translation AR glasses of claim 5, wherein the master control unit further comprises a housing, and the speaker is disposed in the housing.
7. The sliding translation AR glasses according to claim 6, wherein the main control unit is in wired signal connection with the AR near-to-eye display module, the sliding button, the microphone and the speaker.
8. The sliding translation AR glasses according to claim 6, wherein the main control unit is in wireless signal connection with the AR near-to-eye display module, the sliding button, the microphone and the speaker.
9. The sliding translation AR glasses of claim 8, wherein the main control unit is a mobile device.
10. The sliding translation AR glasses according to claim 8, wherein the main control unit is in signal connection with an external mobile device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110071548.5A CN112397070B (en) | 2021-01-19 | 2021-01-19 | Sliding translation AR glasses |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112397070A CN112397070A (en) | 2021-02-23 |
CN112397070B (en) | 2021-04-30
Family
ID=74625142
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110071548.5A Active CN112397070B (en) | 2021-01-19 | 2021-01-19 | Sliding translation AR glasses |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112397070B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007020591A2 (en) * | 2005-08-15 | 2007-02-22 | Koninklijke Philips Electronics N.V. | System, apparatus, and method for augmented reality glasses for end-user programming |
CN106055118A (en) * | 2016-05-31 | 2016-10-26 | 邓俊生 | Keyboard with touch recognition, key and recognition method for user operations |
US10334212B2 (en) * | 2017-01-03 | 2019-06-25 | Boe Technology Group Co., Ltd. | Memory auxiliary device and method, spectacle frame and pair of spectacles |
CN110188364A (en) * | 2019-05-24 | 2019-08-30 | 宜视智能科技(苏州)有限公司 | Interpretation method, equipment and computer readable storage medium based on intelligent glasses |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6555272B2 (en) * | 2014-11-12 | 2019-08-07 | 富士通株式会社 | Wearable device, display control method, and display control program |
- 2021-01-19: application CN202110071548.5A granted as patent CN112397070B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN112397070A (en) | 2021-02-23 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |