WO2015081694A1 - Smart Glasses and Control Method Thereof (智能眼镜及其控制方法) - Google Patents

Smart Glasses and Control Method Thereof (智能眼镜及其控制方法)

Info

Publication number
WO2015081694A1
WO2015081694A1 (application PCT/CN2014/081282)
Authority
WO
WIPO (PCT)
Prior art keywords
signal
display
processor
brain wave
smart glasses
Prior art date
Application number
PCT/CN2014/081282
Other languages
English (en)
French (fr)
Inventor
杨久霞
白峰
白冰
Original Assignee
京东方科技集团股份有限公司
北京京东方光电科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司 and 北京京东方光电科技有限公司
Priority to US 14/417,440 (published as US20150379896A1)
Publication of WO2015081694A1

Links

Classifications

    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B21/00Teaching, or communicating with, the blind, deaf or mute
    • G09B21/04Devices for conversing with the deaf-blind
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B21/00Teaching, or communicating with, the blind, deaf or mute
    • G09B21/009Teaching or communicating with deaf persons
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B21/00Teaching, or communicating with, the blind, deaf or mute
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/26Speech to text systems
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/24Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316Modalities, i.e. specific diagnostic methods
    • A61B5/369Electroencephalography [EEG]
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0138Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/014Head-up displays characterised by optical features comprising information/image processing systems
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B2027/0178Eyeglass type
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/06Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
    • G10L2021/065Aids for the handicapped in understanding

Definitions

  • Embodiments of the present invention relate to smart glasses and a control method thereof.
  • Background: Deaf-mute people lack the ability to hear and/or speak because of inherent physical impairments, so they cannot learn the thoughts of others or communicate with them through spoken language, which brings great inconvenience to daily life. Although most deaf-mute people can express what they want to say in sign language, effective communication is impossible with people who do not understand sign language. To overcome hearing defects, hearing-impaired people can wear hearing aids. Although hearing aids help improve hearing, they still have limitations; for example, there is no guarantee that every hearing-impaired person will hear as well as an ordinary person after wearing one, so some hearing-impaired people still have difficulty understanding what others say.
  • Summary of the Invention: According to at least one embodiment, smart glasses are provided, including a lens, a frame, and temples. The lens includes a transparent display configured to perform double-sided display. The frame is provided with a camera and a pickup, respectively configured to acquire a gesture command and a voice signal and convert them into a gesture signal and an audio signal. A brain wave identifier and a processor are disposed on a temple; the brain wave identifier is configured to acquire a brain wave signal of the wearer, and the processor is configured to receive the gesture signal, the audio signal, and the brain wave signal, process them, and send the processing result to the transparent display for the double-sided display.
  • In one example, the processor is configured to generate the processing result in the form of graphic (image-and-text) information.
  • In one example, for each lens, the transparent display includes one display with two display surfaces, the two surfaces being configured for front display and back display, respectively.
  • In one example, for each lens, the transparent display includes two displays, each having one display surface, the two displays being configured for front display and back display, respectively.
  • In one example, the transparent display is a flexible display.
  • In one example, the smart glasses further include a parsing memory connected to the processor, which stores: a database of correspondences between brain wave signals and their indicated content, a database of correspondences between gesture signals and their indicated content, and a database of correspondences between audio signals and their indicated content.
  • In one example, the smart glasses further include a positioning system; the camera is further configured to acquire environment information around the smart glasses and send it to the processor, and the processor is further configured to locate the wearer according to the environment information sent by the camera, in combination with the positioning system.
  • In one example, the pickup is disposed at the nose-pad position of the frame, the brain wave identifier is disposed in the middle of the temple, and the processor is disposed at the tail of the temple.
  • In one example, the positioning system includes a memory pre-stored with location information.
  • In one example, the positioning system includes a ranging device disposed on the temple, configured to sense the distance between the current position and a target position and send the sensing result to the processor.
  • In one example, a data transmission device is further provided on the temple, configured to transmit data to and from external devices.
  • In one example, the smart glasses further include a charging device configured to charge at least one of the transparent display, the camera, the pickup, the brain wave recognizer, and the processor.
  • In one example, the charging device is a solar charging device integrated on the mirror surface of the lens.
  • In one example, the ranging device is selected from the group consisting of: an ultrasonic range finder, an infrared range finder, and a laser range finder.
  • According to at least one embodiment, a control method for the above smart glasses is also provided, comprising a receiving method and an expression method, which can be performed one after the other, simultaneously, or independently.
  • For example, the receiving method includes: S1.1, acquiring the gesture command and the voice signal of the communication object through the camera and the pickup, converting them into a gesture signal and an audio signal, and sending them to the processor; S1.2, recognizing the audio signal and the gesture signal by the processor to convert them into graphic information, and performing a back-side display through the lens to present it to the wearer.
  • For example, the expression method includes: S2.1, acquiring the wearer's brain wave signal through the brain wave identifier, encoding and decoding the brain wave signal to obtain first information, and sending it to the processor; S2.2, converting the first information into graphic information by the processor, and performing a front-side display through the lens to present it to the communication object.
  • In one example, step S1.2 includes: the processor searches for matches for the received gesture signal and audio signal in a database storing the correspondence between gesture signals and their indicated content and a database storing the correspondence between audio signals and their indicated content, and outputs the indicated content in graphic form.
  • In one example, if acquisition of the brain wave signal fails in step S2.1, the method may further include: indicating that an error has occurred through the back-side display of the lens.
  • In one example, step S2.2 includes: the processor searches for a match for the received first information in a database storing the correspondence between brain wave information and its indicated content, and outputs the indicated content in graphic form.
  • In one example, before the front-side display, the graphic information may first be displayed on the back side; after a brain wave signal indicating confirmation is received from the wearer, the graphic information is then displayed on the front side.
  • In one example, in step S1.1 the audio signal is obtained by analog-to-digital conversion of the acquired voice signal by the pickup.
  • In one example, in step S1.1, while the gesture signal and the audio signal are acquired, the surrounding environment information is also acquired by the camera, and the current position is located in combination with the positioning system.
  • In one example, in step S1.1 the environment information acquired by the camera is compared and calculated by the processor against the location information stored in the positioning system.
  • FIG. 1 is a schematic structural diagram of smart glasses provided in an embodiment of the present invention;
  • FIG. 2 is a schematic diagram of the recognition-and-judgment principle of the smart glasses in the expression mode according to an embodiment of the present invention;
  • FIG. 3 is a schematic diagram of the matching-judgment process of the smart glasses in the expression mode according to an embodiment of the present invention;
  • FIG. 4 is a schematic diagram of the principle of the smart glasses in the receiving mode according to an embodiment of the present invention;
  • FIG. 5 is a flow chart of the steps of the control method for the smart glasses provided in an embodiment of the present invention.
  • Detailed Description
  • The composition of the smart glasses according to an embodiment of the present invention is shown in FIG. 1; they include a lens, a frame, and temples.
  • The lens includes a transparent display 10 configured to perform double-sided display. The frame is provided with a camera 11 and a pickup 12, respectively configured to acquire gesture commands and voice signals and, when needed, convert them into gesture signals and audio signals. A brain wave identifier 13 and a processor 14 are disposed on the temples.
  • The brain wave identifier 13 is configured to acquire the wearer's brain wave signal, and the processor 14 is configured to receive the gesture signal, the audio signal, and the brain wave signal, process them, and transmit the processing result to the transparent display for the double-sided display.
  • For example, the processor 14 is built into the tail of the temple on one side, and a module with other functions may be built into the tail of the temple on the other side.
  • The transparent display 10 is configured to display the image information processed by the processor 14 on both sides.
  • In one example, the transparent display 10 includes one display and two display surfaces provided on it, used for the front display and the back display, respectively.
  • In another example, the transparent display 10 includes two displays, each with one display surface, so that there are likewise two display surfaces in total, for the front display and the back display, respectively.
  • The front display and the back display in the embodiments of the present invention refer to two display modes.
  • Because the display is transparent, the displayed content can be observed from both sides of the lens regardless of the display mode.
  • The second example above differs from the first in the number of displays used, as described below.
  • In the first example, each lens uses a single display having two display surfaces; the display itself is transparent, and the two display surfaces correspond to the A side and the B side of the lens, respectively, and are used for the two modes of front display and back display.
  • In the second example, each lens is provided with two displays arranged opposite each other, that is, each lens is composed of two displays; the two displays correspond to the A side and the B side of the lens, respectively, and are likewise used for the front display and the back display.
  • For example, the front display mode and the back display mode correspond to an expression mode and a receiving mode, respectively: the expression mode displays the wearer's thoughts or consciousness through the lens to the ordinary person in front of the glasses, while the receiving mode acquires the voice, gestures, and other information of the ordinary person in front and finally presents them to the wearer through the lens.
  • In the expression mode, the front display of the smart glasses is used; the "front" is the side along the direction of the wearer's line of sight, that is, the side facing the ordinary person (call it the A side).
  • In the receiving mode, the back display of the smart glasses is used; the "back" is the side opposite to the wearer's line of sight, that is, the side displayed to the wearer of the glasses (call it the B side).
  • In one example, the lens of the glasses is spliced from flexible display devices to achieve a curved lens design.
  • In this case, the display area extends along the frame in a curve toward both sides of the user's head, so that when the lenses present a display pattern, a prompt screen appears not only in front of the user but also in display areas on the left and right sides, where the graphic information is likewise presented.
  • In one example, the graphic information is one or more selected from the group consisting of: text information, picture information, and a combination of text and picture information.
  • The camera 11 is disposed on the frame and used to acquire gesture commands, generate gesture signals, and transmit them to the processor 14.
  • In one example, cameras 11 are located at the front of the smart glasses, on both sides near the left and right lenses, and perform all-round detection around the wearer.
  • The camera 11 is used to acquire the gestures of other people in front of the wearer so as to obtain their ideas and purposes, and also to collect and detect the environment and conditions around the user in real time.
  • This information is sent to the processor 14 for processing and is reasoned over and calculated against the internal database; based on the result, combined with, for example, a GPS positioning system, the wearer's position is accurately determined.
  • In one example, the positioning system of the smart glasses includes a GPS locator and a memory pre-stored with environmental information, each connected to the processor 14. In this case, the camera 11 is also configured to monitor the surroundings of the smart glasses, obtain surrounding environment information, and transmit it to the processor 14; the processor 14 compares and calculates this environment information against the environmental information pre-stored in the memory database and locates the current position to obtain positioning information.
  • The positioning information obtained in this way also allows other intelligent terminals to determine the wearer's position, thereby providing the wearer with, for example, the best route to a nearby place, such as the nearest subway station and the shortest route from the current position to that station; other help information can likewise be obtained from the location information on the same principle, and details are not repeated here.
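One way to picture the comparison between camera-acquired environment information and the pre-stored environmental information described above is a best-overlap match of landmark features. This is only a hedged sketch: the set-of-landmarks "fingerprint" representation, the function name, and the location names are all assumptions, not the patent's implementation.

```python
# Hypothetical sketch: compare landmark features seen by the camera with
# location fingerprints pre-stored in the memory and pick the best overlap.

def locate(observed, stored):
    """Return the pre-stored location whose landmark set best overlaps the observation."""
    best, best_score = None, 0
    for name, landmarks in stored.items():
        score = len(observed & landmarks)   # count of shared landmarks
        if score > best_score:
            best, best_score = name, score
    return best                             # None when nothing matches
```

In the text above, this stored-information comparison is combined with a GPS locator; the sketch covers only the database-matching half of that idea.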
  • In one example, the positioning system further includes a distance measuring device that senses the distance between the current position and a target position.
  • In one example, the ranging device is built into the frame and measures, in real time, the wearer's distance from road signs at the location to achieve accurate positioning.
  • In one example, the distance measuring device measures the distance between the wearer and his or her communication object, so as to set the sound pickup range of the pickup 12.
  • the ranging device is selected from the group consisting of: an ultrasonic range finder, an infrared range finder, and a laser range finder.
  • In one example, the pickup 12 is disposed at the nose-pad position of the frame, the brain wave identifier 13 is disposed in the middle of the temple, and the processor 14 is disposed at the tail of the temple.
  • The pickup 12 picks up a speech signal (for example, an analog signal) within a specific range, converts it into a digital signal, and transmits it to the processor 14; the processor 14 converts the signal into graphic information by voice recognition and transmits it to the transparent display 10, where it is then displayed through the back side of the lens.
  • The brain wave recognizer 13 is disposed in the middle of the temple and is very close to the wearer's brain when the glasses are worn.
  • The human brain generates brain wave signals when producing consciousness or ideas; the brain wave recognizer 13 recognizes these signals and reads the brain wave information (that is, the operation instructions) as the wearer thinks.
  • The first information is obtained through a decoding and encoding process and sent to the processor 14; the processor 14 then parses and processes the first information and displays it on the A side, in the form of graphic information, to the wearer's communication object.
  • In one example, the smart glasses include a parsing memory containing a database that stores the correspondence between brain wave information (the first information) and the graphic information characterizing the consciousness or idea of the signal sender (that is, the content indicated by the signal).
  • Based on the received first information, the processor performs a search comparison in the database, determines the indicated content corresponding to the acquired brain wave signal, and displays that content.
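The parsing memory's correspondence databases and the processor's search comparison can be pictured, hypothetically, as simple lookup tables. Every signal code and output phrase below is an invented placeholder; the patent does not specify any encoding.

```python
# Hypothetical correspondence databases: encoded signals -> indicated content.

BRAINWAVE_DB = {"bw:001": "hello", "bw:002": "yes", "bw:003": "no"}
GESTURE_DB = {"gesture:wave": "goodbye", "gesture:thumbs_up": "good"}
AUDIO_DB = {"audio:greeting": "hello"}

def look_up(signal):
    """Search the correspondence databases for the content the signal indicates."""
    for db in (BRAINWAVE_DB, GESTURE_DB, AUDIO_DB):
        if signal in db:
            return db[signal]   # matched: this content goes to the display
    return None                 # no match: the caller may show an error on the B side
```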
  • A data transmission device and a charging device are further disposed on the temples; the data transmission device is used for data transmission with external devices, and the charging device charges at least one of the transparent display 10, the camera 11, the pickup 12, the brain wave identifier 13, and the processor 14 to increase the battery life of the smart glasses.
  • In one example, the charging device is a solar charging device integrated on the surface of either side of the lens.
  • In one example, the data transmission device is built into the tail of the temple on the side where the processor 14 is not provided.
  • In this example, the communication function of the smart glasses provided by the embodiment of the present invention is implemented by the data transmission device; that is, the wearer of the smart glasses is contacted through an RF (radio frequency) system.
  • Voice messages from users of other smart terminals are processed by the processing unit and the artificial intelligence system and presented to the user through the display screen in the B-side display mode; the user's response is then read by brain wave recognition, converted into graphic information, and displayed in the B-side display mode.
  • The processor 14 receives the wearer's confirmation and either displays it to the communication object in the A-side display mode or converts it, for example by the processor, into a voice signal that is sent out through the data transmission device.
  • In one example, the communication function is implemented by WiFi, Bluetooth, or the like. The smart glasses can thus also provide entertainment functions: the wearer can use brain waves to play games through the lenses, access the Internet over WiFi, or exchange data with other devices.
  • the working principle of the smart glasses in the expression mode is as follows.
  • The wearer's brain issues a mind command, generating a brain wave signal, and the brain wave recognizer 13 attempts to recognize it. If recognition fails, "recognition failure" is displayed on the B side of the transparent display 10.
  • After seeing the "recognition failure" feedback on the B side, the wearer generates the brain wave signal again.
  • This process is a recognition-and-judgment process, which only feeds back errors in the reading of information by the brain wave identifier.
  • the schematic diagram is shown in Figure 2.
  • The recognized signal is compared with the brain wave database in the memory; for example, the recognized signal is matched against the information codes in the brain wave database, and whether the comparison passes is judged by the degree of matching. If the comparison passes, the brain wave reading is successful, and the "first information" is obtained through a decoding and encoding process. The first information is then sent to the processor 14, which parses and processes it, performs search matching against, for example, the database of correspondences between brain wave signals and graphic information stored in the parsing memory, and outputs the matched graphic information, that is, displays it through the A side of the transparent display 10.
  • In one example, before the A-side display, the graphic information obtained by matching the brain wave signal against the database may first be displayed on the B side for the wearer to judge;
  • if the graphic information matches the wearer's idea, the wearer generates a confirming brain wave signal.
  • Upon receiving the confirmation, the processor 14 controls the transparent display to switch to the A-side display mode and displays the matched graphic information. If the graphic information displayed on the B side does not match the wearer's intention, the wearer's brain issues a new brain wave signal; the brain wave recognizer receives it and, together with the processor, repeats the previous operations until the result matches the wearer's mind.
  • This process is a matching-judgment process, that is, a judgment of whether the information output by the processor is consistent with the idea the wearer wants to express; the schematic diagram is shown in Figure 3.
  • If it is consistent, the graphic information is displayed on the A side of the glasses; if the processor's result differs from the idea the wearer wants to express, the wearer re-sends the mind command. In this way, the wearer's idea can be expressed more accurately, and effective, accurate communication can be realized.
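The expression-mode flow of Figures 2 and 3 — recognize a brain wave signal, match it against the database, show the candidate to the wearer on the B side, and switch to the A side only after a confirming signal — can be sketched as follows. This is an illustration, not the patent's implementation; all callables are hypothetical stand-ins for the hardware described in the text.

```python
# Illustrative expression-mode loop: retry on recognition failure, confirm
# with the wearer on the B side before displaying on the A side.

def expression_mode(read_brainwave, match_in_db, display_b, display_a, is_confirmation):
    while True:
        signal = read_brainwave()
        content = match_in_db(signal) if signal is not None else None
        if content is None:
            display_b("recognition failure")   # B-side feedback; wearer retries
            continue
        display_b(content)                     # let the wearer judge the match
        if is_confirmation(read_brainwave()):
            display_a(content)                 # confirmed: show to the communication object
            return content
```

A non-confirming signal is simply treated as a new mind command on the next pass, mirroring the "re-issue until it matches" loop in the text.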
  • The working principle of the smart glasses in the receiving mode in an embodiment of the present invention is shown in FIG. 4.
  • The camera 11 and the pickup 12 respectively acquire gesture commands and voice signals and convert them into gesture signals and audio signals, which are then sent to the processor 14 for processing. The processor 14 performs search matching in the databases of correspondences between gesture signals or audio signals and graphic information stored in the parsing memory, determines the idea or intention that the ordinary person issuing the gesture command and the voice signal wants to express to the wearer of the smart glasses, and displays it to the wearer in the form of graphic information through the B side of the transparent display 10, thereby realizing the wearer's reception of external information.
  • For example, the pickup 12 converts the speech signal into an audio signal by analog-to-digital conversion, and the gesture command captured by the camera 11 is converted into a gesture signal by a decoder and an encoder within the processor.
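As a minimal illustration of the analog-to-digital conversion the pickup performs: sample values of the continuous signal are clamped to a fixed range and quantized to integer codes. The patent does not specify a sampling rate or bit depth; the 8-bit (256-level) quantization below is an assumption.

```python
# Minimal A/D quantization sketch: map analog samples in [lo, hi] to
# integer codes in [0, levels - 1]. Parameters are illustrative.

def quantize(samples, levels=256, lo=-1.0, hi=1.0):
    """Quantize analog sample values to integer codes, clamping out-of-range input."""
    span = hi - lo
    return [round((min(hi, max(lo, s)) - lo) * (levels - 1) / span) for s in samples]
```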
  • The smart glasses provided by at least one embodiment of the present invention facilitate communication between a deaf-mute person wearing the smart glasses and ordinary people.
  • On the one hand, the camera and the pickup acquire the gesture commands and voice signals of the ordinary person for the wearer; after recognition and processing by the processor, these are displayed to the wearer through the back side of the lens, for example in the form of graphic information, so that the wearer knows the thoughts and intentions of the ordinary person.
  • On the other hand, the brain wave identifier acquires the wearer's brain wave signal; after parsing and processing by the processor, it is displayed to the ordinary person through the front side of the lens, for example in the form of corresponding graphic information, so that the ordinary person knows the ideas and intentions of the wearer. This overcomes the barrier that prevents effective communication between deaf-mute and ordinary people.
  • An embodiment of the present invention further provides a control method based on any of the above embodiments, comprising steps S1-S2 for controlling the receive display of the smart glasses and steps S3-S4 for controlling the expression display of the smart glasses:
  • The receive display control steps are:
  • S1: acquire the gesture command and the voice signal through the camera and the pickup, convert them into a gesture signal and an audio signal, and send them to the processor;
  • S2: recognize the audio signal and the gesture signal by the processor to convert them into graphic information, and display it on the back side through the lens.
  • The expression display control steps are:
  • S3: acquire the wearer's brain wave signal through the brain wave identifier, encode and decode it to obtain the first information, and send it to the processor;
  • S4: convert the first information into graphic information by the processor and display it on the front side through the lens. It should be noted that in the receive control, S1 and S2 are executed in sequence, and in the expression control, S3 and S4 are executed in sequence; however, the two groups of steps can be executed one after the other, simultaneously, or independently.
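The two control groups described above can be sketched as independent routines that each run their own steps in order, which is why they can be executed one after the other, simultaneously, or separately. All function names here are illustrative assumptions.

```python
# Sketch of the two control groups: receive control (S1 -> S2) and
# expression control (S3 -> S4), each internally sequential.

def receive_control(acquire_inputs, recognize, display_back):
    gesture_sig, audio_sig = acquire_inputs()   # S1: camera + pickup
    info = recognize(gesture_sig, audio_sig)    # S2: processor -> graphic information
    display_back(info)                          # back-side (B) display for the wearer
    return info

def expression_control(read_brainwave, convert, display_front):
    first_info = read_brainwave()               # S3: brain wave identifier
    info = convert(first_info)                  # S4: processor -> graphic information
    display_front(info)                         # front-side (A) display for the other party
    return info
```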
  • In one example, the smart glasses control method is as shown in FIG. 5, where the reception control is performed first and the expression control is performed afterwards (that is, in the order S1-S2-S3-S4).
  • In other examples, there may be only reception control (executing steps S1-S2), only expression control (executing steps S3-S4), or expression control before reception control (that is, execution in the order S3-S4-S1-S2).
  • In one example, step S2 includes: the processor searches for matches for the received gesture signal and audio signal in a database storing the correspondence between gesture signals and their indicated content and a database storing the correspondence between audio signals and their indicated content, and outputs the indicated content in graphic form.
  • In one example, if acquisition of the brain wave signal fails in step S3, the method further includes: indicating that an error has occurred through the back-side display of the lens.
  • In one example, step S4 includes: the processor searches for a match for the received first information in a database storing the correspondence between brain wave information and its indicated content, and outputs the indicated content in graphic form.
  • In one example, before the front-side display in step S4, the method further includes: displaying the graphic information on the back side of the lens, receiving the brain wave signal indicating confirmation from the wearer, and then performing the front-side display of the graphic information through the lens.
  • In one example, in step S1 the audio signal is obtained by analog-to-digital conversion of the acquired voice signal by the pickup.
  • In one example, in step S1, while the gesture signal and the audio signal are acquired, the surrounding environment information is also acquired by the camera, and the current position is located in combination with the positioning system.
  • In one example, in step S1 the environment information acquired by the camera is compared and calculated by the processor against the location information stored in the positioning system.
  • In the above control method, the recognition, reading, and matching of the gesture signal, the voice signal, and the brain wave signal, the generation and output of the graphic information, its front and back display, and the implementation of other auxiliary functions such as positioning of the wearer and data transmission with external systems can all refer to the related function descriptions of the aforementioned smart glasses.
  • By wearing the smart glasses and using the control method, deaf-mute people can communicate with the ordinary people around them. The collection and conversion of gesture commands, voice signals, and brain wave signals are realized mainly through the camera and devices or modules with voice recognition and brain wave recognition; the obtained signals are processed by the processor and displayed in the form of graphic information. Since the lenses of the smart glasses are transparent displays capable of double-sided display, effective communication between deaf-mute and ordinary people can be realized, improving on the accuracy of existing methods such as sign language expression.

Abstract

Smart glasses and a control method therefor. The smart glasses include lenses, a frame and temples. Each lens includes a transparent display (10) capable of double-sided display. A camera (11) and a pickup (12) are provided on the frame to acquire gesture commands and voice signals, respectively. A brain wave recognizer (13) and a processor (14) are provided on the temples; the brain wave recognizer (13) acquires brain wave signals, and the processor (14) receives and processes the gesture commands, voice signals and brain wave signals. The smart glasses can convert external information into graphic-text information visible to the wearer, and can also present, in graphic-text form, speech the wearer is unable to voice, enabling, for example, barrier-free communication between deaf-mute people and ordinary people.

Description

Smart Glasses and Control Method Thereof

Technical Field
Embodiments of the present invention relate to smart glasses and a control method thereof.

Background
Owing to inherent physiological impairments, deaf-mute people lack the ability to hear and/or speak. They therefore cannot learn what others are thinking, nor communicate with them through spoken language, which causes great inconvenience in daily life. Although most deaf-mute people can express what they want to say through sign language, effective communication is still impossible with people who do not understand sign language. To overcome hearing impairments, people with impaired hearing can wear hearing aids. Although hearing aids help hearing-impaired people improve their hearing, they still have limitations: they cannot guarantee that every wearer will hear as well as an ordinary person, so some hearing-impaired people still have difficulty understanding what others say.

Summary
According to at least one embodiment of the present invention, there are provided smart glasses comprising: lenses, a frame and temples. The lenses include transparent displays configured for double-sided display. A camera and a pickup are provided on the frame, configured to acquire gesture commands and voice signals, respectively, and to convert them into gesture signals and audio signals. A brain wave recognizer and a processor are provided on the temples; the brain wave recognizer is configured to acquire the wearer's brain wave signals, and the processor is configured to receive the gesture signals, audio signals and brain wave signals, process them, and send the processing result to the transparent display for the double-sided display.
In one example, the processor is configured to generate the processing result in the form of graphic-text information. In one example, for each lens, the transparent display comprises a single display with two display surfaces, the two display surfaces being configured for front-side display and back-side display, respectively.
In one example, for each lens, the transparent display comprises two displays, each with one display surface, the two displays being configured for front-side display and back-side display, respectively. In one example, the transparent display is a flexible display.
In one example, the smart glasses further comprise a parsing memory connected to the processor, which stores: a database of correspondences between brain wave signals and their indicated content, a database of correspondences between gesture signals and their indicated content, and a database of correspondences between audio signals and their indicated content.
In one example, the smart glasses further comprise a positioning system; the camera is further configured to acquire information on the environment around the smart glasses and send it to the processor; and the processor is further configured to locate the wearer according to the environment information sent by the camera in combination with the positioning system.
In one example, the pickup is arranged at the nose-pad position of the frame, the brain wave recognizer is arranged in the middle of a temple, and the processor is arranged at the tail of a temple.
In one example, the positioning system comprises a memory prestoring location information.
In one example, the positioning system comprises a distance-measuring device arranged on a temple, configured to sense the distance between the current position and a target position and send the sensing result to the processor.
In one example, a data transmission device is further provided on the temples, configured to transfer data to and from external devices.
In one example, the smart glasses further comprise a charging device configured to charge at least one of the transparent display, the camera, the pickup, the brain wave recognizer and the processor.
In one example, the charging device is a solar charging device integrated on the surface of the lenses.
In one example, the distance-measuring device is selected from the group consisting of: an ultrasonic rangefinder, an infrared rangefinder and a laser rangefinder.
According to at least one embodiment of the present invention, there is also provided a control method for the above smart glasses, comprising the following receiving method and expressing method, which may be performed in sequence, simultaneously, or independently of each other.
For example, the receiving method includes:
S1.1. acquiring a communication partner's gesture commands and voice signals through the camera and the pickup, converting them into gesture signals and audio signals, and sending them to the processor;
S1.2. recognizing the audio signals and gesture signals respectively by the processor to convert them into graphic-text information, and displaying it on the back side of the lens to present it to the wearer.
For example, the expressing method includes:
S2.1. acquiring the wearer's brain wave signal through the brain wave recognizer, encoding and decoding the brain wave signal to obtain first information, and sending it to the processor;
S2.2. converting the first information into graphic-text information by the processor, and displaying it on the front side of the lens to present it to the communication partner.
In one example, step S1.2 includes: the processor searches, in a database storing correspondences between gesture signals and their indicated content and a database storing correspondences between audio signals and their indicated content, for matches to the received gesture signals and audio signals, and outputs the indicated content in graphic-text form.
In one example, if acquiring the brain wave signal fails in step S2.1, the method may further include: indicating that an error has occurred via the back-side display of the lens.
In one example, step S2.2 includes: the processor searches, in a database storing correspondences between brain wave information and its indicated content, for a match to the received first information, and outputs the indicated content in graphic-text form.
In one example, before the front-side display in step S2.2, the method may further include: displaying the graphic-text information on the back side, and performing the front-side display of the graphic-text information only after receiving a brain wave signal from the wearer indicating confirmation.
In one example, in step S1.1 the audio signal is obtained by the pickup performing analog-to-digital conversion on the acquired voice signal.
In one example, in step S1.1, while the gesture signal and the audio signal are acquired, information on the surrounding environment is also acquired by the camera, and the current position is determined in combination with the positioning system.
In one example, in step S1.1 the processor compares the environment information acquired by the camera with location information stored in the positioning system. Brief Description of the Drawings
Embodiments of the present invention are described in more detail below with reference to the accompanying drawings, so that those of ordinary skill in the art can understand the invention more clearly, in which:
Fig. 1 is a schematic structural view of smart glasses provided in an embodiment of the present invention; Fig. 2 is a schematic diagram of the recognition-judgment process of the smart glasses in the expression mode according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the match-judgment process of the smart glasses in the expression mode according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of the smart glasses in the receiving mode according to an embodiment of the present invention; Fig. 5 is a flow chart of the steps of a control method for smart glasses provided in an embodiment of the present invention. Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the embodiments are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some exemplary embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art on the basis of the described exemplary embodiments, without creative labor, fall within the protection scope of the present invention.
Unless otherwise defined, technical or scientific terms used herein shall have the ordinary meaning understood by those with ordinary skill in the field to which the invention belongs. "First", "second" and similar words used in the specification and claims of this patent application do not denote any order, quantity or importance, but are merely used to distinguish different components. Likewise, "a", "an", "the" and similar words do not denote a limitation of quantity, but rather the presence of at least one. "Comprise", "include" and similar words mean that the elements or objects preceding the word cover the elements or objects listed after the word and their equivalents, without excluding other elements or objects. "Upper", "lower" and the like are only used to indicate relative positional relationships; when the absolute position of the described object changes, the relative positional relationship may change accordingly.
Specific embodiments of the present invention are described in further detail below. For clarity, some features and structures are omitted from the description; this does not mean that the embodiments contain only the described features and structures, and other required features and structures may also be included.
The structure of smart glasses according to an embodiment of the present invention is shown in Fig. 1; the glasses comprise lenses, a frame and temples.
The lenses include a transparent display 10 configured for double-sided display. A camera 11 and a pickup 12 are provided on the frame, configured to acquire gesture commands and voice signals, respectively, and, if required, to convert them into gesture signals and audio signals. A brain wave recognizer 13 and a processor 14 are provided on the temples; the brain wave recognizer 13 is configured to acquire the wearer's brain wave signals, and the processor 14 is configured to receive the gesture signals, audio signals and brain wave signals, process them, and send the processing result to the transparent display for the double-sided display.
It should be noted that in the embodiment shown in Fig. 1 the processor 14 is built into the tail of one temple; in other embodiments, modules with other functions may be built into the tail of the other temple, for example a communication module such as WIFI or Bluetooth, or a positioning module such as GPS.
In an embodiment of the present invention, the transparent display 10 is configured to display, on both sides, the graphic-text information processed by the processor 14.
In a first example of double-sided display, the transparent display 10 comprises a single display with two display surfaces, used for front-side display and back-side display, respectively. In a second example, the transparent display 10 comprises two displays, each with one display surface, so there are likewise two display surfaces in total, for front-side display and back-side display, respectively.
It should be noted that front-side display and back-side display in the embodiments refer to two display modes. In fact, since the display is transparent, both sides of the lens can simultaneously observe mirror-image display content, whichever mode is active. The second example differs from the first only in the number of displays used, as follows. In the first example, each lens uses one display with two display surfaces; the display itself is transparent, and its two display surfaces correspond to side A and side B of the lens, for the back-side and front-side display modes. In the second example, each lens uses two displays arranged back to back, i.e. each lens consists of two displays, which correspond to side A and side B of the lens, likewise for the back-side and front-side display modes. The above descriptions of the transparent display all refer to the structure of one lens; the two lenses may be identical in structure.
In an embodiment of the present invention, the front-side and back-side display modes are an expression mode and a receiving mode, respectively. The expression mode presents the wearer's thoughts or intentions, via the lenses, to the ordinary person in front of them; the receiving mode captures the speech, gestures and other information of the person in front and ultimately presents it to the wearer via the lenses. The expression mode generally uses the front-side display of the smart glasses, the front being the side facing along the wearer's line of sight, i.e. the side shown to the person in front (say, side A); the receiving mode uses the back-side display, the back being the side facing against the wearer's line of sight, i.e. the side shown to the wearer (say, side B).
In an embodiment of the present invention, the lenses use flexible display devices, allowing a curved lens design. In one example, the display area on the frame is curved so that it extends to both sides of the user's head; when the lenses present a display pattern, prompt images appear not only in front of the user but also in the display areas on the left and right.
In at least one embodiment of the present invention, the graphic-text information is one or more kinds of information selected from the group consisting of: text information, picture information, and a combination of text and picture information.
In an embodiment of the present invention, the camera 11 is arranged on the frame to acquire gesture commands, generate gesture signals and send them to the processor 14. In one example, cameras 11 are located directly in front of the smart glasses and on both sides of the left and right lenses, for all-round detection of the wearer's surroundings.
In an embodiment of the present invention, besides acquiring the gesture commands of other people in front of the wearer to learn their thoughts and purposes, the camera 11 also collects and detects, in real time, information about the surrounding environment and conditions, sends the corresponding information to the processor 14 for processing, reasons and computes based on an internal database, and precisely locates the wearer by combining the result with, for example, a GPS positioning system.
In an embodiment of the present invention, the positioning system of the smart glasses comprises a GPS locator and a memory prestoring environment information, each connected to the processor 14. The camera 11 is further configured to monitor the surroundings of the smart glasses, acquire surrounding environment information and send it to the processor 14. The processor 14 compares the environment information with that prestored in the memory database, computes the current position, and obtains positioning information. Positioning information obtained in this way helps other smart terminals determine the wearer's location and provide the best routes to nearby places, for example the nearest subway station and the shortest route to it from the current position; other kinds of assistance can likewise be obtained from the location information on the same principle, which is not repeated here.
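The comparison between camera-derived environment information and prestored location information described above can be sketched as a nearest-match lookup. The sketch below is an illustrative assumption, not the patent's implementation: environment "fingerprints" are plain numeric tuples, and `locate` simply picks the stored location whose fingerprint is most similar; a real system would use image descriptors and a GPS fix to narrow the candidates.

```python
# Hypothetical sketch: locate the wearer by matching environment
# features extracted from the camera against prestored fingerprints.

def similarity(a, b):
    """Inverse of the squared distance between two feature vectors."""
    return 1.0 / (1.0 + sum((x - y) ** 2 for x, y in zip(a, b)))

def locate(observed, fingerprint_db):
    """Return the stored location whose fingerprint best matches."""
    return max(fingerprint_db,
               key=lambda loc: similarity(observed, fingerprint_db[loc]))

db = {
    "subway_entrance": (0.9, 0.1, 0.4),
    "crosswalk":       (0.2, 0.8, 0.5),
}
print(locate((0.85, 0.15, 0.35), db))
```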
In an embodiment of the present invention, the positioning system further comprises a distance-measuring device that senses the distance between the current position and a target position. In one example, the distance-measuring device is built into the frame and measures, in real time, the distance between the wearer and a landmark at their location, for accurate positioning. In another embodiment, the distance-measuring device measures the distance between the wearer and their communication partner, from which the direction and pickup range of the pickup 12 are set. In one example, the distance-measuring device is selected from the group consisting of: an ultrasonic rangefinder, an infrared rangefinder and a laser rangefinder.
In an embodiment of the present invention, the pickup 12 is arranged at the nose-pad position of the frame, the brain wave recognizer 13 is arranged in the middle of a temple, and the processor 14 is arranged at the tail of a temple. In an embodiment of the present invention, the pickup 12 picks up a speech signal (e.g. an analog signal) within a specific range, converts it into a digital signal and transmits it to the processor 14; the processor 14 converts the signal into graphic-text information through voice recognition and transmits it to the transparent display 10, which then displays it via the back side of the lens.
In an embodiment of the present invention, the brain wave recognizer 13 is arranged in the middle of a temple, very close to the wearer's brain when the glasses are worn. The human brain generates a brain wave signal when producing awareness or thoughts; the brain wave recognizer 13 recognizes that signal, reads the brain wave information produced as the wearer thinks (that is, an operation instruction), obtains first information through decoding and encoding, and sends it to the processor 14. The processor 14 then parses and processes the first information and displays it on side A, in graphic-text form, to the wearer's communication partner.
For the brain wave recognition above, in one example the smart glasses include a parsing memory containing a database that stores correspondences between brain wave information (the first information) and the graphic-text information representing the signal sender's awareness or thoughts (i.e. the content indicated by the signal). Based on the received first information, the processor searches and compares in this database to determine the content indicated by the acquired brain wave signal, and displays it.
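The parsing-memory lookup just described can be sketched as a match against stored codes with a minimum degree of match. All names and the bit-string encoding below are illustrative assumptions, not part of the patent: decoded "first information" is treated as a fixed-length code, and content is returned only when the best-matching stored code agrees at enough positions.

```python
# Hypothetical sketch of the parsing-memory lookup: the decoded
# first information is matched against stored brain wave codes, and
# the indicated graphic-text content is returned only if the degree
# of match exceeds a threshold.

def match_degree(a, b):
    """Fraction of positions at which two equal-length codes agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def lookup(first_info, database, threshold=0.8):
    """Return the content of the best-matching stored code, or None."""
    code, content = max(database.items(),
                        key=lambda kv: match_degree(first_info, kv[0]))
    return content if match_degree(first_info, code) >= threshold else None

db = {"10110101": "I would like some water",
      "01100110": "Where is the exit?"}
print(lookup("10110100", db))  # one bit differs -> still a valid match
```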
In an embodiment of the present invention, a data transmission device and a charging device are also provided on the temples. The data transmission device transfers data to and from external devices; the charging device charges at least one of the transparent display 10, the camera 11, the pickup 12, the brain wave recognizer 13 and the processor 14, to extend the battery life of the smart glasses.
In one example, the charging device is a solar charging device integrated on the surfaces of both sides of the lenses.
In one example, the data transmission device is built into the tail of the temple that does not house the processor 14.
In one example, the communication function of the smart glasses provided by the embodiments of the present invention is implemented through the data transmission device, i.e. a wearer of the smart glasses is contacted through an RF (Radio Frequency) system. For example, voice information from users of other smart terminals, after processing by the processing unit and an artificial intelligence system, is conveyed to the user through the display screen in side-B display mode; the user's reply is issued through brain waves, recognized and read, and converted into graphic-text information, which is displayed on the lens in side-B display mode awaiting the user's confirmation of the content; after receiving the wearer's confirmation, the processor 14 conveys it to the communication partner in side-A display mode, or converts it, for example by the processor, into a voice signal and sends it out through the data transmission device.
In another embodiment of the present invention, the communication function is implemented through WIFI, Bluetooth or the like. Accordingly, in one example the smart glasses are also designed with entertainment functions: the wearer can use brain waves to play games on the lenses, surf the internet over WIFI, or transfer data with other devices.
In an embodiment of the present invention, the working principle of the smart glasses in the expression mode is as follows. The wearer's brain issues a mental instruction, producing a brain wave signal; the brain wave recognizer 13 recognizes that signal; if recognition fails, "recognition failed" is displayed on side B of the transparent display 10, and after receiving this feedback via the side-B display the wearer produces the brain wave signal again. This is the recognition-judgment process, which only feeds back errors in the brain wave recognizer's reading of information; its principle is shown schematically in Fig. 2.
Further, after the brain wave recognizer 13 successfully recognizes the brain wave signal, it compares the recognized signal with the brain wave database in the memory, for example matching the recognized signal against the information codes in the brain wave database and judging from the degree of match whether the comparison passes. If it passes, the brain wave has been read successfully, and the "first information" is obtained through decoding and encoding. The first information is sent to the processor 14 for further parsing and processing, for example searching for a match in the database of correspondences between brain wave signals and graphic-text information prestored in the parsing memory, and the matched graphic-text information is output, i.e. displayed on side A of the transparent display 10.
In an embodiment of the present invention, before the graphic-text information obtained by matching the brain wave signal against the database is displayed on side A, it may first be displayed to the wearer on side B for judgment. If the displayed graphic-text information matches the wearer's intention, the wearer produces a confirming brain wave signal, upon receipt of which the processor 14 switches the transparent display to the side-A display mode and shows the matched graphic-text information to people other than the wearer. If the information shown on side B does not match the wearer's intention, the wearer's brain issues a new brain wave signal, and the brain wave recognizer and processor repeat the previous operations until the wearer's intention is matched. This is the match-judgment process, which judges whether the information output by the processor is consistent with the intention the wearer wants to express; its principle is shown schematically in Fig. 3. With this judgment process, if the processed graphic-text information fully matches the wearer's intent, it is displayed on side A of the glasses; if the processed information differs from what the wearer wants to express, the wearer re-sends the mental instruction. The wearer's thoughts can thus be expressed more accurately, achieving effective and precise communication.
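The match-judgment loop of Fig. 3 can be sketched as a simple retry loop. The names below (`express`, `decode`, `confirm`) are illustrative assumptions: decoded graphic-text is previewed on side B, a confirming brain wave signal promotes it to side A, and otherwise the next mental instruction is awaited.

```python
# Hypothetical sketch of the match-judgment loop: preview each decoded
# intent on side B and switch to side A only on confirmation.

def express(intents, decode, confirm):
    """Iterate over brain wave signals until the wearer confirms one.

    intents -- iterable of raw brain wave signals
    decode  -- maps a signal to graphic-text (the database lookup)
    confirm -- returns True if the side-B preview matches the intent
    """
    for signal in intents:
        text = decode(signal)
        # side-B preview is shown to the wearer here
        if confirm(text):
            return text      # switch to side-A display mode
    return None              # the wearer never confirmed

signals = ["sig_a", "sig_b"]
decoded = {"sig_a": "hello", "sig_b": "thank you"}
print(express(signals, decoded.get, lambda t: t == "thank you"))
```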
The working principle of the smart glasses in the receiving mode according to an embodiment of the present invention is shown in Fig. 4.
The camera 11 and the pickup 12 acquire gesture commands and voice signals, respectively, convert them into gesture signals and audio signals, and send them to the processor 14 for processing. The processor 14 searches for matches in the databases stored in the parsing memory that contain the correspondences between gesture signals and audio signals, respectively, and graphic-text information, determines the thoughts or intentions that the ordinary person's gesture commands and voice signals are meant to convey to the wearer of the smart glasses, and displays them, in the form of that graphic-text information, on side B of the transparent display 10 to the wearer, realizing the process by which the wearer receives external information.
In one example, the pickup 12 converts the voice signal into the audio signal through analog-to-digital conversion, while the camera 11 converts the gesture commands into gesture signals through the decoder and encoder in the processor.
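The pickup's analog-to-digital conversion mentioned above can be sketched as uniform quantization of sampled amplitudes. The function name, bit depth and clipping behavior below are illustrative assumptions, not the patent's specification: each analog amplitude in [-1, 1] is mapped to a signed 8-bit PCM code before being sent to the processor.

```python
# Hypothetical sketch of analog-to-digital conversion in the pickup:
# clip each sampled amplitude to [-1, 1] and quantize it to a signed
# 8-bit integer code.

def quantize(sample, bits=8):
    """Map an amplitude in [-1.0, 1.0] to a signed integer code."""
    levels = 2 ** (bits - 1) - 1          # 127 for 8-bit codes
    clipped = max(-1.0, min(1.0, sample))
    return round(clipped * levels)

analog = [0.0, 0.5, -1.0, 1.2]            # 1.2 is clipped to 1.0
print([quantize(s) for s in analog])
```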
As described above, the smart glasses provided by at least one embodiment of the present invention facilitate communication between a deaf-mute person wearing the smart glasses and ordinary people. The camera and pickup acquire the gesture commands and voice signals that an ordinary person directs at the wearer; after recognition and processing by the processor, these are presented to the wearer through the back-side lens, e.g. in the form of graphic-text information, so the wearer learns the ordinary person's thoughts and intentions. Likewise, the brain wave recognizer acquires the wearer's brain wave signals; after parsing and processing by the processor, these are presented to the ordinary person through the front-side lens, e.g. in the form of corresponding graphic-text information, so the ordinary person learns the wearer's thoughts and intentions. The barrier that prevents effective communication between deaf-mute people and ordinary people is thereby overcome.
An embodiment of the present invention further provides a control method based on the smart glasses of any of the above embodiments, comprising steps S1-S2 of controlling the smart glasses to perform receiving display and steps S3-S4 of controlling the smart glasses to perform expression display:
For example, the receiving display control steps are:
S1. acquiring a communication partner's gesture commands and voice signals through the camera and the pickup, converting them into gesture signals and audio signals, and sending them to the processor;
S2. recognizing the audio signals and gesture signals respectively by the processor to convert them into graphic-text information, and displaying it on the back side of the lens.
For example, the expression display control steps are:
S3. acquiring the wearer's brain wave signal through the brain wave recognizer, encoding and decoding the brain wave signal to obtain first information, and sending it to the processor;
S4. converting the first information into graphic-text information by the processor, and displaying it on the front side of the lens. It should be noted that, in the receiving control, S1 and S2 are performed in sequence, and in the expression control, S3 and S4 are performed in sequence; the two groups of steps, however, may be performed simultaneously, in sequence, or independently of each other.
A smart glasses control method according to an embodiment of the present invention is shown in Fig. 5, in which the receiving control is performed first and the expression control afterwards (i.e. the steps are performed in the order S1-S2-S3-S4). It should be understood, however, that in other embodiments there may be only receiving control (performed as S1-S2), only expression control (performed as S3-S4), or expression control followed by receiving control (i.e. performed in the order S3-S4-S1-S2).
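The control flow of Fig. 5 can be sketched as two independent pipelines composed in either order. The function names and the toy recognizer below are illustrative assumptions: `receive` stands for S1-S2 and `express` for S3-S4, each returning which lens side displays its result.

```python
# Hypothetical sketch of the Fig. 5 control flow: the receiving steps
# (S1-S2) and expression steps (S3-S4) are independent pipelines that
# can run in either order or on their own.

def receive(gesture, voice, recognize):
    """S1-S2: convert inputs to signals, recognize, show on back side."""
    return ("back", recognize(gesture, voice))

def express(brain_wave, decode):
    """S3-S4: decode the brain wave signal, show on front side."""
    return ("front", decode(brain_wave))

# Receiving control first, then expression control (S1-S2-S3-S4):
shown = [
    receive("wave_hand", "hello?", lambda g, v: f"{v} [{g}]"),
    express("sig_ok", {"sig_ok": "I am fine"}.get),
]
print(shown)
```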
In one example, step S2 includes: the processor searches, in a database storing correspondences between gesture signals and their indicated content and a database storing correspondences between audio signals and their indicated content, for matches to the received gesture signals and audio signals, and outputs the indicated content in graphic-text form.
In one example, if acquiring the brain wave signal fails in step S3, the method further includes: indicating that an error has occurred via the back-side display of the lens.
In one example, step S4 includes: the processor searches, in a database storing correspondences between brain wave information and its indicated content, for a match to the received first information, and outputs the indicated content in graphic-text form.
In one example, before the front-side lens display of the smart glasses in step S4, the method further includes: displaying the graphic-text information on the back-side lens, and performing the front-side display of the graphic-text information through the lens only after receiving a brain wave signal from the wearer indicating confirmation.
In one example, in step S1, the audio signal is obtained by the pickup performing analog-to-digital conversion on the acquired voice signal. In one example, in step S1, while the gesture signal and the audio signal are acquired, information on the surrounding environment is also acquired by the camera, and the current position is determined in combination with the positioning system.
In one example, in step S1, the processor compares the environment information acquired by the camera with location information stored in the positioning system.
In the smart glasses control method provided by the embodiments of the present invention, the recognition, reading and matching of gesture, voice and brain wave signals, the generation and output of graphic-text information, its front- and back-side display, and the implementation of other auxiliary functions such as locating the wearer and data transmission with external systems may all refer to the corresponding functional descriptions of the smart glasses above.
In summary, in the control method provided by at least one embodiment of the present invention, by wearing the above smart glasses people can communicate with the ordinary people around them: a camera together with devices or modules having voice recognition and brain wave recognition collects and converts gesture commands, voice signals and brain wave signals, and the resulting signals, after processing by the processor, are presented in the form of graphic-text information. Since the smart glasses are transparent displays capable of double-sided display, effective communication between deaf-mute people and the ordinary people around them is achieved, and the accuracy of existing approaches such as sign language expression is improved.
The above embodiments are merely illustrative of the present invention and are not limiting. Those of ordinary skill in the relevant art may make various changes and modifications without departing from the spirit and scope of the present invention; all equivalent technical solutions therefore also fall within the scope of the present invention, whose scope of patent protection shall be defined by the claims.
This application claims priority to Chinese patent application No. 201310652206.8 filed on December 5, 2013, the disclosure of which is incorporated herein by reference in its entirety as part of this application.

Claims

Claims
1. Smart glasses, comprising: lenses, a frame and temples;
the lenses including transparent displays configured for double-sided display;
a camera and a pickup being provided on the frame, configured to acquire gesture commands and voice signals, respectively, and to convert them into gesture signals and audio signals;
a brain wave recognizer and a processor being provided on the temples, the brain wave recognizer being configured to acquire the wearer's brain wave signals, and the processor being configured to receive the gesture signals, audio signals and brain wave signals, process them, and send the processing result to the transparent display for the double-sided display.
2. The smart glasses of claim 1, wherein the processor is configured to generate the processing result in the form of graphic-text information.
3. The smart glasses of claim 1 or 2, wherein, for each lens, the transparent display comprises a single display with two display surfaces, the two display surfaces being configured for front-side display and back-side display, respectively.
4. The smart glasses of claim 1 or 2, wherein, for each lens, the transparent display comprises two displays, each having one display surface, the two displays being configured for front-side display and back-side display, respectively.
5. The smart glasses of any of claims 1-4, wherein the transparent display is a flexible display.
6. The smart glasses of any of claims 1-5, further comprising a parsing memory connected to the processor and storing at least one of the following databases: a database of correspondences between brain wave signals and their indicated content, a database of correspondences between gesture commands and their indicated content, and a database of correspondences between voice signals and their indicated content.
7. The smart glasses of any of claims 1-6, further comprising a positioning system; wherein the camera is further configured to acquire environment information on the surroundings of the smart glasses and send it to the processor, and the processor is further configured to locate the current position according to the environment information sent by the camera in combination with the positioning system.
8. The smart glasses of any of claims 1-7, wherein the pickup is arranged at the nose-pad position of the frame, the brain wave recognizer is arranged in the middle of a temple, and the processor is arranged at the tail of a temple.
9. The smart glasses of claim 7, wherein the positioning system comprises a distance-measuring device arranged on a temple, configured to sense the distance between the current position and a target position and send the sensed data to the processor.
10. The smart glasses of any of claims 1-9, wherein a charging device is further provided on the temples, configured to charge at least one of the transparent display, the camera, the pickup, the brain wave recognizer and the processor.
11. The smart glasses of claim 10, wherein the charging device is a solar charging device integrated on the surface of the transparent display.
12. The smart glasses of claim 9, wherein the distance-measuring device is selected from the group consisting of: an ultrasonic rangefinder, an infrared rangefinder and a laser rangefinder.
13. A control method for smart glasses, the smart glasses comprising: lenses, a frame and temples; the lenses including transparent displays configured for double-sided display;
a camera and a pickup being provided on the frame, configured to acquire gesture commands and voice signals, respectively, and to convert them into gesture signals and audio signals;
a brain wave recognizer and a processor being provided on the temples, the brain wave recognizer being configured to acquire the wearer's brain wave signals, and the processor being configured to receive the gesture signals, audio signals and brain wave signals, process them, and send the processing result to the transparent display for the double-sided display;
the control method comprising the following receiving method and expressing method, which may be performed in sequence, simultaneously, or independently of each other,
wherein the receiving method includes:
S1.1. acquiring a communication partner's gesture commands and voice signals through the camera and the pickup, converting them into gesture signals and audio signals, and sending them to the processor;
S1.2. recognizing and reading the audio signals and gesture signals respectively by the processor to convert them into graphic-text information, and displaying it on the back side of the lens to present it to the wearer;
and the expressing method includes:
S2.1. acquiring the wearer's brain wave signal through the brain wave recognizer, encoding and decoding the brain wave signal to obtain first information, and sending it to the processor;
S2.2. converting the first information into graphic-text information by the processor, and displaying it on the front side of the lens to present it to the communication partner.
14. The control method of claim 13, wherein step S1.2 includes: the processor searching, in a database storing correspondences between gesture signals and their indicated content and a database storing correspondences between audio signals and their indicated content, for matches to the received gesture signals and audio signals, and outputting the indicated content in graphic-text form.
15. The control method of claim 13 or 14, wherein, if acquiring the brain wave signal fails in step S2.1, the method further includes:
indicating that an error has occurred via the back-side display of the lens.
16. The control method of any of claims 13-15, wherein step S2.2 includes: the processor searching, in a database storing correspondences between brain wave information and its indicated content, for a match to the received first information, and outputting the indicated content in graphic-text form.
17. The control method of any of claims 13-16, wherein, before the front-side display in step S2.2, the method further includes:
displaying the graphic-text information on the back side, and performing the front-side display of the graphic-text information only after receiving a brain wave signal from the wearer indicating confirmation.
18. The control method of any of claims 13-17, wherein, in step S1.1, the audio signal is obtained by the pickup performing analog-to-digital conversion on the acquired voice signal.
19. The control method of any of claims 13-18, wherein, in step S1.1, while the gesture signal and the audio signal are acquired, environment information on the surroundings of the smart glasses is also acquired by the camera, and the current position is determined in combination with a positioning system connected to the processor.
20. The control method of claim 19, wherein the processor compares the environment information acquired by the camera with location information stored in the positioning system.
PCT/CN2014/081282 2013-12-05 2014-06-30 Intelligent eyewear and control method thereof WO2015081694A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/417,440 US20150379896A1 (en) 2013-12-05 2014-06-30 Intelligent eyewear and control method thereof

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310652206.8A CN103646587B (zh) 2013-12-05 2013-12-05 一种智能眼镜及其控制方法
CN201310652206.8 2013-12-05

Publications (1)

Publication Number Publication Date
WO2015081694A1 true WO2015081694A1 (zh) 2015-06-11

Family

ID=50251793

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/081282 WO2015081694A1 (zh) 2013-12-05 2014-06-30 智能眼镜及其控制方法

Country Status (3)

Country Link
US (1) US20150379896A1 (zh)
CN (1) CN103646587B (zh)
WO (1) WO2015081694A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106994689A (zh) * 2016-01-23 2017-08-01 鸿富锦精密工业(武汉)有限公司 基于脑电信号控制的智能机器人系统和方法
CN105472256B (zh) * 2016-01-05 2018-09-28 上海斐讯数据通信技术有限公司 拍摄和传输图像的方法、智能眼镜及系统
US11125998B2 (en) * 2014-01-02 2021-09-21 Nokia Technologies Oy Apparatus or method for projecting light internally towards and away from an eye of a user
CN114822172A (zh) * 2022-06-23 2022-07-29 北京亮亮视野科技有限公司 基于ar眼镜的文字显示方法及装置

Families Citing this family (59)

Publication number Priority date Publication date Assignee Title
CN103646587B (zh) * 2013-12-05 2017-02-22 北京京东方光电科技有限公司 一种智能眼镜及其控制方法
CN104951259B (zh) * 2014-03-28 2019-10-18 索尼公司 显示设备及其显示控制方法
CN104065388A (zh) * 2014-07-09 2014-09-24 李永飞 人脑电台
CN104375641B (zh) * 2014-10-27 2017-12-26 联想(北京)有限公司 一种控制方法及电子设备
TW201624469A (zh) * 2014-12-26 2016-07-01 Univ Chienkuo Technology 聽障人士之電子智能溝通眼鏡
KR102311741B1 (ko) * 2015-01-14 2021-10-12 삼성디스플레이 주식회사 머리 장착형 디스플레이 장치
CN106302974B (zh) * 2015-06-12 2020-01-31 联想(北京)有限公司 一种信息处理的方法及电子设备
CN104966433A (zh) * 2015-07-17 2015-10-07 江西洪都航空工业集团有限责任公司 一种辅助聋哑人对话的智能眼镜
DE102015214350A1 (de) * 2015-07-29 2017-02-02 Siemens Healthcare Gmbh Verfahren zu einer Kommunikation zwischen einem medizinischen Netzwerk und einem medizinischen Bedienpersonal mittels einer mobilen Datenbrille, sowie eine mobile Datenbrille
CN105137601B (zh) * 2015-10-16 2017-11-14 上海斐讯数据通信技术有限公司 一种智能眼镜
CN105468140A (zh) * 2015-11-05 2016-04-06 京东方科技集团股份有限公司 佩戴设备、应用设备系统
KR102450803B1 (ko) * 2016-02-11 2022-10-05 한국전자통신연구원 양방향 수화 번역 장치 및 장치가 수행하는 양방향 수화 번역 방법
CN106157750A (zh) * 2016-08-24 2016-11-23 深圳市铁格龙科技有限公司 一种智能聋哑人发音及交流学习眼镜
CN106205293A (zh) * 2016-09-30 2016-12-07 广州音书科技有限公司 用于语音识别和手语识别的智能眼镜
CN106656352B (zh) * 2016-12-27 2020-04-07 广东小天才科技有限公司 一种信息传递方法及装置、可穿戴设备
CN106601075A (zh) * 2017-02-05 2017-04-26 苏州路之遥科技股份有限公司 脑电波输入训练器
US10854110B2 (en) 2017-03-03 2020-12-01 Microsoft Technology Licensing, Llc Automated real time interpreter service
US11861255B1 (en) 2017-06-16 2024-01-02 Apple Inc. Wearable device for facilitating enhanced interaction
CN109425983A (zh) * 2017-08-27 2019-03-05 南京乐朋电子科技有限公司 一种脑电波成像投影眼镜
CN108106665A (zh) * 2017-12-12 2018-06-01 深圳分云智能科技有限公司 一种具有玻璃监测功能的智能穿戴设备
US11435583B1 (en) * 2018-01-17 2022-09-06 Apple Inc. Electronic device with back-to-back displays
CN108198552B (zh) * 2018-01-18 2021-02-02 深圳市大疆创新科技有限公司 一种语音控制方法及视频眼镜
CN110111651A (zh) * 2018-02-01 2019-08-09 周玮 基于体态感知的智能语言交互系统
CN108509034B (zh) * 2018-03-16 2021-05-11 Oppo广东移动通信有限公司 电子装置、信息处理方法及相关产品
CN111954290B (zh) * 2018-03-30 2023-04-18 Oppo广东移动通信有限公司 电子装置、功率调整方法及相关产品
CN108711425A (zh) * 2018-05-03 2018-10-26 华南理工大学 一种基于语音控制的视频输入听觉显示导盲装置及方法
CN108803871A (zh) * 2018-05-07 2018-11-13 歌尔科技有限公司 头戴显示设备中数据内容的输出方法、装置及头戴显示设备
CN110058413A (zh) * 2018-05-23 2019-07-26 王小峰 一种智能穿戴系统
US10908419B2 (en) 2018-06-28 2021-02-02 Lucyd Ltd. Smartglasses and methods and systems for using artificial intelligence to control mobile devices used for displaying and presenting tasks and applications and enhancing presentation and display of augmented reality information
CN109255314B (zh) * 2018-08-30 2021-07-02 Oppo广东移动通信有限公司 信息提示方法、装置、智能眼镜及存储介质
JP7283652B2 (ja) * 2018-10-04 2023-05-30 シーイヤー株式会社 聴覚サポートデバイス
IT201800009607A1 (it) * 2018-10-19 2020-04-19 Andrea Previato Sistema e metodo di ausilio ad utenti con disabilità comunicativa
USD899500S1 (en) 2019-03-22 2020-10-20 Lucyd Ltd. Smart glasses
USD900920S1 (en) 2019-03-22 2020-11-03 Lucyd Ltd. Smart glasses
USD899493S1 (en) 2019-03-22 2020-10-20 Lucyd Ltd. Smart glasses
USD900206S1 (en) 2019-03-22 2020-10-27 Lucyd Ltd. Smart glasses
USD899499S1 (en) 2019-03-22 2020-10-20 Lucyd Ltd. Smart glasses
USD899498S1 (en) 2019-03-22 2020-10-20 Lucyd Ltd. Smart glasses
USD900205S1 (en) 2019-03-22 2020-10-27 Lucyd Ltd. Smart glasses
USD899497S1 (en) 2019-03-22 2020-10-20 Lucyd Ltd. Smart glasses
USD899494S1 (en) 2019-03-22 2020-10-20 Lucyd Ltd. Smart glasses
USD899496S1 (en) 2019-03-22 2020-10-20 Lucyd Ltd. Smart glasses
USD899495S1 (en) 2019-03-22 2020-10-20 Lucyd Ltd. Smart glasses
USD900204S1 (en) 2019-03-22 2020-10-27 Lucyd Ltd. Smart glasses
USD900203S1 (en) 2019-03-22 2020-10-27 Lucyd Ltd. Smart glasses
CN110351631A (zh) * 2019-07-11 2019-10-18 京东方科技集团股份有限公司 聋哑人交流设备及其使用方法
CN112506335B (zh) * 2019-09-16 2022-07-12 Oppo广东移动通信有限公司 头戴式设备及其控制方法、装置、系统和存储介质
USD958234S1 (en) 2019-12-12 2022-07-19 Lucyd Ltd. Round smartglasses having pivot connector hinges
USD954135S1 (en) 2019-12-12 2022-06-07 Lucyd Ltd. Round smartglasses having flat connector hinges
USD955467S1 (en) 2019-12-12 2022-06-21 Lucyd Ltd. Sport smartglasses having flat connector hinges
USD954136S1 (en) 2019-12-12 2022-06-07 Lucyd Ltd. Smartglasses having pivot connector hinges
USD954137S1 (en) 2019-12-19 2022-06-07 Lucyd Ltd. Flat connector hinges for smartglasses temples
USD974456S1 (en) 2019-12-19 2023-01-03 Lucyd Ltd. Pivot hinges and smartglasses temples
CN111046854B (zh) * 2020-01-10 2024-01-26 北京服装学院 一种脑电波外部识别方法、装置及系统
CN111258088A (zh) * 2020-02-25 2020-06-09 厦门明睐科技有限公司 一种脑电波控制的智能眼镜设备及使用方法
US11282523B2 (en) * 2020-03-25 2022-03-22 Lucyd Ltd Voice assistant management
CN111751995A (zh) * 2020-06-11 2020-10-09 重庆工业职业技术学院 一种声音视觉化的单目头戴式ar眼镜装置及其实现方法
CN111787264B (zh) * 2020-07-21 2021-08-10 北京字节跳动网络技术有限公司 一种远程教学的提问方法、装置、提问终端和可读介质
CN115695620A (zh) * 2021-07-22 2023-02-03 所乐思(深圳)科技有限公司 智能眼镜及其控制方法和系统

Citations (9)

Publication number Priority date Publication date Assignee Title
US20090128449A1 (en) * 2007-11-15 2009-05-21 International Business Machines Corporation Augmenting Reality For A User
CN101819334A (zh) * 2010-04-01 2010-09-01 夏翔 多功能电子眼镜
CN202533867U (zh) * 2012-04-17 2012-11-14 北京七鑫易维信息技术有限公司 一种头戴式眼控显示终端
US20120299950A1 (en) * 2011-05-26 2012-11-29 Nokia Corporation Method and apparatus for providing input through an apparatus configured to provide for display of an image
CN103211655A (zh) * 2013-04-11 2013-07-24 深圳先进技术研究院 一种骨科手术导航系统及导航方法
CN103279232A (zh) * 2012-06-29 2013-09-04 上海天马微电子有限公司 一种橱窗互动装置及其互动实施方法
CN103310683A (zh) * 2013-05-06 2013-09-18 深圳先进技术研究院 智能眼镜及基于智能眼镜的语音交流系统及方法
CN103336579A (zh) * 2013-07-05 2013-10-02 百度在线网络技术(北京)有限公司 穿戴式设备的输入方法和穿戴式设备
CN103646587A (zh) * 2013-12-05 2014-03-19 北京京东方光电科技有限公司 一种智能眼镜及其控制方法

Family Cites Families (44)

Publication number Priority date Publication date Assignee Title
US4902120A (en) * 1988-11-22 1990-02-20 Weyer Frank M Eyeglass headphones
JP3289304B2 (ja) * 1992-03-10 2002-06-04 株式会社日立製作所 手話変換装置および方法
US5610678A (en) * 1993-12-30 1997-03-11 Canon Kabushiki Kaisha Camera including camera body and independent optical viewfinder
US6433913B1 (en) * 1996-03-15 2002-08-13 Gentex Corporation Electro-optic device incorporating a discrete photovoltaic device and method and apparatus for making same
US6240392B1 (en) * 1996-08-29 2001-05-29 Hanan Butnaru Communication device and method for deaf and mute persons
EP1027627B1 (en) * 1997-10-30 2009-02-11 MYVU Corporation Eyeglass interface system
US6491394B1 (en) * 1999-07-02 2002-12-10 E-Vision, Llc Method for refracting and dispensing electro-active spectacles
US6510417B1 (en) * 2000-03-21 2003-01-21 America Online, Inc. System and method for voice access to internet-based information
US20020158816A1 (en) * 2001-04-30 2002-10-31 Snider Gregory S. Translating eyeglasses
US7023498B2 (en) * 2001-11-19 2006-04-04 Matsushita Electric Industrial Co. Ltd. Remote-controlled apparatus, a remote control system, and a remote-controlled image-processing apparatus
JP2004199027A (ja) * 2002-10-24 2004-07-15 Seiko Epson Corp 表示装置、及び電子機器
JP2004144990A (ja) * 2002-10-24 2004-05-20 Alps Electric Co Ltd 両面発光型液晶表示モジュール
US7546158B2 (en) * 2003-06-05 2009-06-09 The Regents Of The University Of California Communication methods based on brain computer interfaces
US7120486B2 (en) * 2003-12-12 2006-10-10 Washington University Brain computer interface
US8965460B1 (en) * 2004-01-30 2015-02-24 Ip Holdings, Inc. Image and augmented reality based networks using mobile devices and intelligent electronic glasses
KR100594117B1 (ko) * 2004-09-20 2006-06-28 삼성전자주식회사 Hmd 정보 단말기에서 생체 신호를 이용하여 키를입력하는 장치 및 방법
US20060094974A1 (en) * 2004-11-02 2006-05-04 Cain Robert C Systems and methods for detecting brain waves
US8696113B2 (en) * 2005-10-07 2014-04-15 Percept Technologies Inc. Enhanced optical and perceptual digital eyewear
US11428937B2 (en) * 2005-10-07 2022-08-30 Percept Technologies Enhanced optical and perceptual digital eyewear
US20080144854A1 (en) * 2006-12-13 2008-06-19 Marcio Marc Abreu Biologically fit wearable electronics apparatus and methods
KR100866215B1 (ko) * 2006-12-20 2008-10-30 삼성전자주식회사 뇌파를 이용한 단말의 구동 방법 및 장치
WO2010004698A1 (ja) * 2008-07-11 2010-01-14 パナソニック株式会社 脳波を用いた機器の制御方法および脳波インタフェースシステム
CN100595635C (zh) * 2009-01-14 2010-03-24 长春大学 智能盲人导航眼镜
US8964298B2 (en) * 2010-02-28 2015-02-24 Microsoft Corporation Video display modification based on sensor input for a see-through near-to-eye display
CN102236986A (zh) * 2010-05-06 2011-11-09 鸿富锦精密工业(深圳)有限公司 手语翻译系统、手语翻译装置及手语翻译方法
US9994228B2 (en) * 2010-05-14 2018-06-12 Iarmourholdings, Inc. Systems and methods for controlling a vehicle or device in response to a measured human response to a provocative environment
US20110291918A1 (en) * 2010-06-01 2011-12-01 Raytheon Company Enhancing Vision Using An Array Of Sensor Modules
US8442626B2 (en) * 2010-06-21 2013-05-14 Aleksandrs Zavoronkovs Systems and methods for communicating with a computer using brain activity patterns
US20120078628A1 (en) * 2010-09-28 2012-03-29 Ghulman Mahmoud M Head-mounted text display system and method for the hearing impaired
GB201103200D0 (en) * 2011-02-24 2011-04-13 Isis Innovation An optical device for the visually impaired
US8593795B1 (en) * 2011-08-09 2013-11-26 Google Inc. Weight distribution for wearable computing device
KR20130045471A (ko) * 2011-10-26 2013-05-06 삼성전자주식회사 전자장치 및 그 제어방법
US20170164878A1 (en) * 2012-06-14 2017-06-15 Medibotics Llc Wearable Technology for Non-Invasive Glucose Monitoring
TWI467539B (zh) * 2012-07-20 2015-01-01 Au Optronics Corp 影像顯示的控制方法以及顯示系統
WO2014041871A1 (ja) * 2012-09-12 2014-03-20 ソニー株式会社 画像表示装置及び画像表示方法、並びに記録媒体
US9966075B2 (en) * 2012-09-18 2018-05-08 Qualcomm Incorporated Leveraging head mounted displays to enable person-to-person interactions
US10073201B2 (en) * 2012-10-26 2018-09-11 Qualcomm Incorporated See through near-eye display
JP6094190B2 (ja) * 2012-12-10 2017-03-15 ソニー株式会社 情報処理装置および記録媒体
US9240162B2 (en) * 2012-12-31 2016-01-19 Lg Display Co., Ltd. Transparent display apparatus and method for controlling the same
EP2972678A4 (en) * 2013-03-15 2016-11-02 Interaxon Inc CLOTHING COMPUTER APPARATUS AND ASSOCIATED METHOD
US9280972B2 (en) * 2013-05-10 2016-03-08 Microsoft Technology Licensing, Llc Speech to text conversion
CN106537233A (zh) * 2014-04-22 2017-03-22 伊瓦·阿尔布佐夫 用于头戴式智能设备的热成像配件
CN105607253B (zh) * 2014-11-17 2020-05-12 精工爱普生株式会社 头部佩戴型显示装置以及控制方法、显示系统
US9672760B1 (en) * 2016-01-06 2017-06-06 International Business Machines Corporation Personalized EEG-based encryptor


Also Published As

Publication number Publication date
US20150379896A1 (en) 2015-12-31
CN103646587A (zh) 2014-03-19
CN103646587B (zh) 2017-02-22

Similar Documents

Publication Publication Date Title
WO2015081694A1 (zh) 智能眼镜及其控制方法
US9101459B2 (en) Apparatus and method for hierarchical object identification using a camera on glasses
CN104983511A (zh) 针对全盲视觉障碍者的语音帮助智能眼镜系统
US10062302B2 (en) Vision-assist systems for orientation and mobility training
KR20150144510A (ko) 시각 장애인용 안경 시스템
KR101835235B1 (ko) 시각 장애인 보조장치 및 그 제어방법
KR101684264B1 (ko) 글라스형 웨어러블 디바이스의 버스도착 알림방법 및 이를 이용한 글라스형 웨어러블 디바이스용 프로그램
CN109696748A (zh) 一种用于同步翻译的增强现实字幕眼镜
TWI652656B (zh) 視覺輔助系統及具有該視覺輔助系統的可穿戴裝置
KR101728707B1 (ko) 글라스형 웨어러블 디바이스를 이용한 실내 전자기기 제어방법 및 제어프로그램
CN112002186B (zh) 一种基于增强现实技术的信息无障碍系统及方法
KR101982848B1 (ko) 분리형 웨어러블 디바이스 및 제어장치로 구성된 시력 취약계층용 시력 보조장치
US10943117B2 (en) Translation to braille
CN105824137A (zh) 可视化智能眼镜
KR20160024140A (ko) 본 발명의 일실시예에 따른 글라스형 웨어러블 디바이스를 이용한 매장정보 제공서비스 시스템 및 방법
CN210606226U (zh) 一种双模式聋哑人交流设备
Saha et al. Vision maker: An audio visual and navigation aid for visually impaired person
CN218045797U (zh) 一种盲人穿戴智慧云眼镜及系统
KR102516155B1 (ko) 홈 오토메이션과 연동되는 음성인식 기반의 웨어러블 디바이스
KR102570418B1 (ko) 사용자 행동을 분석하는 웨어러블 디바이스 및 이를 이용한 대상인식방법
EP3882894A1 (en) Seeing aid for a visually impaired individual
US20240105173A1 (en) Method and apparatus for providing virtual space in which interaction with another entity is applied to entity
US20230104182A1 (en) Smart Wearable Sensor-Based Bi-Directional Assistive Device
KR101661556B1 (ko) 글라스형 웨어러블 디바이스를 이용한 신원 확인 방법 및 프로그램
KR20230141395A (ko) 정보를 제공하는 방법 및 이를 지원하는 전자 장치

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 14417440

Country of ref document: US

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14868369

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205N DATED 17.11.2016)

122 Ep: pct application non-entry in european phase

Ref document number: 14868369

Country of ref document: EP

Kind code of ref document: A1