WO2017036516A1 - Externally wearable treatment device for medical application, voice-memory system and method - Google Patents


Info

Publication number
WO2017036516A1
WO2017036516A1 (application PCT/EP2015/069918, EP2015069918W)
Authority
WO
WIPO (PCT)
Prior art keywords
information
speech
state
augmented reality
people
Prior art date
Application number
PCT/EP2015/069918
Other languages
English (en)
Inventor
Pan Hui
Bowen SHI
Zhanpeng HUANG
Christoph Peylo
Original Assignee
Deutsche Telekom Ag
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Deutsche Telekom Ag filed Critical Deutsche Telekom Ag
Priority to EP15771861.0A priority Critical patent/EP3218896A1/fr
Priority to PCT/EP2015/069918 priority patent/WO2017036516A1/fr
Publication of WO2017036516A1 publication Critical patent/WO2017036516A1/fr

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/001 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2354/00 Aspects of interface with display user

Definitions

  • This invention relates to a system and a method for helping memory impaired people memorize information by extracting information from daily vocal dialogues and providing the extracted information to the user when required.
  • Augmented reality is a technology to supplement the real world by overlaying computer-generated virtual contents on the view of real environment to create a new mixed environment in which a user can see both the real and virtual contents in his or her field of view. It is particularly applicable when users require informational support for a task while still focusing on that task. It has the potential to allow users to interact with information without getting distracted from the real world. With optical see-through or video see-through display terminals, users are able to interact with virtual contents without attention distraction from the real environment.
  • AR glasses are important devices where augmented reality is displayed. Versions include eyewear that employs cameras to intercept the real-world view and re-display its augmented view through the eyepieces, and devices in which the AR imagery is projected through or reflected off the surfaces of the eyewear lens pieces. With the capability to integrate augmented reality, AR glasses have the potential to do many things, such as feeding people live information during activities and even letting them manipulate 3D objects with ease.
  • Embodiments of the present technology relate to a system and a method for helping memory impaired people memorize information.
  • the system extracts information from self-introduction dialogues.
  • the extracted information will be provided to the user when he or she needs it.
  • the main types of information the voice-memory system, VMS, aims to obtain are vocal information and facial information, which are major information sources for AR glasses.
  • the VMS assists the user to memorize and recall information from daily self-introduction dialogues, which are typical daily speech sources and contain a large amount of personal information.
  • the obtained information is stored in the AR glass memory and will be offered to the user when needed.
  • the VMS will be triggered automatically, and personal information such as job and hobbies will be extracted if it is mentioned in the dialogue.
  • both his photo and personal information will be displayed as a hint on the screen, assisting the user to recall him.
  • an externally wearable treatment device for medical application comprises an augmented reality glass which may comprise a camera for capturing a live video stream or an image; a voice recorder to record speech concurrently with the camera; a central processing unit which may include a speech processor, an image processor and a natural language processor, wherein the central processing unit is adapted to generate and render the plurality of the people information contents for display on a screen of the augmented reality glass; a memory unit for storing a plurality of images captured by the camera and the plurality of people information contents; and a display device for displaying fused virtual and real contents, where the virtual content may comprise at least one of the plurality of people information contents.
  • a Voice-Memory System for assisting a user to memorize and recall a plurality of people information contents.
  • the system may comprise an augmented reality glass which may include a camera for capturing a live video stream or an image; a voice recorder to record speech concurrently with the camera; a central processing unit which may include a speech processor, an image processor and a natural language processor, wherein the central processing unit is adapted to generate and render the plurality of the people information contents for display on a screen of the augmented reality glass; a memory unit for storing a plurality of images captured by the camera and the plurality of people information contents; and a display device for displaying fused virtual and real contents, where the virtual content may comprise at least one of the plurality of people information contents.
  • the device or system may include a plurality of sensors adapted to gather information including position and orientation.
  • the plurality of sensors may comprise at least a voice sensor, a position and orientation sensor and a motion sensor.
  • an operating system of the central processing unit may be an Android-based system.
  • the speech processor may process a vocal signal of the speech.
  • the speech processor may include a speech recognizer, an Android speech recognition API and a local speech recognizer.
  • the vocal signal may be uploaded to the Google server.
  • the image processor may include a face recognizer, a face detector and a Snapdragon Software Development Kit.
  • the image processor may be adapted to detect whether the human face exists in the image; and recognize and compare the human face in the image with the images stored in a database of the memory unit.
  • the image processor may be adapted to process a plurality of human faces in the image and select a region of interest of the human face in the image.
  • the image processor may be operable without connection to the internet.
  • the natural language processor may comprise a text classifier and an information extractor; wherein the information extractor is based on an Open Natural Language Processor Library.
  • the natural language processor may be adapted to perform automatic summarization, preferably producing a readable summary of a chunk of text; discourse analysis, including identifying discourse structure of a connected text; Named Entity Recognition, NER, preferably determining which items in the text map to proper names such as people or places; parsing, preferably determining the parse tree of a given sentence.
  • the natural language processor may be a self-designed NLP module; which utilizes the open source natural language processing tool kits as well as self-designed algorithms.
  • the device or system according to a further aspect of the present invention may include a motion evaluator; where the motion sensor may be adapted to measure the motion of augmented reality glass and the motion evaluator may be adapted to judge whether the augmented reality glass is in a static state or in a motion state.
  • the plurality of people information contents may comprise at least one of three categories of information; the three categories may comprise vocal information, facial information and extra information; and the three categories of information may be synthesized by an intention interpreter.
  • the vocal information may be the vocal signal from the speech; the vocal information may be processed by a speech processor and translated into a plurality of text formed scripts; and the plurality of text formed scripts may be processed by the text classifier and information extractor in the natural language processor.
  • the vocal information may comprise at least four types of information, preferably, name, job, company and age.
  • the facial information may comprise the human face, preferably a face of the person the user is talking to.
  • the extra information may comprise geographical and date information.
  • the size of the memory unit may be at least 100Mb, and the memory unit stores a maximum of 150 people information contents.
  • the memory unit may further comprise a Read-only memory and a Random-access memory, wherein the Read-only memory is a database.
  • the augmented reality glass may further comprise an information retrieval system, and the information retrieval system may be adapted to transform the vocal information into text and image form.
  • the augmented reality glass may be connectable to the internet via an internet module and comprises GPS functionality.
  • the device or system may further comprise a battery and a power manager; the power manager may function as an interface between the battery and the system; the power manager may regulate the turning on or off of the system in accordance with an electricity level; the power manager may monitor the battery level; and when the battery is at a low level, the power manager turns the system off automatically to conserve battery.
  • the device or system according to a further aspect of the present invention may further comprise a storage manager.
  • the storage manager may comprise a plurality of interfaces, where at least one of the plurality of interfaces may display a plurality of storage information on an upper portion of the interface on the screen of the augmented reality glass, where the plurality of storage information may comprise at least a total number of the names of the people information contents stored in the database of the memory unit, and a percentage of free space available in the database of the memory unit, and wherein at least one of the plurality of interfaces may display the names of the plurality of people information contents in alphabetical order on the middle and lower portions of the interface on the screen of the augmented reality glass.
  • a voice-memory method may be used for assisting a user to memorize and recall a plurality of people information contents in the voice-memory system according to the above disclosure; the method may further comprise the steps of operating a sleeping state; operating an inputting state; and operating an outputting state.
  • the step of operating a sleeping state may comprise running an operating system in the background, where the operating system may be surveilling the environment by analyzing the speech through the voice recorder, the image from the camera and the user's head attitude information through the motion sensor; and detecting if the speech is the self-introduction dialogue or when the user requires a hint pertaining to the plurality of people information contents.
  • the step of operating a sleeping state may further comprise determining a content of the speech via the intention interpreter and determining whether the operating system activates the inputting state or the outputting state or remains in the sleeping state.
  • the step of operating an inputting state may comprise extracting a plurality of people information contents from a current environment and the speech; storing and classifying the plurality of people information contents in the database of the memory unit.
  • the step of operating an outputting state may comprise recognizing via the image processor, the human face on the image captured by the camera; extracting the plurality of people information content from the database and displaying at least one of the plurality of people information contents on the screen of the augmented reality glass as long as the human face is captured by the camera of the augmented reality glass.
  • the method according to a further aspect of the present invention may further comprise activating the inputting state when the intention interpreter detects that the content of the speech is the self-introduction dialogue; the human face detected on the image captured by the camera is not in the database and the augmented reality glass is in the static state.
  • the method according to a further aspect of the present invention may further comprise activating the outputting state when the speech is the self-introduction dialogue; the human face detected on the image captured by the camera is in the database and the augmented reality glass is in the static state.
  • the method according to a further aspect of the present invention may further comprise activating the outputting state when the intention interpreter detects that the user requires the hint on at least one of the plurality of people information contents based on the content of the speech.
  • the method according to a further aspect of the present invention may further comprise activating the sleeping state when the human face is not detected on the image captured by the camera.
  • the method according to a further aspect of the present invention may further comprise activating the sleeping state at the end of the inputting state or at the end of the outputting state.
  • the method according to a further aspect of the present invention may further comprise the step of operating a storage management state using a storage manager.
  • the step of operating a storage management state using a storage manager may comprise activating the storage manager interface via a predefined vocal command, preferably "VMS Storage Manager"; wherein, when the storage manager interface is activated, the method comprises displaying the plurality of storage information on the upper portion of the storage manager interface, wherein the plurality of storage information comprises a total number of names of the plurality of people information contents stored in the database of the memory unit and the percentage of free space available in the database of the memory unit; displaying the names of the plurality of people information contents in alphabetical order on the middle and lower portions of the storage manager interface; and managing the database of the memory unit via a plurality of predefined vocal commands, preferably delete, new name, new job, new age, new time and/or new location.
  • managing the storage database may comprise selecting by vocally calling the name of the plurality of people information contents; retrieving the relevant plurality of people information contents; displaying on the storage manager interface the relevant plurality of people information contents, preferably name, job, age, meeting time and/or meeting location; and revising and/or deleting, via a plurality of predefined vocal commands, at least one of the displayed plurality of people information contents.
  • the method according to a further aspect of the present invention may further comprise, when the database of the memory unit is full, displaying alert information on the screen of the augmented reality glass, preferably "not enough storage".
  • a computer program may comprise computer executable instructions which when run on the voice-memory system perform the method steps disclosed above.
  • a wearable device wearable by a user wherein the wearable device comprises the voice memory system described above.
  • the system is preferably composed of three functional modes: information exploring, information storing and information display.
  • in the information exploring mode, the system is mainly employed to detect whether the speech source contains target information. Information will be extracted from speech if it is detected. The system turns to information storing if the target information is extracted; it will be classified and stored in the database of the AR glass.
  • information display is designed to detect situations when the user needs information. Information output takes several forms including voice hint, virtual screen display, etc.
  • the three functional modes are realized by three states of the system respectively: an inputting state, a sleeping state and an outputting state. The system switches between the three states automatically.
  • the voice-memory system is embedded on AR glasses which may be equipped with a mobile central processing unit 603, a memory 604, a camera 301 , a display device 602 and a plurality of sensors 101.
  • the camera 301 is used to capture a human face and its nearby environment.
  • the mobile central processing unit 603 extracts information from texts and recognizes contents from the speech, which takes the form of electrical signals transmitted by the sensors.
  • the display device 602 serves as an interface for transmitting information to the user when both text and voice hint are employed.
  • the voice-memory system utilizes a series of newly-developed methods.
  • the method comprises: (a) Detecting the topic of speech, including dialogues and monologues, automatically; (b) Searching for a target named entity in a segmented text; (c) Classifying and storing information in a light-weight database; (e) Representing and matching information from a light-weight database; (f) Detecting and recognizing a human face in a given image or a live video stream; (g) Recognizing speech and transferring vocal signals into texts in an adapted way, using either internet or local speech recognizers.
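The patent specifies no code, but the control flow tying these methods together can be sketched in a few lines. In the sketch below every callable (recognize_speech, detect_topic, extract_entities, detect_face) is a hypothetical placeholder for the recognizers the text names; only the ordering of the steps follows the description:

```python
def process_frame(audio, frame, recognize_speech, detect_topic,
                  extract_entities, detect_face, database):
    """One illustrative pass of the VMS pipeline over a slice of audio
    and a single video frame. Returns the extracted info, or None if
    the scene is not a self-introduction with a visible face."""
    script = recognize_speech(audio)           # (g) vocal signal -> text
    if not detect_topic(script):               # (a) is the topic a self-introduction?
        return None
    face = detect_face(frame)                  # (f) face in the frame?
    if face is None:
        return None
    info = extract_entities(script)            # (b) named entities from the text
    # (c) classify and store in the light-weight database, keyed by name
    database[info.get("name", "")] = {"face": face, **info}
    return info
```

A caller would plug in the real speech recognizer, text classifier, face detector and information extractor for the placeholder lambdas.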
  • FIG.1 is an illustration of an embodiment of the flowchart of the VMS.
  • FIG.2 is an illustration of an embodiment of the sleeping state of the VMS.
  • FIG.3 is an illustration of an embodiment of the inputting state of the VMS.
  • FIG.4 is an illustration of an embodiment of the outputting state of the VMS.
  • FIG.5 is an illustration of an embodiment of state flow of the VMS.
  • FIG.6 is an illustration of an embodiment of the structures of the VMS and the relationship between modules and peripherals of the glass.
  • FIG.7 is an illustration of an embodiment of the core processor structures of the VMS.
  • FIG.8 is an illustration of a self-introduction scene.
  • FIG.9 is an illustration of a scene of the two men in FIG.8 meeting again.
  • FIG.10 is an illustration of an embodiment of the VMS storage manager.
  • FIG.11 is an illustration of an embodiment of the VMS storage warning information.
  • FIGs. 1 to 9, which in general relate to retrieving information from vocal speech based on the augmented reality glass.
  • the same components may be designated by the same reference numbers although they are illustrated in different figures. Detailed descriptions of constructions or processes known in the art may be omitted to avoid obscuring the subject matter of the present invention.
  • the system for implementing the augmented reality environment includes a mobile central processing unit, a display screen, a rear built-in camera, a microphone, a voice sensor, and a position and orientation sensor in the embodiments.
  • a user wears an augmented reality glass with a built-in rear camera and a voice recorder that is switched on.
  • the information retrieval system runs in the background of the operating system of the augmented reality glass.
  • the operating system may be confined to Android or Android-based systems.
  • the glass must be connected to the internet and support GPS.
  • the system will analyze the vocal signal, extract the information and store it in its database. Afterwards the system will detect the user's intention and determine whether he needs the information and provide the information to the user if necessary.
  • Useful information herein means the personal information (like name and job) in self-introduction dialogues. This type of information is originally in vocal form. Through the information retrieval system, it will be transformed and offered to the user in text and image form.
  • the whole system works for self-introduction scenarios, which is defined as a situation where two people who do not know each other, introduce themselves to each other.
  • the personal information may include name, face, job, age, meeting time and location.
  • Fig. 1 shows the structure of the VMS according to the invention.
  • the VMS (automatic information retrieval system) is composed of three states, namely the sleeping state 801, the inputting state 803 and the outputting state 804.
  • in the sleeping state 801, the program runs in the background and there is not much user-system interaction in this state.
  • the aim of the system in this state is to detect whether a self-introduction dialogue occurs or if the user needs hints.
  • the system will turn to the inputting state (see Fig. 3) where information is detected, extracted and stored in a light-weight database in a section of ROM 601 in the augmented reality glass.
  • the system will turn to the outputting state 804 (see Fig. 4) where the main target is to obtain information from the database and to show the information on the screen 602.
  • the sleeping state 801 (see Fig. 2)
  • the system continues to surveil the environment where it obtains speech through a voice recorder 201, image view through a camera 301 and the user head's attitude information through a motion sensor 101.
  • the three categories of information will be synthesized by an intention interpreter 802, which determines whether the system shall turn to the inputting state 803 or the outputting state 804 or remains in the sleeping state.
  • the speech is recorded by the voice recorder 201 in the augmented reality glass.
  • the vocal signal is processed through a speech processor 202.
  • the speech processor 202 is offered with two solutions: a local speech recognizer interface 605 (see Fig. 6) and a VMS speech recognition interface 207.
  • the VMS speech recognition interface 207 is a self-designed component and is employed by default.
  • the VMS speech recognizer 207 (see Fig. 7) employs the Android speech recognition API 208, where the vocal signal is uploaded to the Google server 701 and the translated content is returned to the client in the form of texts. Therefore, the augmented reality glass must be able to connect to the internet via the internet module 606 (see Fig. 6).
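The choice between the cloud-backed and the local recognizer can be sketched as a simple dispatch. This is an illustration only: `cloud_recognizer` and `local_recognizer` stand in for the Android speech recognition API 208 and the local recognizer 605, and are not real API entry points:

```python
def transcribe(audio, internet_available, cloud_recognizer, local_recognizer=None):
    """Pick a recognizer the way the description suggests: the
    cloud-backed interface by default, the on-device recognizer as a
    fallback when the glass has no internet connection.

    Both recognizers are placeholder callables audio -> text."""
    if internet_available:
        return cloud_recognizer(audio)       # default path via the server
    if local_recognizer is not None:
        return local_recognizer(audio)       # offline fallback, if present
    raise RuntimeError("no speech recognizer available")
```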
  • the local speech recognizer 605 can be employed as well if it is available in the glass.
  • the translated content is processed by the text classifier belonging to the natural language processor 203.
  • the natural language processor 203 is a self-designed NLP module, which utilizes the open source natural language processing tool kits as well as self-designed algorithms.
  • the text classifier 204 enclosed is based on the Naive Bayesian algorithm and is trained offline with a corpus obtained through a self-designed Python crawler on more than 50 English language education websites. The text classifier 204 will return true if the topic of the translated content is self-introduction.
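A minimal Naive Bayes topic classifier of the kind described can be written in pure Python. The sketch below uses a tiny hand-written toy corpus rather than the crawled training data, and a bag-of-words model with Laplace smoothing; it illustrates the algorithm, not the patent's trained classifier:

```python
import math
from collections import Counter

def train_nb(labelled_docs):
    """labelled_docs: list of (text, label) pairs. Returns document
    counts per class and bag-of-words counts per class."""
    priors, word_counts = Counter(), {}
    for text, label in labelled_docs:
        priors[label] += 1
        word_counts.setdefault(label, Counter()).update(text.lower().split())
    return priors, word_counts

def classify_nb(text, priors, word_counts, vocab_size=1000):
    """Pick the class maximizing log P(class) + sum log P(word | class),
    with Laplace smoothing so unseen words do not zero the probability."""
    total_docs = sum(priors.values())
    best_label, best_score = None, float("-inf")
    for label in priors:
        score = math.log(priors[label] / total_docs)
        total_words = sum(word_counts[label].values())
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) /
                              (total_words + vocab_size))
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

Returning whether the predicted label equals `self_intro` reproduces the boolean behaviour the description attributes to the classifier.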
  • an image is drawn from each frame of the video stream from the camera.
  • the image is passed through an image processor 302.
  • the image processor 302 will perform face detection and face recognition.
  • the image processor 302 is based on a Snapdragon SDK 305 for Android.
  • the image processor 302 runs in real time without an internet connection.
  • the motion sensor 101 measures the motion of the AR glass. It passes a 3D vector of linear acceleration, i.e. the acceleration along the x axis, y axis and z axis.
  • the motion evaluator 102 is a module using a threshold formula to process the 3D vector, which is capable of judging whether the AR glass is in a static state or in a motion state.
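One plausible reading of the threshold formula is to compare the magnitude of the linear-acceleration vector against a cutoff; the 0.5 m/s² value below is an illustrative choice, not a value taken from the patent:

```python
def is_static(accel, threshold=0.5):
    """accel: (ax, ay, az) linear acceleration in m/s^2 from the motion
    sensor. The glass is judged static when the magnitude of the
    acceleration vector stays below the (illustrative) threshold."""
    ax, ay, az = accel
    magnitude = (ax * ax + ay * ay + az * az) ** 0.5
    return magnitude < threshold
```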
  • the intention interpreter 802 collects the information of the above three aspects and employs the following rule to manage the system: if the speech topic is a self-introduction, a face is detected but is not in the database, and the AR glass is in a static state, the system activates the inputting state 803; if the speech topic is a self-introduction, a face is detected and exists in the database, and the AR glass is in a static state, the system activates the outputting state 804; otherwise the system remains in the sleeping state 801.
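The interpreter's rule is a pure function of three boolean signals and can be written down directly; the string state names below are illustrative labels, not identifiers from the patent:

```python
def interpret_intention(is_self_intro, face_detected, face_in_db, glass_static):
    """Implements the rule stated in the description: a self-introduction
    with a static glass and a detected face goes to inputting (unknown
    face) or outputting (known face); every other combination leaves the
    system sleeping."""
    if is_self_intro and face_detected and glass_static:
        return "outputting" if face_in_db else "inputting"
    return "sleeping"
```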
  • the mission of the system is to extract useful information from the current environment.
  • the information includes three types— vocal information, facial information and extra information.
  • the vocal information is passed through the speech recognizer and translated into text formed scripts.
  • the scripts are processed by an information extractor 205 in the Natural Language Processor, NLP, 203.
  • the information extractor is based on an OpenNLP Library 206, which is an Android development tool kit for natural language processing tasks.
  • the main components used are listed in the following: sentence detector, tokenizer, POS tagger and chunker. From the vocal signals four types of information will be extracted: name, job, company and age. They will be left blank if they are not mentioned in the dialogue. Facial information is the photo of the person when they are talking.
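The patent's extractor runs on OpenNLP's tokenizer, POS tagger and chunker; as a self-contained illustration of the same task (pulling name, job, company and age out of a self-introduction script, leaving absent fields blank), here is a toy regex-based extractor. The patterns are hypothetical and cover only a few phrasings:

```python
import re

# Illustrative patterns, not the patent's NER pipeline.
PATTERNS = {
    "name":    re.compile(r"\bmy name is (\w+)|\bi am (\w+)\b", re.I),
    "job":     re.compile(r"\bi work as an? (\w+)|\bi am an? (\w+)\b", re.I),
    "age":     re.compile(r"\bi am (\d+) years old\b", re.I),
    "company": re.compile(r"\bi work (?:at|for) (\w+)\b", re.I),
}

def extract_info(script):
    """Fields that do not appear in the dialogue stay blank, matching the
    behaviour the description specifies."""
    info = {key: "" for key in PATTERNS}
    for key, pattern in PATTERNS.items():
        match = pattern.search(script)
        if match:
            # take the first non-empty capture group of the alternation
            info[key] = next(g for g in match.groups() if g)
    return info
```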
  • the video frame is also passed through an image processor 302 and a photo comprising a human face will be extracted.
  • Extra information includes the geographical and date information.
  • the GPS 607 must be supported by the AR glass.
  • the information is stored in a section of ROM 601, and the ROM 601 should be at least 100Mb.
  • the system supports at most 150 persons' information. The system will return to the sleeping state at the end of the inputting state.
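The 150-person cap and the free-space readout shown in the storage manager suggest a small capacity-checked store; this in-memory stand-in (class and method names are hypothetical) sketches that behaviour:

```python
class VmsDatabase:
    """Minimal stand-in for the VMS person database, enforcing the
    150-entry limit stated in the description."""
    MAX_PERSONS = 150

    def __init__(self):
        self._people = {}

    def add(self, name, record):
        if len(self._people) >= self.MAX_PERSONS:
            raise MemoryError("Not enough storage")   # mirrors the on-screen alert
        self._people[name] = record

    def lookup(self, name):
        return self._people.get(name)

    def free_space_percent(self):
        """Free-space figure displayed by the storage manager interface."""
        return round(100 * (1 - len(self._people) / self.MAX_PERSONS))
```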
  • the system extracts information from the database.
  • the image processor will recognize the face in the video frame and then extract the person's photo with all its related personal information from the database. The information will be shown on the screen as long as the person appears in the user's sight. The system will return to the sleeping state at the end of outputting.
  • the VMS provides a storage manager 504 for users to manually manage the database. All management of the database is done by vocal commands. Users can call the vocal command "VMS Storage Manager" to start this interface.
  • the storage management interface comprises two parts. On the top portion of the interface, the number of stored persons and the percentage of free space are displayed. The names of all stored persons are listed alphabetically below the top portion, and users may overwrite the recorded personal information. When the user wants to revise the personal information, he only needs to call the person's name and all information of this person will be shown on the virtual screen.
  • a series of vocal commands can be used in this interface, like "Delete" to delete this person or "Name ..." to overwrite his name. The complete vocal commands used in this interface are defined in the following:
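A vocal-command interface of this kind reduces to a small dispatcher over the predefined verbs ("delete", "new name", "new job", etc.). The sketch below is illustrative: it assumes commands arrive as already-transcribed strings and simplifies argument parsing to whitespace splitting:

```python
def handle_command(command, record):
    """Apply one predefined vocal command to the selected person's record
    (a dict). Returning None signals deletion; otherwise an updated copy
    of the record is returned."""
    parts = command.split(maxsplit=2)
    if parts[0].lower() == "delete":
        return None                               # caller removes the record
    if (len(parts) == 3 and parts[0].lower() == "new"
            and parts[1].lower() in ("name", "job", "age", "time", "location")):
        updated = dict(record)
        updated[parts[1].lower()] = parts[2]      # overwrite the chosen field
        return updated
    raise ValueError("unknown vocal command: %r" % command)
```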
  • FIG. 6 illustrates the composition of the whole system in terms of core processors and their relationship with the peripheral.
  • the power manager 402 is a module to regulate the turning on or off of the system in accordance with the electricity level. It functions as an interface between a battery 401 and the VMS.
  • the system runs in the background and keeps calling the peripheral devices (camera 301, voice recorder 201 and motion sensor 101) and exploiting the CPU 603 to realize complicated recognition algorithms.
  • when the augmented reality glass is at a low battery level, the system will be turned off automatically to conserve battery. The user can turn the system off manually with a voice command: "VMS, Off".
  • the power manager continues to monitor the battery level.
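The power manager's policy can be expressed as a single decision on each battery reading. The 15% cutoff below is an illustrative value, not one taken from the patent:

```python
def manage_power(battery_percent, system_on, low_level=15):
    """Decide whether the VMS stays on for a given battery reading.
    At or below `low_level` (an assumed 15%) the system is switched off
    automatically to conserve battery; otherwise the current state is
    left unchanged."""
    if battery_percent <= low_level:
        return False
    return system_on
```

A polling loop would call this on every battery-level update and shut the system down when it returns False.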
  • Figures 8 to 11 are illustrations of how the system works in a real self-introduction scenario.
  • the system provides three functional modes: the self-introduction mode, the meeting mode and the storage-management mode.
  • the complete work flow includes the information detection and extraction as well as information prompting, which corresponds to inputting state and outputting state of the system.
  • Fig. 8 demonstrates a self-introduction scene.
  • A1 is introducing himself to B1, who is wearing augmented reality glass A2 with the VMS installed.
  • A1 looks at B1 through the augmented reality glass A2.
  • the conversation deals with at least one element of the following attributes: name, job, age.
  • the following information is displayed: "self-introduction detected... (A4) Information being recorded... (A5) Information extracted successfully! (A6)".
  • Those hints (A4-A6) imply that the system has extracted information from the conversation of self-introduction scenario.
  • Fig. 9 shows a scene when person A1 meets B1 in the future.
  • the system will detect B1 automatically and print his personal information on the virtual screen A3.
  • the personal information is listed in the formats of name, job, age, first-meeting time, first-meeting venue and personal photo (A7-A11). An attribute will be left blank if that particular type of information did not appear in the first meeting scene.
  • Fig. 10 illustrates how the system works when the user wants to overwrite the database manually.
  • a new window (C1) will appear when the user vocally calls "VMS storage manager".
  • C2: number of persons added
  • C3: percentage of free space
  • C5: a list containing all names of persons in the database, listed alphabetically. If the user calls a name from the list, all recorded information of that name will be displayed on the screen (C5).
  • the user may use pre-defined commands to revise or delete information in C6.
  • Fig. 11 illustrates the warning information prompted when there is not enough space to store personal information.
  • a piece of alert information will be prompted on the screen (C8) the moment the user adds a new person: "Not enough storage" (C9).
  • the user has to access the storage manager and delete some previously stored persons in order to continue adding new persons.

Abstract

The invention relates to a system and a method for extracting personal information from daily vocal dialogues for people wearing augmented reality glasses. The system is designed to help people with memory impairment (such as amnesia, Alzheimer's disease, etc.); it can automatically extract useful personal information from daily face-to-face conversation. This information is stored as a private data set for later searches and queries in associated applications, such as reminder and recommendation systems.
PCT/EP2015/069918 2015-09-01 2015-09-01 Dispositif de traitement portable externe pour application médicale, système et procédé de mémoire vocale WO2017036516A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP15771861.0A EP3218896A1 (fr) 2015-09-01 2015-09-01 Dispositif de traitement portable externe pour application médicale, système et procédé de mémoire vocale
PCT/EP2015/069918 WO2017036516A1 (fr) 2015-09-01 2015-09-01 Dispositif de traitement portable externe pour application médicale, système et procédé de mémoire vocale

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2015/069918 WO2017036516A1 (fr) 2015-09-01 2015-09-01 Dispositif de traitement portable externe pour application médicale, système et procédé de mémoire vocale

Publications (1)

Publication Number Publication Date
WO2017036516A1 true WO2017036516A1 (fr) 2017-03-09

Family

ID=54238381

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2015/069918 WO2017036516A1 (fr) 2015-09-01 2015-09-01 Dispositif de traitement portable externe pour application médicale, système et procédé de mémoire vocale

Country Status (2)

Country Link
EP (1) EP3218896A1 (fr)
WO (1) WO2017036516A1 (fr)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7020345B2 (en) * 2001-04-26 2006-03-28 Industrial Technology Research Institute Methods and system for illuminant-compensation
US20140294257A1 (en) * 2013-03-28 2014-10-02 Kevin Alan Tussy Methods and Systems for Obtaining Information Based on Facial Identification
EP2899609B1 (fr) * 2014-01-24 2019-04-17 Sony Corporation Système et procédé de rappel de nom

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130127980A1 (en) * 2010-02-28 2013-05-23 Osterhout Group, Inc. Video display modification based on sensor input for a see-through near-to-eye display
US20130147837A1 (en) * 2011-12-13 2013-06-13 Matei Stroila Augmented reality personalization

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SATYANARAYANAN M ET AL: "The Case for VM-Based Cloudlets in Mobile Computing", IEEE PERVASIVE COMPUTING, IEEE SERVICE CENTER, LOS ALAMITOS, CA, US, vol. 8, no. 4, 1 October 2009 (2009-10-01), pages 14 - 23, XP011449059, ISSN: 1536-1268, DOI: 10.1109/MPRV.2009.82 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10366691B2 (en) 2017-07-11 2019-07-30 Samsung Electronics Co., Ltd. System and method for voice command context
US11080936B2 (en) 2018-12-12 2021-08-03 Nokia Technologies Oy First-person perspective-mediated reality
CN111638798A (zh) * 2020-06-07 2020-09-08 上海商汤智能科技有限公司 一种ar合影方法、装置、计算机设备及存储介质
WO2022088968A1 (fr) * 2020-10-29 2022-05-05 International Business Machines Corporation Détection et amélioration de la détérioration de mémoire
US11495211B2 (en) 2020-10-29 2022-11-08 International Business Machines Corporation Memory deterioration detection and amelioration
GB2615241A (en) * 2020-10-29 2023-08-02 Ibm Memory deterioration detection and amelioration
GB2615241B (en) * 2020-10-29 2024-03-06 Ibm Memory deterioration detection and amelioration
US11868535B2 (en) 2021-09-16 2024-01-09 Memory On Hand Inc. Wearable device that provides spaced retrieval alerts to assist the wearer to remember desired information

Also Published As

Publication number Publication date
EP3218896A1 (fr) 2017-09-20

Similar Documents

Publication Publication Date Title
US10893202B2 (en) Storing metadata related to captured images
KR102002979B1 (ko) 사람-대-사람 교류들을 가능하게 하기 위한 헤드 마운티드 디스플레이들의 레버리징
US11397462B2 (en) Real-time human-machine collaboration using big data driven augmented reality technologies
CN109905593B (zh) 一种图像处理方法和装置
US20190340200A1 (en) Multi-modal interaction between users, automated assistants, and other computing services
US11392213B2 (en) Selective detection of visual cues for automated assistants
WO2017036516A1 (fr) Dispositif de traitement portable externe pour application médicale, système et procédé de mémoire vocale
US20230206912A1 (en) Digital assistant control of applications
CN110809187B (zh) 视频选择方法、视频选择装置、存储介质与电子设备
US20120088211A1 (en) Method And System For Acquisition Of Literacy
JP2003533768A (ja) 記憶支援装置
CN113851029B (zh) 一种无障碍通信方法和装置
EP3087727B1 (fr) Dispositif d'autoportrait base sur les emotions
CN113822187A (zh) 手语翻译、客服、通信方法、设备和可读介质
US20230199297A1 (en) Selectively using sensors for contextual data
CN116912478A (zh) 目标检测模型构建、图像分类方法、电子设备
CN113822186A (zh) 手语翻译、客服、通信方法、设备和可读介质
WO2024091266A1 (fr) Système et procédé de génération de sous-titres visuels
CN115858941A (zh) 搜索方法、装置、电子设备以及存储介质
Ananth et al.: Design and Implementation of Smart Guided Glass for Visually Impaired People

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15771861

Country of ref document: EP

Kind code of ref document: A1

REEP Request for entry into the european phase

Ref document number: 2015771861

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE