GB2528867A - Smart device control

Smart device control

Info

Publication number
GB2528867A
Authority
GB
United Kingdom
Prior art keywords
processor
head
wearable device
microphone
vocal sound
Prior art date
Legal status
Withdrawn
Application number
GB1413619.6A
Other versions
GB201413619D0 (en)
Inventor
Alexandre Chabrol
Current Assignee
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date
Filing date
Publication date
Application filed by International Business Machines Corp
Priority to GB1413619.6A
Publication of GB201413619D0
Priority to US14/803,782
Publication of GB2528867A

Classifications

    • G06F 3/167: Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G02B 27/017: Head-up displays, head mounted
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G10L 15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 17/26: Recognition of special voice characteristics, e.g. for use in lie detectors; recognition of animal voices
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/62: Control of parameters via user interfaces
    • G02B 2027/0138: Head-up displays comprising image capture systems, e.g. camera
    • G02B 2027/014: Head-up displays comprising information/image processing systems
    • G02B 2027/0178: Head mounted, eyeglass type
    • G02B 2027/0187: Display position adjusting means slaved to motion of at least a part of the body of the user, e.g. head, eye
    • G02B 27/0093: Optical systems or apparatus with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • G10L 2015/223: Execution procedure of a spoken command

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Acoustics & Sound (AREA)
  • General Health & Medical Sciences (AREA)
  • Optics & Photonics (AREA)
  • Computational Linguistics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Studio Devices (AREA)

Abstract

A wearable device, such as a head-mountable device 10 in the form of glasses, includes a processor 15 adapted to respond to a user instruction given as a non-vocal sound generated by the user in their oral cavity and captured by a microphone 25, and to perform an operation in response to the instruction. The non-vocal sound can be a sucking, swallowing, slurping or whistling noise, and may be programmable. The device may also include an image sensor or camera 11 adapted to capture an image in response to the instruction.

Description

SMART DEVICE CONTROL
FIELD OF THE INVENTION
[0001] The present invention relates to the control of a smart wearable device, e.g. a head-mountable device such as smart glasses.
[0002] The present invention further relates to a method of controlling a smart wearable device such as smart glasses.
BACKGROUND
[0003] Modern society is becoming more and more reliant on electronic devices to enhance our ways of life. In particular, the advent of portable and wearable electronic devices, as for instance facilitated by the miniaturization of semiconductor components, has greatly increased the role of such devices in modern life. Such electronic devices may be used for information provisioning as well as for interacting with users (wearers) of other electronic devices.
[0004] For instance, wearable electronic devices such as head-mountable devices may include a plethora of functionality, such as display functionality that will allow a user of the device to receive desired information on the electronic device, for instance via a wireless connection such as a wireless Internet or phone connection, and/or image capturing functionality for capturing still images, i.e. photos, or image streams, i.e. video, using the wearable electronic device. For example, a head-mountable device such as glasses, headwear and so on, may include image sensing elements capable of capturing such images in response to the appropriate user interaction with the device.
[0005] Several different methods of controlling such wearable devices, e.g. head-mountable devices, are known. For instance, US 2013/0257709 A1 discloses a head-mountable device including a proximity sensor at a side section thereof for detecting a particular eye movement, which eye movement can be used to trigger the performance of a computing action by the head-mountable device. US 2013/0258089 A1 discloses a gaze detection technology for controlling an eye camera, for instance in the form of glasses. The detected gaze may be used to zoom the camera in on a gaze target. US 8,203,502 B1 discloses a wearable heads-up display with an integrated finger tracking input sensor adapted to recognize finger inputs, e.g. gestures, and use these inputs as commands. It is furthermore known to control such devices using voice commands.
[0006] A drawback of these control mechanisms is that they require a discrete and considered action by the wearer of the device. This can cause one or more of the following problems. For example, if the device operation to be triggered by the action of the wearer is time-critical, the time the wearer requires to remember and perform the required action may cause the device operation to be triggered too late. For instance, this problem may occur if the device operation is an image capture of a moving target.
[0007] In addition, if the device operation is such an image capture, the performance of such an action may cause the wearer of a head-mountable device to move his or her head, which also may be undesirable in relation to the task to be performed by the head-mountable device, e.g. an image capture event.
[0008] Moreover, users may be uncomfortable performing the required actions because the actions may lack discretion. This may prevent a user from performing a desired action or even prevent a user from purchasing such a head-mountable device. In addition, voice recognition control typically requires the accurate positioning of a microphone in or near the mouth of a user, which may be unpleasant and/or may lead to poor recognition if the microphone is not correctly positioned.
BRIEF SUMMARY OF THE INVENTION
[0009] The present invention seeks to provide a smart wearable device such as a head-mountable device that can be more easily controlled.
[0010] The present invention further seeks to provide a method for controlling a smart wearable device such as a head-mountable device more easily.
[0011] According to an aspect, there is provided a wearable device comprising a processor adapted to respond to a user instruction and to perform an operation in response to said instruction, wherein the processor is adapted to communicate with a microphone adapted to capture sounds from the oral cavity of the user; wherein the processor is adapted to recognize a non-vocal sound generated by the user in said oral cavity as said user instruction.
[0012] The present invention is based on the insight that a wearer of a wearable device such as a head-mountable device may control the device by forming sounds in his or her oral cavity (inside his or her mouth), for instance by using saliva present in the oral cavity to generate the sound or noise, e.g. a swallowing noise or a noise generated by displacing saliva inside the oral cavity, such as sucking saliva through teeth or in between tongue and palate for instance, or by using the breathing airflow to generate such noises, e.g. by puffing a cheek or similar. This has the advantage that the operation to be performed by the head-mountable device can be controlled in an intuitive and discrete manner without requiring externally visible movement. Moreover, it has been found that such non-vocal sounds can be recognized more easily than, for instance, spoken words, such that the positioning of the microphone to detect the non-vocal sounds is less critical, thus increasing device flexibility.
[0013] The microphone does not necessarily need to form a part of the wearable device.
For instance, a separate microphone may be used that may be connected to the wearable device in any suitable manner, e.g. using a wireless link such as a Bluetooth link. However, in a preferred embodiment, the wearable device further comprises the microphone such that all required hardware elements are contained within the wearable device.
[0014] In an embodiment, the wearable device comprises an image sensor under control of said processor; and the processor is adapted to capture an image with said image sensor in response to said instruction. This provides a particularly useful implementation of the present invention, as the discrete and eye or head movement-free triggering of the image capturing event allows for the accurate capturing of the desired image, or images in case of a video stream, in a discrete manner. The image sensor may form part of a camera module, which module for instance may further comprise optical elements, e.g. one or more lenses, which may be variable lenses, e.g. zoom lenses under control of the processor.
[0015] In an embodiment, the wearable device is a head-mountable device.
[0016] The head-mountable device comprises glasses in an embodiment. Such smart glasses are particularly suitable for e.g. image capturing, as is well-known per se, for instance from US 2013/0258089 A1. Such glasses may comprise one or more integrated image sensors, for instance integrated in a pair of lenses, at least one of said lenses comprising a plurality of image sensing pixels under control of the processor for capturing an image (or stream of images). Alternatively, one or more image sensors may be integrated in the frame of the glasses, e.g. as part of one or more camera modules as explained above.
In an embodiment, a pair of spatially separated image sensors may be capable of capturing individual images, e.g. to compile a 3-D image from the individual images captured by the separate image sensors.
[0017] The glasses may comprise a pair of side arms for supporting the glasses on the head, said microphone being positioned at an end of one of said side arms such that the microphone can be positioned behind the ear of the wearer, thereby facilitating the capturing of non-vocal sounds in the oral cavity. Alternatively, the microphone may be attached to said glasses, e.g. using a separate lead, for positioning in or behind an ear of the user.
[0018] In an embodiment, the non-vocal sound may be user-programmable such that the wearer of the wearable device can define the sound that should be recognized by the processor of the wearable device, e.g. the head-mountable device. This allows the wearer to define a discrete sound that the wearer is comfortable using to trigger the desired operation of the wearable device, e.g. an image capture operation. To this end, the processor may be adapted to compare a sound captured by the microphone with a programmed sound.
[0019] According to another aspect, there is provided a method of controlling a wearable device such as a head-mountable device, including a processor, the method comprising capturing a non-vocal sound generated in the oral cavity of a wearer of the wearable device with a microphone; transmitting the captured non-vocal sound to said processor; and performing a device operation with said processor in response to the captured non-vocal sound. Such a method facilitates the operation of a wearable device in a discrete and intuitive manner.
[0020] In an embodiment, the method further comprises comparing the captured non-vocal sound to a stored non-vocal sound with said processor; and performing said operation if the captured non-vocal sound matches the stored non-vocal sound to ensure that the desired operation of the wearable device is triggered by the appropriate sound only.
[0021] To this end, the method may further comprise recording a non-vocal sound with the microphone; and storing the recorded non-vocal sound to create the stored non-vocal sound. This for instance allows the wearer of the wearable device to define a non-vocal sound-based command the wearer is comfortable using to operate the head-mountable device.
[0022] In an example embodiment, the step of performing said operation comprises capturing an image under control of said processor. For instance, said capturing an image may comprise capturing said image using an image sensor integrated in a pair of glasses.
BRIEF DESCRIPTION OF THE DRAWINGS
[0023] Preferred embodiments of the present invention will now be described, by way of example only, with reference to the following drawings, in which:
FIG. 1 schematically depicts a head-mountable device according to an embodiment worn by a user;
FIG. 2 schematically depicts a head-mountable device according to an embodiment;
FIG. 3 schematically depicts a head-mountable device according to another embodiment;
FIG. 4 depicts a flow chart of a method of controlling a head-mountable device according to an embodiment; and
FIG. 5 depicts a flow chart of a method of controlling a head-mountable device according to another embodiment.
DETAILED DESCRIPTION OF THE EMBODIMENTS
[0024] It should be understood that the Figures are merely schematic and are not drawn to scale. It should also be understood that the same reference numerals are used throughout the Figures to indicate the same or similar parts.
[0025] In the context of the present application, where embodiments of the present invention constitute a method, it should be understood that such a method is a process for execution by a computer, i.e. is a computer-implementable method. The various steps of the method therefore reflect various parts of a computer program, e.g. various parts of one or more algorithms.
[0026] In the context of the present application, where reference is made to a non-vocal sound or noise, this is intended to include any sound formed inside the oral cavity of a person without purposive or primary use of the vocal cords. Such a non-vocal sound may be formed by the displacement of air or saliva within the oral cavity. Non-limiting examples of such non-vocal noises originating within the oral cavity may be a sucking noise, swallowing noise, whistling noise and so on. In some particularly preferred embodiments, the non-vocal noise is a noise involving the displacement of saliva within the oral cavity, i.e. the mouth, for instance by sucking saliva from one location in the oral cavity to another, e.g. sucking saliva through or in between teeth, slurping or swallowing saliva and so on. Such non-vocal sounds may be generated with a closed mouth in some embodiments, thereby allowing the sound to be generated in a discrete manner.
[0027] In the context of the present application, a wearable device may be any smart device, e.g. any device comprising electronics for capturing images and/or information over a wireless link that can be worn by a person, for instance around the wrist, neck, waist or on the head of the wearer. For instance, the wearable device may be a head-mountable device, which may be an optical device such as a monocle or a pair of glasses, and/or a garment such as a hat, cap or helmet, which garment may comprise an integrated optical device. Other suitable head-mountable devices will be apparent to the skilled person.
[0028] In the remainder of this description, the wearable device and the method of controlling such a device will be described using a head-mountable device by way of non-limiting example only; it should be understood that the wearable device may take any suitable alternative shape, e.g. a smart watch, smart necklace, smart belt and so on.
[0029] FIG. 1 schematically depicts an example embodiment of such a head-mountable device 10 worn by a wearer 1, here shown in the form of a pair of glasses by way of non-limiting example only. The pair of glasses typically comprises a pair of lenses 12 mounted in a mounting frame 13, with side arms 14 extending from the mounting frame 13 to support the glasses on the ears 3 of the wearer 1, as is well-known per se. The mounting frame 13 and side arms 14 each may be manufactured from any suitable material, e.g. a metal or plastics material, and may be hollow to house wires, the function of which will be explained in more detail below.
[0030] FIG. 2 schematically depicts a non-limiting example embodiment of the circuit arrangement included in the head-mountable device 10. By way of non-limiting example, the head-mountable device 10 comprises an optical device 11 communicatively coupled to a processor 15, which processor is arranged to control the optical device 11 in accordance with instructions received from the wearer 1 of the head-mountable device 10. The optical device 11 for instance may be a heads-up display integrated in one or more of the lenses 12 of the head-mountable device 10. In a particularly advantageous embodiment, the optical device 11 may include an image sensor for capturing still images or a stream of images under control of the processor 15. For instance, the optical device 11 may comprise a camera module including such an image sensor, which camera module may further include optical elements such as lenses, e.g. zoom lenses, which may be controlled by the processor 15, as is well-known per se. The head-mountable device 10 may comprise one or more of such optical devices 11, e.g. two image sensors for capturing stereoscopic images, or a combination of a heads-up display with one or more of such image sensors.
[0031] The at least one optical device 11 may be integrated in the head-mountable device 10 in any suitable manner. For instance, in case of the at least one optical device 11 being an image sensor, e.g. an image sensor forming part of a camera module, the at least one optical device 11 may be integrated in or placed on the mounting frame 13 or the side arms 14. Alternatively, the at least one optical device 11 may be integrated in or placed on the lenses 12. For instance, at least one of the lenses 12 may comprise a plurality of image sensing pixels and/or display pixels for implementing an image sensor and/or a heads-up display. The integration of such optical functionality in a head-mountable device 10 such as smart glasses is well-known per se to the person skilled in the art and will therefore not be explained in further detail for the sake of brevity only.
[0032] Similarly, the processor 15 may be integrated in or on the head-mountable device 10 in any suitable manner and in or on any suitable location. For instance, the processor 15 may be integrated in or on the mounting frame 13, the side arms 14 or the bridge in between the lenses 12. Communicative coupling between the one or more optical devices 11 and the processor 15 may be provided in any suitable manner, e.g. in the form of wires or alternative electrically conductive members integrated or hidden in the mounting frame 13 and/or side arms 14 of the head-mountable device 10. The processor 15 may be any suitable processor, e.g. a general purpose processor or an application-specific integrated circuit.
[0033] The processor 15 is typically arranged to facilitate the smart functionalities of the head-mountable device 10, e.g. to control the one or more optical devices 11, e.g. by capturing data from one or more image sensors and optionally processing this data, by receiving data for display on a heads-up display and driving the display to display the data, and so on. As this is well-known per se to the skilled person, this will not be explained in further detail for the sake of brevity only.
[0034] The head-mountable device 10 may further comprise one or more data storage devices 20, e.g. a type of memory such as a RAM memory, Flash memory, solid state memory and so on, communicatively coupled to the processor 15. The processor 15 for instance may store data captured by the one or more optical devices 11 in the one or more data storage devices 20, e.g. store pictures or videos in the one or more data storage devices 20. In an embodiment, the one or more data storage devices 20 may also include computer-readable code that can be read and executed by the processor 15. For instance, the one or more data storage devices 20 may include program code for execution by the processor 15, which program code implements the desired functionality of the head-mountable device 10.
The one or more data storage devices 20 may be integrated in the head-mountable device 10 in any suitable manner. In an embodiment, at least some of the data storage devices 20 may be integrated in the processor 15.
[0035] The processor 15 is responsive to a microphone 25 for placing in the ear area 3 of the wearer 1 such that the microphone 25 can pick up noises in the oral cavity or mouth 2 of the wearer 1. For instance, the microphone 25 may be shaped such that it can be placed behind the ear 3 as shown in FIG. 1, or alternatively the microphone 25 may be shaped such that it can be placed in the ear 3. Other suitable shapes and locations for the microphone 25 will be apparent to the skilled person.
[0036] In FIG. 2, the microphone 25 is shown as an integral part of the head-mountable device 10. For instance, the microphone 25 may be attached to or integrated in a side arm 14 of a head-mountable device 10 in the form of glasses, such that the microphone 25 is positioned behind the ear 3 of the wearer 1 in normal use of the head-mountable device 10.
In this embodiment, the microphone 25 may be communicatively connected to the processor 15 via link 22, which may be embodied by electrically conductive tracks, e.g. wires, embedded in the side arm 14.
[0037] Alternatively, the microphone 25 may be connected to the head-mountable device 10 by means of a flexible lead, which allows the wearer 1 to position the microphone 25 at a suitable location such as behind or inside the ear 3. In this embodiment, the microphone 25 may be communicatively connected to the processor 15 via a link 22, such as by electrically conductive tracks, e.g. wires, embedded in the flexible lead.
[0038] In yet another embodiment, the microphone 25 may be wirelessly connected to the processor 15 via a wireless link 22. To this end, the microphone 25 includes a wireless transmitter and the head-mountable device 10 includes a wireless receiver communicatively coupled to the processor 15, which wireless transmitter and wireless receiver are arranged to communicate with each other over a wireless link using any suitable wireless communication protocol such as Bluetooth. The wireless receiver may form an integral part of the processor 15 or may be separate from the processor 15.
[0039] In this wireless embodiment, it is not necessary for the microphone 25 to form an integral part of the head-mountable device 10. The microphone 25 in this embodiment may be provided as a separate component, as schematically shown in FIG. 3, where the microphone 25 is depicted outside the boundary of the head-mountable device 10. It should be understood that it is furthermore feasible to provide a head-mountable device 10 without a microphone 25, wherein a separate microphone 25 may be provided that can communicate with the processor 15 over a wired connection, e.g. by plugging the separate microphone 25 into a communications port such as a (micro) USB port or the like of the head-mountable device 10.
[0040] The microphone 25 may communicate the noises captured in the oral cavity 2 of the wearer 1 in digital form to the processor 15. To this end, the microphone 25 may include an analog to digital converter (ADC) that converts a captured analog signal into a digital signal before transmitting a signal to the processor 15. Alternatively, the microphone 25 may be arranged to transmit an analog signal to the head-mountable device 10, in which case the head-mountable device 10, e.g. the processor 15, may include an ADC to perform the necessary conversion.
[0041] In operation, the microphone 25 is arranged to communicate with the processor 15 such that the processor 15 may control the head-mountable device 10. This will be explained in more detail with the aid of FIG. 4, which depicts a flow chart of an embodiment of a method of controlling such a head-mountable device 10, which method initiates in step 110.
[0042] As mentioned before, the microphone 25 is typically positioned such that it captures noises within the oral cavity 2 of the wearer 1 of the head-mountable device 10. In particular, the microphone 25 may capture non-vocal noises within the oral cavity 2, as shown in step 120. The microphone 25 communicates, i.e. transmits, the detected noises to the processor 15 as shown in step 130. The processor 15 analyses the detected noises received from the microphone 25 to determine if the detected noise is a defined non-vocal sound that should be recognized as a user instruction. To this end, the processor 15 may perform a pattern analysis as is well-known per se. For instance, the processor 15 may compare the received noise with a stored pattern to determine if the received noise matches the stored noise pattern. Upon such a pattern match, the processor 15 will have established that the wearer 1 of the head-mountable device 10 has issued a particular instruction to the head-mountable device 10, such as for instance an instruction to capture an image or a stream of images with the at least one optical device 11, e.g. the at least one image sensor.
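By way of illustration only, the FIG. 4 flow (capture in step 120, transmit in step 130, pattern analysis, device operation in step 150) could be realized along the lines of the following minimal Python sketch. The helpers read_frame and capture_image are hypothetical stand-ins for the microphone driver and the optical device 11, and normalized cross-correlation is merely one of many suitable pattern analysis techniques; the patent does not prescribe a specific one.

```python
import numpy as np

FRAME_LEN = 4096         # samples per analysis frame (assumed value)
MATCH_THRESHOLD = 0.7    # similarity above which an instruction is recognized (assumed)

def matches_template(frame: np.ndarray, template: np.ndarray) -> bool:
    """Pattern analysis: compare a captured frame against the stored
    non-vocal sound template via normalized cross-correlation.
    Assumes the template is shorter than the frame."""
    f = (frame - frame.mean()) / (frame.std() + 1e-9)
    t = (template - template.mean()) / (template.std() + 1e-9)
    score = np.correlate(f, t, mode="valid").max() / len(t)
    return score > MATCH_THRESHOLD

def control_loop(read_frame, capture_image, template: np.ndarray) -> None:
    """FIG. 4: step 120 (capture) -> step 130 (transmit to processor)
    -> pattern analysis -> step 150 (perform the device operation)."""
    while True:
        frame = read_frame(FRAME_LEN)          # step 120: microphone capture
        if matches_template(frame, template):  # recognize the user instruction
            capture_image()                    # step 150: e.g. image capture
```

A real implementation would likely operate on overlapping frames and spectral features rather than raw samples, but the control structure would remain the same.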
[0043] For instance, the wearer 1 may have issued an instruction to take a picture or record a video using the head-mountable device 10. Following the recognition of the instruction, i.e. following recognition of the captured non-vocal sound as an instruction, the processor 15 will perform the desired device operation in step 150 before the method terminates in step 160. It will be clear to the skilled person that the performed device operation in step 150 may include additional steps such as the storage of captured image data in the one or more data storage devices 20 and/or the displaying of the captured image data on a heads-up display of the head-mountable device 10.
[0044] In an embodiment, the processor 15 may be pre-programmed to recognize a particular non-vocal sound. In this embodiment, the head-mountable device 10 may be programmed to train the wearer 1 in generating the pre-programmed non-vocal sound, e.g. by including a speaker and playing back the noise to the wearer 1 over the speaker.
Alternatively, the non-vocal sound may be described in a user manual. Other ways of teaching the wearer 1 to produce the appropriate non-vocal sound may be apparent to the skilled person.
[0045] In a particularly advantageous embodiment, the head-mountable device 10 may allow the wearer 1 to define a non-vocal sound of choice to be recognized by the processor 15 as the instruction for performing a particular operation with the head-mountable device 10. The control method in accordance with this embodiment will be explained in further detail with the aid of FIG. 5, which depicts a flow chart of the method according to this embodiment.
[0046] As before, the method is initiated in step 110, after which it is checked in step 112 if the wearer 1 wants to program the head-mountable device 10 by providing the head-mountable device 10 with the non-vocal sound of choice. To this end, the head-mountable device 10 may include an additional user interface such as a button or the like to initiate the programming mode of the head-mountable device 10. Alternatively, the processor 15 may further be configured to recognize voice commands received through the microphone 25, such as "PROGRAM INSTRUCTION" or the like.
[0047] If it is detected in step 112 that the wearer 1 wants to program the head-mountable device 10, the method proceeds to step 114 in which the user-specified non-vocal sound is captured with the microphone 25 and stored by the processor 15. For instance, the processor 15 may store the recorded user-specified non-vocal sound in the data storage device 20, which may form part of the processor 15 as previously explained. In an embodiment, step 114 is performed upon confirmation of the wearer 1 that the captured non-vocal sound is acceptable, for instance by the wearer 1 confirming that step 114 should be performed by providing the appropriate instruction, e.g. via the aforementioned additional user interface. If the head-mountable device 10 is equipped with a display, the wearer 1 may further be assisted in the recording process by the displaying of appropriate instructions on the display of the head-mountable device 10. In this embodiment, step 112 may be repeated until the wearer 1 has indicated that the captured non-vocal sound should be stored, after which the method proceeds to step 114 as previously explained. This is not explicitly shown in FIG. 5.
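Step 114 might, for example, be sketched as follows, reusing the hypothetical read_frame helper from the earlier sketch; the store callable stands in for writing to the data storage device 20 (all names and default values are assumptions, not taken from the patent).

```python
import numpy as np

def program_command(read_frame, store, duration_s: float = 1.0,
                    sample_rate: int = 16000) -> np.ndarray:
    """FIG. 5 step 114: record the wearer's chosen non-vocal sound and
    persist it as the template later used for pattern matching."""
    n_samples = int(duration_s * sample_rate)   # assumed 1 s recording window
    template = read_frame(n_samples)            # capture the user-specified sound
    store("command_template", template)         # keep it in data storage device 20
    return template
```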
[0048] Upon completion of the programming mode, or upon the wearer 1 indicating in step 112 that the head-mountable device 10 does not require programming, e.g. by not invoking the programming mode of the head-mountable device 10, the method proceeds to the previously described step 120 in which the microphone 25 captures sounds originating from the oral cavity 2 of the wearer 1 and transmits the captured sounds to the processor 15 in the previously described step 130.
[0049] In step 140, the processor 15 compares the captured non-vocal sound with the recorded non-vocal sound of step 114, e.g. using the previously explained pattern matching or other suitable comparison techniques that will be immediately apparent to the skilled person. It is checked in step 142 if the captured sound matches the stored sound, after which the method proceeds to the previously described step 150, in which the processor 15 invokes the desired operation on the head-mountable device 10 in case of a match, or returns to step 120 in case the captured non-vocal sound does not match the stored non-vocal sound.
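The patent leaves the comparison technique of step 140 open ("pattern matching or other suitable comparison techniques"). One illustrative option, more tolerant than plain cross-correlation of the wearer producing the sound slightly faster or slower than the stored version, is dynamic time warping (DTW) over short-time energy envelopes, sketched below under the same assumptions as the earlier snippets.

```python
import numpy as np

def energy_envelope(signal: np.ndarray, hop: int = 256) -> np.ndarray:
    """Reduce raw audio to a short-time RMS energy sequence."""
    frames = signal[: len(signal) // hop * hop].reshape(-1, hop)
    return np.sqrt((frames ** 2).mean(axis=1))

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic time warping distance between two 1-D feature sequences;
    lower means a closer match (step 142 would threshold this value)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m] / (n + m)  # length-normalized so one threshold fits all lengths

# Example use in step 142 (the threshold value is an arbitrary illustration):
# matched = dtw_distance(energy_envelope(frame), energy_envelope(template)) < 0.1
```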
[0050] At this point, it is noted that the head-mountable device 10 may of course include further functionality, such as a transmitter and/or a receiver for communicating wirelessly with a remote server such as a wireless access point or a mobile telephony access point. In addition, the head-mountable device 10 may comprise additional user interfaces for operating the head-mountable device 10. For example, an additional user interface may be provided in case the head-mountable device 10 includes a heads-up display in addition to an image capturing device, where the image capturing device may be controlled as previously described and the heads-up display may be controlled using the additional user interface.
Any suitable user interface may be used for this purpose. The head-mountable device 10 may further comprise a communication port, e.g. a (micro) USB port or a proprietary port for connecting the head-mountable device 10 to an external device, e.g. for the purpose of charging the head-mountable device 10 and/or communicating with the head-mountable device 10. The head-mountable device 10 typically further comprises a power source, e.g. a battery, integrated in the head-mountable device 10.
[0051] Moreover, although the concept of the present invention has been explained in particular relation to image capturing using the head-mountable device 10, it should be understood that any type of operation of the head-mountable device 10 may be invoked by the processor 15 upon recognition of a non-vocal sound generated in the oral cavity 2 of the wearer 1.
[0052] The description of the present invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiment was chosen and described in order to best explain the principles of the invention, the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.

Claims (15)

  1. Wearable device (10) comprising: a processor (15) adapted to respond to a user instruction and to perform an operation in response to said instruction, wherein the processor is adapted to communicate with a microphone (25) adapted to capture sounds from the oral cavity (2) of the user (1); wherein the processor is adapted to recognize a non-vocal sound generated by the user in said oral cavity as said user instruction.
  2. The wearable device (10) of claim 1, further comprising the microphone (25).
  3. The wearable device (10) of claim 1 or 2, wherein: the wearable device comprises an image sensor (11) under control of said processor (15); and the processor is adapted to capture an image with said image sensor in response to said instruction.
  4. The wearable device (10) of claim 3, wherein the image sensor (11) forms part of a camera.
  5. The wearable device (10) of any of claims 1-4, wherein the wearable device is a head-mountable device.
  6. The wearable device (10) of claim 5, wherein the head-mountable device comprises glasses that comprise a pair of side arms (14) for supporting the glasses on the head of the user (1), said microphone (25) being positioned at an end of one of said side arms.
  7. The wearable device (10) of claim 6, wherein the microphone (25) is attached to said glasses for positioning in or behind an ear (3) of the user (1).
  8. The wearable device (10) of any of claims 1-7, wherein the non-vocal sound is programmable.
  9. The wearable device (10) of claim 8, wherein the processor (15) is adapted to compare a sound captured by the microphone (25) with a user-programmed sound.
  10. The wearable device (10) of any of claims 1-9, wherein the non-vocal sound is generated using saliva and/or by swallowing.
  11. A method of controlling a wearable device (10) including a processor (15), comprising: capturing (120) a non-vocal sound generated in the oral cavity (2) of a wearer (1) of the wearable device using a microphone (25); transmitting (130) the captured non-vocal sound to said processor; and performing (140) a device operation with said processor in response to the captured non-vocal sound.
  12. The method of claim 11, further comprising: comparing the captured non-vocal sound to a stored non-vocal sound with said processor; and performing (140) said operation if the captured non-vocal sound matches the stored non-vocal sound.
  13. The method of claim 12, further comprising: recording (114) a non-vocal sound with the microphone (25); and storing the recorded non-vocal sound to create the stored non-vocal sound.
  14. The method of any of claims 11-13, wherein the step of performing (140) said operation comprises capturing an image under control of said processor (15).
  15. The method of claim 14, wherein the wearable device (10) comprises a pair of glasses, and wherein said capturing an image comprises capturing said image using an image sensor embedded in said pair of glasses.
GB1413619.6A 2014-07-31 2014-07-31 Smart device control Withdrawn GB2528867A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB1413619.6A GB2528867A (en) 2014-07-31 2014-07-31 Smart device control
US14/803,782 US20160034252A1 (en) 2014-07-31 2015-07-20 Smart device control

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1413619.6A GB2528867A (en) 2014-07-31 2014-07-31 Smart device control

Publications (2)

Publication Number Publication Date
GB201413619D0 GB201413619D0 (en) 2014-09-17
GB2528867A 2016-02-10

Family

ID=51587563

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1413619.6A Withdrawn GB2528867A (en) 2014-07-31 2014-07-31 Smart device control

Country Status (2)

Country Link
US (1) US20160034252A1 (en)
GB (1) GB2528867A (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140364967A1 (en) * 2013-06-08 2014-12-11 Scott Sullivan System and Method for Controlling an Electronic Device
CN105844108A (en) * 2016-04-05 2016-08-10 深圳市智汇十方科技有限公司 Intelligent wearing equipment
ES2794834T3 (en) * 2016-08-02 2020-11-19 Univ Sorbonne Medical device designed to be worn in front of the eyes
CN111477222A (en) * 2019-01-23 2020-07-31 上海博泰悦臻电子设备制造有限公司 Method for controlling terminal through voice and intelligent glasses
CN112558766A (en) * 2020-12-11 2021-03-26 上海影创信息科技有限公司 Method and system for waking up function interface in scene and AR glasses thereof
WO2024018400A2 (en) * 2022-07-20 2024-01-25 Q (Cue) Ltd. Detecting and utilizing facial micromovements

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1027627B1 (en) * 1997-10-30 2009-02-11 MYVU Corporation Eyeglass interface system
US20130283169A1 (en) * 2012-04-24 2013-10-24 Social Communications Company Voice-based virtual area navigation
EP2555536A1 (en) * 2011-08-05 2013-02-06 Samsung Electronics Co., Ltd. Method for controlling electronic apparatus based on voice recognition and motion recognition, and electronic apparatus applying the same
US9223136B1 (en) * 2013-02-04 2015-12-29 Google Inc. Preparation of image capture device in response to pre-image-capture signal
KR102083596B1 (en) * 2013-09-05 2020-03-02 엘지전자 주식회사 Display device and operation method thereof
RU2017106629A (en) * 2014-08-03 2018-09-04 Поготек, Инк. SYSTEM OF WEARABLE CAMERAS AND DEVICES, AND ALSO A WAY OF ATTACHING CAMERA SYSTEMS OR OTHER ELECTRONIC DEVICES TO WEARABLE PRODUCTS

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020077831A1 (en) * 2000-11-28 2002-06-20 Numa Takayuki Data input/output method and system without being notified
JP2005130427A (en) * 2003-10-23 2005-05-19 Asahi Denshi Kenkyusho:Kk Operation switch device
JP2006343965A (en) * 2005-06-08 2006-12-21 Sanyo Electric Co Ltd Operation command input device
US20090296965A1 (en) * 2008-05-27 2009-12-03 Mariko Kojima Hearing aid, and hearing-aid processing method and integrated circuit for hearing aid

Also Published As

Publication number Publication date
GB201413619D0 (en) 2014-09-17
US20160034252A1 (en) 2016-02-04

Similar Documents

Publication Publication Date Title
US11668938B2 (en) Wearable imaging device
US20160034252A1 (en) Smart device control
US10342428B2 (en) Monitoring pulse transmissions using radar
US10175753B2 (en) Second screen devices utilizing data from ear worn device system and method
US9927877B2 (en) Data manipulation on electronic device and remote terminal
US11626127B2 (en) Systems and methods for processing audio based on changes in active speaker
US20220232321A1 (en) Systems and methods for retroactive processing and transmission of words
US11929087B2 (en) Systems and methods for selectively attenuating a voice
US20210350823A1 (en) Systems and methods for processing audio and video using a voice print
CN112947755A (en) Gesture control method and device, electronic equipment and storage medium
US11580727B2 (en) Systems and methods for matching audio and image information
US11432067B2 (en) Cancelling noise in an open ear system
WO2021038295A1 (en) Hearing aid system with differential gain
JP2015092646A (en) Information processing device, control method, and program
CN113572956A (en) Focusing method and related equipment
WO2019102680A1 (en) Information processing device, information processing method, and program
KR102314710B1 (en) System sign for providing language translation service for the hearing impaired person
US20220284915A1 (en) Separation of signals based on direction of arrival
US20210266681A1 (en) Processing audio and video in a hearing aid system
US20230042310A1 (en) Wearable apparatus and methods for approving transcription and/or summary
US11736874B2 (en) Systems and methods for transmitting audio signals with varying delays
US11875791B2 (en) Systems and methods for emphasizing a user's name
US20220261587A1 (en) Sound data processing systems and methods
US20230062598A1 (en) Adjusting an audio transmission when a user is being spoken to by another person
US20240205614A1 (en) Integrated camera and hearing interface device

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)