WO2020248778A1 - Control method, wearable device and storage medium

Control method, wearable device and storage medium

Info

Publication number
WO2020248778A1
Authority
WO
WIPO (PCT)
Prior art keywords
wearable device
information
wearer
time difference
housing
Prior art date
Application number
PCT/CN2020/090980
Other languages
English (en)
French (fr)
Inventor
杨鑫
Original Assignee
Oppo广东移动通信有限公司
Priority date
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Priority to KR1020217039094A (publication of KR20220002605A)
Priority to JP2021571636A (publication of JP7413411B2)
Priority to EP20822293.5A (publication of EP3968320A4)
Publication of WO2020248778A1
Priority to US17/528,889 (publication of US20220076684A1)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00 Stereophonic arrangements
    • H04R5/033 Headphones for stereophonic communication
    • H04R5/0335 Earpiece support, e.g. headbands or neckrests
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 Speaker identification or verification techniques
    • G10L17/22 Interactive procedures; Man-machine interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16 Constructional details or arrangements
    • G06F1/1613 Constructional details or arrangements for portable computers
    • G06F1/163 Wearable computers, e.g. on a belt
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16 Constructional details or arrangements
    • G06F1/1613 Constructional details or arrangements for portable computers
    • G06F1/1633 Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1684 Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 User authentication
    • G06F21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/24 Speech recognition using non-acoustical features
    • G10L15/25 Speech recognition using non-acoustical features using position of the lips, movement of the lips or face analysis
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 Speaker identification or verification techniques
    • G10L17/06 Decision making techniques; Pattern matching strategies
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 Speaker identification or verification techniques
    • G10L17/06 Decision making techniques; Pattern matching strategies
    • G10L17/10 Multimodal systems, i.e. based on the integration of multiple recognition engines or fusion of expert systems
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 Noise filtering
    • G10L21/0216 Noise filtering characterised by the method used for estimating noise
    • G10L21/0232 Processing in the frequency domain
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/02 Casings; Cabinets; Supports therefor; Mountings therein
    • H04R1/04 Structural association of microphone with electric circuitry therefor
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 Circuits for transducers, loudspeakers or microphones
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 Execution procedure of a spoken command
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 Noise filtering
    • G10L2021/02082 Noise filtering the noise being echo, reverberation of the speech
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/20 Arrangements for obtaining desired frequency or directional characteristics
    • H04R1/32 Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
    • H04R1/40 Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
    • H04R1/406 Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers microphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2201/00 Details of transducers, loudspeakers or microphones covered by H04R1/00 but not provided for in any of its subgroups
    • H04R2201/40 Details of arrangements for obtaining desired directional characteristic by combining a number of identical transducers covered by H04R1/40 but not provided for in any of its subgroups
    • H04R2201/401 2D or 3D arrays of transducers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2460/00 Details of hearing devices, i.e. of ear- or headphones covered by H04R1/10 or H04R5/033 but not provided for in any of their subgroups, or of hearing aids covered by H04R25/00 but not provided for in any of its subgroups
    • H04R2460/13 Hearing devices using bone conduction transducers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 Circuits for transducers, loudspeakers or microphones
    • H04R3/005 Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones

Definitions

  • This application relates to the field of electronic technology, and in particular to a control method, wearable device and storage medium.
  • This application provides a control method, wearable device and storage medium.
  • the embodiment of the application provides a method for controlling a wearable device.
  • the wearable device includes an acousto-electric element and a vibration sensor, and the control method includes the steps described below.
  • the wearable device of the embodiment of the present application includes a housing, a processor, an acousto-electric element, and a vibration sensor.
  • the acousto-electric element is disposed in the housing, and the processor is connected to the acousto-electric element and the vibration sensor.
  • The processor is used to obtain the sound information collected by the acousto-electric element and the vibration information collected by the vibration sensor; to determine, based on the sound information and the vibration information, the identity information of the sender of a voice command, the voice command being determined from the sound information; and to control the wearable device to execute the voice command or ignore the voice command according to the identity information.
  • A non-volatile computer-readable storage medium containing computer-executable instructions is also provided; when the computer-executable instructions are executed by one or more processors, the processors are caused to execute the above control method of the wearable device.
  • FIG. 1 is a three-dimensional schematic diagram of a wearable device according to an embodiment of the present application.
  • FIG. 2 is a schematic plan view of a wearable device according to another embodiment of the present application.
  • FIG. 3 is a schematic plan view of a partial structure of a wearable device according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of the adjustment process of the wearable device according to the embodiment of the present application.
  • FIG. 5 is another schematic diagram of the adjustment process of the wearable device according to the embodiment of the present application.
  • FIG. 6 is a schematic plan view of a partial structure of a wearable device according to another embodiment of the present application.
  • FIG. 7 is a schematic plan view of a partial structure of a wearable device according to another embodiment of the present application.
  • FIG. 8 is a schematic flowchart of a method for controlling a wearable device according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a scene of a method for controlling a wearable device according to an embodiment of the present application.
  • FIG. 10 is a schematic diagram of modules of a control device of a wearable device according to an embodiment of the present application.
  • FIG. 11 is a schematic flowchart of a method for controlling a wearable device according to another embodiment of the present application.
  • FIG. 12 is a schematic flowchart of a method for controlling a wearable device according to another embodiment of the present application.
  • FIG. 13 is a schematic flowchart of a method for controlling a wearable device according to another embodiment of the present application.
  • FIG. 14 is a schematic diagram of vibration information and sound information of a method for controlling a wearable device according to an embodiment of the present application.
  • FIG. 15 is a schematic flowchart of a method for controlling a wearable device according to another embodiment of the present application.
  • FIG. 16 is a schematic flowchart of a method for controlling a wearable device according to another embodiment of the present application.
  • FIG. 17 is a schematic flowchart of a method for controlling a wearable device according to another embodiment of the present application.
  • FIG. 18 is a schematic flowchart of a method for controlling a wearable device according to another embodiment of the present application.
  • FIG. 19 is a schematic diagram of another scene of a method for controlling a wearable device according to an embodiment of the present application.
  • FIG. 20 is a schematic diagram of another module of the wearable device according to an embodiment of the present application.
  • the wearable device 100 includes a housing 20, a supporting member 30, a display 40, a refractive member 50 and an adjustment mechanism 60.
  • The housing 20 is the external component of the wearable device 100 and protects and secures the internal components of the wearable device 100. Enclosing the internal components with the housing 20 prevents them from being directly damaged by external factors.
  • The housing 20 can be used to house and fix at least one of the display 40, the refractive component 50, and the adjustment mechanism 60.
  • The housing 20 is formed with a receiving groove 22, and the display 40 and the refractive component 50 are received in the receiving groove 22.
  • the adjustment mechanism 60 is partially exposed from the housing 20.
  • the housing 20 further includes a housing front wall 21, a housing top wall 24, a housing bottom wall 26 and a housing side wall 28.
  • a gap 262 is formed in the middle of the housing bottom wall 26 facing the housing top wall 24.
  • the housing 20 is roughly shaped like a "B".
  • In one example, the housing 20 may be machined from aluminum alloy on a computer numerical control (CNC) machine tool, or may be injection-molded from polycarbonate (PC) or a blend of PC and acrylonitrile-butadiene-styrene (ABS).
  • the specific manufacturing method and specific materials of the housing 20 are not limited here.
  • the supporting member 30 is used to support the wearable device 100.
  • the wearable device 100 may be fixed on the head of the user through the supporting member 30.
  • the supporting member 30 includes a first bracket 32, a second bracket 34 and an elastic band 36.
  • The first bracket 32 and the second bracket 34 are symmetrically arranged about the gap 262. Specifically, the first bracket 32 and the second bracket 34 are rotatably arranged on the edge of the housing 20. When the user does not need to use the wearable device 100, the first bracket 32 and the second bracket 34 can be folded flat against the housing 20 for easy storage; when the user needs to use the wearable device 100, the first bracket 32 and the second bracket 34 can be unfolded to perform their supporting function.
  • A first bending portion 322 is formed at the end of the first bracket 32 away from the housing 20, and the first bending portion 322 is bent toward the housing bottom wall 26. In this way, when the user wears the wearable device 100, the first bending portion 322 can rest on the user's ear, so that the wearable device 100 does not slip off easily.
  • a second bending portion 342 is formed at an end of the second bracket 34 away from the housing 20, and the second bending portion 342 is bent toward the bottom wall 26 of the housing.
  • the explanation and description of the second bending portion 342 can refer to the first bending portion 322, and to avoid redundancy, it will not be repeated here.
  • the elastic band 36 detachably connects the first bracket 32 and the second bracket 34. In this way, when the user wears the wearable device 100 for strenuous activities, the wearable device 100 can be further fixed by the elastic band 36 to prevent the wearable device 100 from loosening or even falling during strenuous activities. It can be understood that in other examples, the elastic band 36 may also be omitted.
  • the display 40 includes an OLED display screen.
  • The OLED display screen does not require a backlight, which helps keep the wearable device 100 thin.
  • The OLED screen also has a large viewing angle and low power consumption, which helps save power.
  • the display 40 may also be an LED display or a Micro LED display. These displays are merely examples and the embodiments of the present application are not limited thereto.
  • the refractive component 50 is arranged on the side of the display 40.
  • the refractive component 50 includes a refractive cavity 52, a light-transmitting liquid 54, a first film layer 56, a second film layer 58 and a side wall 59.
  • the light-transmitting liquid 54 is disposed in the refractive cavity 52.
  • the adjustment mechanism 60 is used to adjust the amount of the light-transmitting liquid 54 to adjust the shape of the refractive member 50.
  • The second film layer 58 is disposed opposite the first film layer 56; the side wall 59 connects the first film layer 56 and the second film layer 58; and the first film layer 56, the second film layer 58, and the side wall 59 together enclose the refractive cavity 52. The adjustment mechanism 60 is used to adjust the amount of the light-transmitting liquid 54 to change the shape of the first film layer 56 and/or the second film layer 58.
  • Here, "changing the shape of the first film layer 56 and/or the second film layer 58" covers three cases: in the first case, the shape of the first film layer 56 is changed and the shape of the second film layer 58 is not; in the second case, the shape of the first film layer 56 is not changed and the shape of the second film layer 58 is; in the third case, the shapes of both film layers are changed.
  • the first case is taken as an example for description.
  • the first film layer 56 may have elasticity. It can be understood that when the amount of the light-transmitting liquid 54 in the refractive cavity 52 changes, the pressure in the refractive cavity 52 also changes, so that the shape of the refractive component 50 changes.
  • When the adjustment mechanism 60 reduces the amount of the light-transmitting liquid 54 in the refractive cavity 52, the pressure in the refractive cavity 52 decreases, the difference between the pressure outside the refractive cavity 52 and the pressure inside it increases, and the refractive cavity 52 becomes more concave.
  • When the adjustment mechanism 60 increases the amount of the light-transmitting liquid 54 in the refractive cavity 52, the pressure in the refractive cavity 52 increases, the difference between the pressure outside the refractive cavity 52 and the pressure inside it decreases, and the refractive cavity 52 becomes more convex.
  • the form of the refractive member 50 can be adjusted by adjusting the amount of the light-transmitting liquid 54.
  • The adjustment mechanism 60 is connected to the refractive component 50.
  • The adjustment mechanism 60 is used to adjust the form of the refractive component 50 to adjust the refractive power of the refractive component 50.
  • the adjustment mechanism 60 includes a cavity 62, a sliding member 64, a driving part 66, an adjustment cavity 68 and a switch 61.
  • the sliding member 64 is slidably arranged in the cavity 62, the driving member 66 is connected to the sliding member 64, the cavity 62 and the sliding member 64 jointly define an adjustment cavity 68, the adjustment cavity 68 is connected to the refractive cavity 52 through the side wall 59, and the driving member 66 is used to drive the sliding member 64 to slide relative to the cavity 62 to adjust the volume of the adjustment cavity 68 to adjust the amount of the light-transmitting liquid 54 in the refractive cavity 52.
  • the volume of the adjusting cavity 68 is adjusted by the sliding member 64 to adjust the amount of the light-transmitting liquid 54 in the refractive cavity 52.
  • When the sliding member 64 slides away from the side wall 59, the volume of the adjustment cavity 68 increases, the pressure in the adjustment cavity 68 decreases, the light-transmitting liquid 54 in the refractive cavity 52 enters the adjustment cavity 68, and the first film layer 56 becomes increasingly concave.
  • When the sliding member 64 slides toward the side wall 59, the volume of the adjustment cavity 68 decreases, the pressure in the adjustment cavity 68 increases, the light-transmitting liquid 54 in the adjustment cavity 68 enters the refractive cavity 52, and the first film layer 56 protrudes increasingly outward.
  • the side wall 59 is formed with a flow channel 591, and the flow channel 591 communicates with the adjusting cavity 68 and the refractive cavity 52.
  • the adjustment mechanism 60 includes a switch 61 provided in the flow channel 591, and the switch 61 is used to control the open and close state of the flow channel 591.
  • In one example, the number of switches 61 is two, and both switches 61 are one-way switches: one switch 61 controls the flow of the light-transmitting liquid 54 from the adjustment cavity 68 to the refractive cavity 52, and the other switch 61 controls the flow of the light-transmitting liquid 54 from the refractive cavity 52 to the adjustment cavity 68.
  • the flow of the light-transmitting liquid 54 between the adjusting cavity 68 and the refractive cavity 52 is realized through the switch 61 to maintain the pressure balance on both sides of the side wall 59.
  • the change in the volume of the adjustment cavity 68 will cause the pressure in the adjustment cavity 68 to change, thereby causing the flow of the transparent liquid 54 between the adjustment cavity 68 and the refractive cavity 52.
  • the switch 61 controls the opening and closing state of the flow channel 591 to control whether the flow of the light-transmitting liquid 54 between the adjusting cavity 68 and the refractive cavity 52 can be realized, thereby controlling the adjustment of the shape of the refractive component 50.
  • When the switch 61 that controls the flow of the light-transmitting liquid 54 from the refractive cavity 52 to the adjustment cavity 68 is opened and the sliding member 64 slides away from the side wall 59, the volume of the adjustment cavity 68 increases, the pressure in the adjustment cavity 68 decreases, the light-transmitting liquid 54 in the refractive cavity 52 enters the adjustment cavity 68 through the switch 61, and the first film layer 56 becomes increasingly concave.
  • When the switch 61 that controls the flow of the light-transmitting liquid 54 from the refractive cavity 52 to the adjustment cavity 68 is closed, then even if the sliding member 64 slides away from the side wall 59 so that the volume of the adjustment cavity 68 increases and the pressure inside it decreases, the light-transmitting liquid 54 in the refractive cavity 52 cannot enter the adjustment cavity 68, and the shape of the first film layer 56 does not change.
  • When the switch 61 that controls the flow of the light-transmitting liquid 54 from the adjustment cavity 68 to the refractive cavity 52 is opened and the sliding member 64 slides toward the side wall 59, the volume of the adjustment cavity 68 decreases, the pressure in the adjustment cavity 68 increases, the light-transmitting liquid 54 in the adjustment cavity 68 enters the refractive cavity 52 through the switch 61, and the first film layer 56 protrudes increasingly outward.
  • When the switch 61 that controls the flow of the light-transmitting liquid 54 from the adjustment cavity 68 to the refractive cavity 52 is closed, then even if the sliding member 64 slides toward the side wall 59 so that the volume of the adjustment cavity 68 decreases and the pressure inside it increases, the light-transmitting liquid 54 in the adjustment cavity 68 cannot enter the refractive cavity 52, and the shape of the first film layer 56 does not change.
  • the driving component 66 can realize its function of driving the sliding member 64 to slide based on various structures and principles.
  • In one example, the driving part 66 includes a knob 662 and a screw 664; the screw 664 connects the knob 662 and the sliding member 64, and the knob 662 is used to drive the screw 664 to rotate, which in turn drives the sliding member 64 to slide relative to the cavity 62.
  • In this way, the sliding member 64 is driven by the knob 662 and the screw 664. Since the screw 664 and the knob 662 cooperate to convert the rotary motion of the knob 662 into the linear motion of the screw 664, when the user rotates the knob 662 the screw 664 drives the sliding member 64 to slide relative to the cavity 62, thereby changing the volume of the adjustment cavity 68 and in turn adjusting the amount of the light-transmitting liquid 54 in the refractive cavity 52.
  • the knob 662 can be exposed from the housing 20 to facilitate the user to rotate.
  • A threaded portion is formed on the knob 662, a matching threaded portion is formed on the screw 664, and the knob 662 and the screw 664 are threadedly connected.
  • While the knob 662 is rotated, the corresponding switch 61 can be opened; in this way, the light-transmitting liquid 54 can flow, and the pressure balance on both sides of the side wall 59 is maintained.
  • the knob 662 rotates clockwise and the sliding member 64 slides away from the side wall 59 to turn on the switch 61 that controls the flow of the light-transmitting liquid 54 from the refractive cavity 52 to the adjustment cavity 68.
  • the knob 662 rotates counterclockwise and the sliding member 64 slides toward the side wall 59 to turn on the switch 61 that controls the flow of the light-transmitting liquid 54 from the adjusting cavity 68 to the refractive cavity 52.
  • In one example, the rotation angle of the knob 662 is not associated with the refractive power of the refractive component 50, and the user only needs to rotate the knob 662 to the position giving the best visual experience.
  • In another example, the rotation angle of the knob 662 and the refractive power of the refractive component 50 may also be correlated.
  • the driving component 66 includes a gear 666 and a rack 668 meshing with the gear 666.
  • the rack 668 connects the gear 666 and the sliding member 64.
  • The gear 666 is used to drive the rack 668 to move, thereby driving the sliding member 64 to slide relative to the cavity 62.
  • In this way, the sliding member 64 is driven by the gear 666 and the rack 668. Since the gear 666 and the rack 668 cooperate to convert the rotary motion of the gear 666 into the linear motion of the rack 668, when the user rotates the gear 666 the rack 668 drives the sliding member 64 to slide relative to the cavity 62, thereby changing the volume of the adjustment cavity 68 and in turn adjusting the amount of the light-transmitting liquid 54 in the refractive cavity 52.
  • the gear 666 can be exposed from the housing 20 to facilitate the rotation of the user.
  • While the gear 666 is rotated, the corresponding switch 61 can be opened; in this way, the light-transmitting liquid 54 can flow, and the pressure balance on both sides of the side wall 59 is maintained.
  • When the gear 666 rotates clockwise, the rack 668 meshing with the gear 666 moves so that the sliding member 64 is pulled away from the side wall 59, and the switch 61 that controls the flow of the light-transmitting liquid 54 from the refractive cavity 52 to the adjustment cavity 68 is opened.
  • When the gear 666 rotates counterclockwise, the rack 668 meshing with the gear 666 moves so that the sliding member 64 is pushed toward the side wall 59, and the switch 61 that controls the flow of the light-transmitting liquid 54 from the adjustment cavity 68 to the refractive cavity 52 is opened.
  • In one example, the rotation angle of the gear 666 is not associated with the refractive power of the refractive component 50, and the user only needs to rotate the gear 666 to the position giving the best visual experience.
  • In another example, the rotation angle of the gear 666 and the refractive power of the refractive component 50 may also be correlated.
  • the driving component 66 includes a driving motor 669, a motor shaft 6691 of the driving motor 669 is connected to the sliding member 64, and the driving motor 669 is used to drive the sliding member 64 to slide relative to the cavity 62.
  • In this example, the sliding member 64 is driven by the driving motor 669.
  • the driving motor 669 may be a linear motor.
  • the linear motor has a simple structure and directly generates linear motion without passing through an intermediate conversion mechanism, which can reduce the motion inertia and improve the dynamic response performance and positioning accuracy.
  • Driving the sliding member 64 with the driving motor 669 makes the driving of the sliding member 64 programmable.
  • The driving motor 669 can be correlated with the refractive power through prior calibration, so the user can directly input the required refractive power and the driving motor 669 automatically operates to drive the sliding member 64 to the corresponding position.
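  • As an illustration of the calibration just described, the mapping from refractive power to motor position can be stored as a lookup table and interpolated. The following is a minimal sketch, assuming a hypothetical per-device CALIBRATION table and a hypothetical move_to() motor-driver call; the names and values are illustrative and do not come from the patent.

```python
# Minimal sketch: drive the sliding member 64 to the position matching a
# requested refractive power, using a calibration table obtained beforehand.
# CALIBRATION values and move_to() are hypothetical placeholders.

CALIBRATION = [   # (refractive power in diopters, motor position in steps)
    (-4.0, 0),
    (-2.0, 250),
    (0.0, 500),
    (2.0, 750),
]

def move_to(steps: float) -> None:
    """Hypothetical motor-driver primitive; a real device would command
    the driving motor here."""
    print(f"moving sliding member to {steps:.0f} steps")

def position_for_power(power: float) -> float:
    """Linearly interpolate the motor position for a requested power,
    clamping to the calibrated range."""
    pts = sorted(CALIBRATION)
    if power <= pts[0][0]:
        return float(pts[0][1])
    if power >= pts[-1][0]:
        return float(pts[-1][1])
    for (p0, s0), (p1, s1) in zip(pts, pts[1:]):
        if p0 <= power <= p1:
            return s0 + (s1 - s0) * (power - p0) / (p1 - p0)
    return float(pts[-1][1])  # unreachable given the clamps above

def set_refractive_power(power: float) -> None:
    # The user directly inputs the required refractive power; the motor
    # then drives the sliding member to the corresponding position.
    move_to(position_for_power(power))

set_refractive_power(-3.0)
```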
  • the driving component 66 may also include an input 6692, and the input 6692 includes but is not limited to devices such as buttons, knobs, or touch screens.
  • In one example, the input 6692 is a button; two buttons are respectively disposed on opposite sides of the cavity 62 and can be exposed from the housing 20 so that the user can press them easily.
  • The buttons can control the operating time of the driving motor 669 according to the number or duration of presses, thereby controlling the sliding distance of the sliding member 64.
  • While a button is pressed, the corresponding switch 61 can be opened; in this way, the light-transmitting liquid 54 can flow, and the pressure balance on both sides of the side wall 59 is maintained.
  • When the user presses one of the two buttons, the motor shaft 6691 extends and pushes the sliding member 64 toward the side wall 59, and the switch 61 that controls the flow of the light-transmitting liquid 54 from the adjustment cavity 68 to the refractive cavity 52 is opened.
  • When the user presses the other of the two buttons, the motor shaft 6691 retracts and pulls the sliding member 64 away from the side wall 59, and the switch 61 that controls the flow of the light-transmitting liquid 54 from the refractive cavity 52 to the adjustment cavity 68 is opened.
  • It should be noted that the structure of the refractive component 50 is not limited to the above refractive cavity 52, light-transmitting liquid 54, first film layer 56, second film layer 58, and side wall 59; any structure that allows the refractive component 50 to change its refractive power may be used.
  • the refractive component 50 includes a plurality of lenses and a driving member, and the driving member is used to drive each lens from the storage position to the refractive position.
  • the driving member can also drive each lens moved to the refractive position to move on the refractive axis, thereby changing the refractive power of the refractive component 50.
  • The form of the refractive component described above includes both the shape and the state of the refractive component: the structure formed by the refractive cavity 52, the light-transmitting liquid 54, the first film layer 56, the second film layer 58, and the side wall 59 changes the refractive power by changing the shape of the first film layer 56 and/or the second film layer 58, whereas the structure formed by multiple lenses and a driving member changes the refractive power by changing the state of the lenses.
  • the embodiment of the present application provides a wearable device 100, which includes a display 40, a refractive component 50, and an adjustment mechanism 60.
  • the refractive member 50 is provided on the side of the display 40.
  • The adjustment mechanism 60 is connected to the refractive component 50, and the adjustment mechanism 60 is used to adjust the form of the refractive component 50 to adjust the refractive power of the refractive component 50.
  • In this way, the form of the refractive component 50 is adjusted by the adjustment mechanism 60 to adjust its refractive power, so that users with refractive errors can see the image displayed on the display 40 clearly, which helps improve the user experience.
  • Moreover, the refractive component 50 and the adjustment mechanism 60 can correct refractive power continuously, so that users with different degrees of refractive error can all wear the device flexibly.
  • In addition, the refractive component 50 and the adjustment mechanism 60 are small in volume and do not affect the wearing experience of the wearable device 100; users also do not need to buy many sets of lenses, which reduces cost.
  • an embodiment of the present application provides a method for controlling the wearable device 100.
  • the wearable device 100 includes an acoustic and electric element 110 and a vibration sensor 120.
  • Control methods include:
  • Step S12: acquire the sound information collected by the acousto-electric element 110 and the vibration information collected by the vibration sensor 120;
  • Step S14: determine the identity information of the sender of the voice command according to the sound information and the vibration information, the voice command being determined from the sound information;
  • Step S16: control the wearable device 100 to execute the voice command or ignore the voice command according to the identity information.
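  • Read as pseudocode, steps S12 to S16 form a short control loop: acquire both signals, determine the sender's identity, then execute or ignore the command. Below is a minimal sketch of that flow, assuming the collected information arrives as simple records with timestamps and an already-extracted command; the identity test shown is the start-time-difference check detailed later, and all names are illustrative rather than taken from the patent.

```python
# Minimal sketch of steps S12/S14/S16 with illustrative record types.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class SoundInfo:
    start: float    # start time of the sound information, seconds
    end: float      # end time of the sound information, seconds
    command: str    # voice command determined from the sound information

@dataclass
class VibrationInfo:
    start: float    # start time of the vibration information, seconds
    end: float      # end time of the vibration information, seconds

def determine_identity(sound: SoundInfo, vib: VibrationInfo,
                       threshold_s: float = 2.0) -> str:
    """Step S14: the sender is the wearer if sound and vibration started
    at (nearly) the same time."""
    return "wearer" if abs(sound.start - vib.start) <= threshold_s else "non-wearer"

def handle_window(sound: Optional[SoundInfo], vib: Optional[VibrationInfo],
                  execute: Callable[[str], None]) -> None:
    """Steps S12-S16 for one acquisition window."""
    if sound is None or vib is None:
        return                      # nothing actionable in this window
    if determine_identity(sound, vib) == "wearer":
        execute(sound.command)      # S16: execute the voice command
    # otherwise the voice command is ignored
```

  • For instance, handle_window(SoundInfo(0.0, 1.5, "Open Document A"), VibrationInfo(0.1, 1.4), print) would execute the command, since the two start times differ by far less than the threshold.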
  • an embodiment of the present application provides a control device 10 of a wearable device 100.
  • the wearable device 100 includes an acousto-electric element 110 and a vibration sensor 120.
  • the control device 10 includes an acquisition module 12, a determination module 14 and a control module 16.
  • The acquisition module 12 is used to acquire the sound information collected by the acousto-electric element 110 and the vibration information collected by the vibration sensor 120;
  • the determination module 14 is used to determine the identity information of the sender of the voice command based on the sound information and vibration information, and the voice command is determined by the voice information;
  • the control module 16 is used to control the wearable device 100 to execute the voice command or ignore the voice command based on the identity information.
  • the embodiment of the present application provides a wearable device 100.
  • the wearable device includes a housing 20, a processor 101, an acousto-electric element 110, and a vibration sensor 120.
  • the acousto-electric element 110 is arranged in the housing 20.
  • the processor 101 is connected to the acousto-electric element 110 and the vibration sensor 120.
  • The processor 101 is used to obtain the sound information collected by the acousto-electric element 110 and the vibration information collected by the vibration sensor 120; to determine the identity information of the sender of the voice command based on the sound information and the vibration information, the voice command being determined from the sound information; and to control the wearable device 100 to execute the voice command or ignore the voice command according to the identity information.
  • The control method of the wearable device 100, the control device 10, and the wearable device 100 determine the identity information of the sender of the voice command according to the sound information and the vibration information, and thereby control the wearable device 100 to execute or ignore the voice command. This avoids false triggering of the wearable device 100 and makes the control of the wearable device 100 more accurate.
  • the wearable device 100 may be an electronic device such as electronic glasses, electronic clothes, electronic bracelets, electronic necklaces, electronic tattoos, watches, earphones, pendants, and headphones.
  • The wearable device 100 may also be an electronic device such as a smart watch, or a head-mounted display (HMD).
  • The embodiments of the present application take electronic glasses as an example of the wearable device 100 to explain the control method of the embodiments of the present application; this does not mean a limitation on the specific form of the wearable device 100.
  • In some embodiments, the number of acousto-electric elements 110 is three; the housing 20 includes a housing front wall 21, and the three acousto-electric elements 110 are respectively arranged at a first preset position 211, a second preset position 212, and a third preset position 213 of the housing front wall 21.
  • the housing 20 includes a housing side wall 28 and a housing top wall 24.
  • the number of the housing side walls 28 is two.
  • the two housing side walls 28 are respectively arranged on opposite sides of the housing top wall 24.
  • The first preset position 211 is close to the housing top wall 24 and one of the housing side walls 28, and the second preset position 212 is close to the housing top wall 24 and the other housing side wall 28.
  • the housing 20 includes a housing top wall 24 and a housing bottom wall 26.
  • the housing top wall 24 and the housing bottom wall 26 are respectively arranged on opposite sides of the housing front wall 21.
  • the middle of the housing bottom wall 26 forms a gap 262 toward the housing top wall 24.
  • the third preset position 213 is close to the gap 262.
  • In this way, the three acousto-electric elements 110 are distributed more dispersedly, which not only makes the appearance of the wearable device 100 attractive, but also improves the de-reverberation effect when the output information of the acousto-electric elements 110 is subsequently de-reverberated to obtain the sound information.
  • the acousto-electric element 110 is a microphone.
  • the number of acousto-electric elements 110 is three, and the three acousto-electric elements 110 are respectively arranged at the first preset position 211, the second preset position 212, and the third preset position 213.
  • the number of acousto-electric elements 110 may be 1, 2, 4, 6, or other numbers.
  • the acousto-electric element 110 may be arranged in the first bracket 32, the second bracket 34 or other positions of the wearable device 100.
  • the specific number and specific location of the acousto-electric elements 110 are not limited here.
  • the wearable device 100 includes a supporting member 30 connected to the housing 20.
  • the supporting member 30 includes a first bracket 32 and a second bracket 34, and the vibration sensor 120 is disposed on the first bracket 32 and/or the second bracket 34.
  • The end of the first bracket 32 away from the casing 20 is formed with a first bending portion 322, and the end of the second bracket 34 away from the casing 20 is formed with a second bending portion 342.
  • the casing 20 includes a casing bottom wall 26.
  • The first bending portion 322 and the second bending portion 342 are bent toward the housing bottom wall 26, and the vibration sensor 120 is disposed on the first bending portion 322 and/or the second bending portion 342.
  • the vibration sensor 120 is a gyroscope.
  • the number of the vibration sensor 120 is one, and the vibration sensor 120 is disposed on the first bending portion 322 of the first bracket 32 of the wearable device 100.
  • the number of vibration sensors 120 is two, one of the vibration sensors 120 is disposed at the first bending part 322, and the other vibration sensor 120 is disposed at the second bending part 342.
  • the number of the vibration sensor 120 is one, and the vibration sensor 120 may also be disposed at the second bending portion 342.
  • Arranging the vibration sensor 120 on a part of the supporting member 30 that contacts the user's head, such as the first bending portion 322 and the second bending portion 342, enables the vibration sensor 120 to collect more, and more accurate, vibration information, so that control of the wearable device 100 based on the vibration information is more accurate.
  • the number of vibration sensors 120 may be 3, 4, 5, or other numbers.
  • the vibration sensor 120 may be provided in other positions of the wearable device 100.
  • the specific number and specific positions of the vibration sensors 120 are not limited here.
  • A "voice command" herein may refer to information that can be recognized by the wearable device 100 and can control the wearable device 100.
  • "Sound information" may refer to information from which a voice command can be extracted.
  • "Sound information" may include the start time of the sound information, the end time of the sound information, voiceprint information, and so on.
  • "Vibration information" may include the start time of the vibration information, the end time of the vibration information, the frequency and amplitude of the vibration, and so on.
  • "Identity information" can refer to the inherent identity of the sender (for example, an identity uniquely determined by an ID number), or to an identity determined by factors such as position, behavior, and status (for example, the owner of the wearable device 100, a non-owner, the wearer of the wearable device 100, or a non-wearer).
  • In one example, the voice command is: "Change the power-on password to 123456".
  • According to the voiceprint information of the sound information, it can be determined that the sender of the voice command is the owner of the wearable device 100; according to the vibration information, it can be determined that the voice command is issued by the wearer of the wearable device 100. That is to say, according to the sound information and the vibration information, the identity information of the sender of the voice command can be determined as "owner" and "wearer".
  • At this time, the wearable device 100 can be controlled to change the power-on password to "123456". This prevents a user who owns multiple wearable devices 100 from changing the power-on passwords of the other devices that are not being worn when intending to modify only the password of the wearable device 100 being worn.
  • In another example, the voice command is: "Change the power-on password to 123456".
  • According to the voiceprint information of the sound information, it can be determined that the sender of the voice command is not the owner of the wearable device 100; according to the vibration information, it can be determined that the voice command is issued by the wearer of the wearable device 100. That is to say, according to the sound information and the vibration information, the identity information of the sender of the voice command can be determined as "non-owner" and "wearer".
  • At this time, the wearable device 100 can be controlled to ignore the voice command. In this way, when the wearable device 100 is worn by a user who is not the owner, that user is prevented from taking the opportunity to tamper with the power-on password.
  • the identity information includes a wearer and a non-wearer
  • step S16 includes:
  • Step S162: when the identity information is the wearer, control the wearable device 100 to execute the voice command;
  • Step S164: when the identity information is a non-wearer, control the wearable device 100 to ignore the voice command.
  • control module 16 is used to control the wearable device 100 to execute voice commands when the identity information is a wearer; and used to control the wearable device 100 to ignore voice commands when the identity information is a non-wearer.
  • the processor 101 is used to control the wearable device 100 to execute voice commands when the identity information is a wearer; and used to control the wearable device 100 to ignore the voice commands when the identity information is a non-wearer.
  • In this way, the wearable device 100 can be controlled to execute or ignore voice commands according to the identity information. It can be understood that when the wearable device 100 is in a noisy environment, if the wearer is not distinguished from non-wearers as the sender of a voice command, the wearable device 100 is easily triggered by other sounds in the environment. In this embodiment, the wearable device 100 is controlled to execute the voice command only when it is determined that the sender of the voice command is the wearer, which improves the adaptability of the wearable device 100 to the environment and allows the wearable device 100 to work normally even in a chaotic environment.
  • three users wear three wearable devices 100 respectively, and control the wearable devices 100 they wear through voice.
  • User No. 1 wears the No. 1 wearable device 100 and issues the voice command "Open Document A";
  • User No. 2 wears the No. 2 wearable device 100 and issues the voice command "Open Document B";
  • User No. 3 wears the No. 3 wearable device 100 and issues the voice command "Open Document C".
  • For the No. 1 wearable device 100, it can be determined from the sound information and the vibration information that the sender of the voice command "Open Document A" is the wearer, that is, user No. 1, while the senders of the voice commands "Open Document B" and "Open Document C" are non-wearers. At this time, the No. 1 wearable device 100 executes the voice command "Open Document A" and ignores the voice commands "Open Document B" and "Open Document C".
  • For the No. 2 wearable device 100, it can be determined from the sound information and the vibration information that the sender of the voice command "Open Document B" is the wearer, that is, user No. 2, while the senders of the voice commands "Open Document A" and "Open Document C" are non-wearers. At this time, the No. 2 wearable device 100 executes the voice command "Open Document B" and ignores the voice commands "Open Document A" and "Open Document C".
  • For the No. 3 wearable device 100, it can be determined from the sound information and the vibration information that the sender of the voice command "Open Document C" is the wearer, that is, user No. 3, while the senders of the voice commands "Open Document A" and "Open Document B" are non-wearers. At this time, the No. 3 wearable device 100 executes the voice command "Open Document C" and ignores the voice commands "Open Document A" and "Open Document B".
  • In this way, even when multiple users speak at once, each wearable device 100 accurately executes only the voice commands corresponding to its own wearer.
  • step S14 includes:
  • Step S142: determine the time difference between the sound information and the vibration information;
  • Step S144: determine the identity information according to the time difference.
  • the determining module 14 is used to determine the time difference between the sound information and the vibration information; and used to determine the identity information according to the time difference.
  • the processor 101 is used to determine the time difference between sound information and vibration information; and used to determine the identity information according to the time difference.
  • the identity information of the sender of the voice command is determined based on the voice information and vibration information. It can be understood that the time when the sound is generated is the same as the time when the vocal cords start to vibrate, and both the propagation of sound and the propagation of vibration require time. Therefore, the identity information of the sender of the voice command can be determined based on the time difference between the voice information and the vibration information.
  • the identity information includes the wearer and the non-wearer
  • the time difference includes the start time difference T1
  • step S142 includes:
  • Step S1422: determine the start time difference T1 according to the start time t2 of the sound information and the start time t1 of the vibration information;
  • Step S144 includes:
  • Step S1442: when the start time difference T1 is less than or equal to a preset time threshold, determine the identity information as the wearer;
  • Step S1444: when the start time difference T1 is greater than the time threshold, determine the identity information as a non-wearer.
  • The determining module 14 is used to determine the start time difference T1 according to the start time t2 of the sound information and the start time t1 of the vibration information; to determine the identity information as the wearer when the start time difference T1 is less than or equal to a preset time threshold; and to determine the identity information as a non-wearer when the start time difference T1 is greater than the time threshold.
  • The processor 101 is configured to determine the start time difference T1 according to the start time t2 of the sound information and the start time t1 of the vibration information; to determine the identity information as the wearer when the start time difference T1 is less than or equal to a preset time threshold; and to determine the identity information as a non-wearer when the start time difference T1 is greater than the time threshold.
  • In this way, the identity information is determined according to the start time difference T1.
  • the time threshold may be obtained through experiments in advance and stored in the wearable device 100.
  • It can be understood that the vibration information collected by the vibration sensor 120 comes from the tiny vibrations of the facial muscles that occur in synchrony with the vocal cord vibration. Therefore, the vibration information reflects information about the wearer of the wearable device 100, and the start time t1 of the vibration information can be inferred to be the moment when the wearer begins to speak.
  • Sound, however, is transmitted through the air, and the sound information collected by the acousto-electric element 110 may reflect information about the wearer or about a non-wearer. Therefore, when the start time difference T1 between the start time t1 of the vibration information and the start time t2 of the sound information is less than or equal to the preset time threshold, it can be inferred that the vibration and the sound started at the same time, so as to determine that the voice command determined from the sound information was issued by the wearer. When the start time difference T1 is greater than the time threshold, it can be inferred that the vibration and the sound did not start at the same time and the sound was emitted by a nearby sound source, thereby determining that the voice command determined from the sound information was issued by a non-wearer.
  • In one example, the time threshold is 2 s, the start time t1 of the vibration information is 0 s, and the start time t2 of the sound information is 1 s; the start time difference T1 is 1 s, which is less than the time threshold. It can be determined that the vibration and the sound started at the same time, and the identity information of the sender of the voice command is determined as the wearer.
  • In another example, the time threshold is 2 s, the start time t1 of the vibration information is 0 s, and the start time t2 of the sound information is 3 s; the start time difference T1 is 3 s, which is greater than the time threshold. It can be determined that the sound was emitted by a nearby sound source, and the identity information of the sender of the voice command is determined as a non-wearer.
  • the identity information includes the wearer and the non-wearer
  • the time difference includes the end time difference T2
  • step S142 includes:
  • Step S1424: determine the end time difference T2 according to the end time t3 of the sound information and the end time t4 of the vibration information;
  • Step S144 includes:
  • Step S1446: when the end time difference T2 is less than or equal to the preset time threshold, determine the identity information as the wearer;
  • Step S1448: when the end time difference T2 is greater than the time threshold, determine the identity information as a non-wearer.
  • The determining module 14 is used to determine the end time difference T2 according to the end time t3 of the sound information and the end time t4 of the vibration information; to determine the identity information as the wearer when the end time difference T2 is less than or equal to the preset time threshold; and to determine the identity information as a non-wearer when the end time difference T2 is greater than the time threshold.
  • The processor 101 is configured to determine the end time difference T2 according to the end time t3 of the sound information and the end time t4 of the vibration information; to determine the identity information as the wearer when the end time difference T2 is less than or equal to the preset time threshold; and to determine the identity information as a non-wearer when the end time difference T2 is greater than the time threshold.
  • In this way, the identity information is determined according to the end time difference T2.
  • the principle and explanation for determining the identity information according to the end time difference T2 can be referred to the above-mentioned part of determining the identity information according to the start time difference T1. In order to avoid redundancy, it will not be repeated here.
  • In one example, the time threshold is 2 s, the end time t4 of the vibration information is 0 s, and the end time t3 of the sound information is 1 s; the end time difference T2 is 1 s, which is less than the time threshold. The identity information of the sender of the voice command is determined as the wearer.
  • In another example, the time threshold is 2 s, the end time t4 of the vibration information is 0 s, and the end time t3 of the sound information is 3 s; the end time difference T2 is 3 s, which is greater than the time threshold. The identity information of the sender of the voice command is determined as a non-wearer.
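  • The start-time and end-time checks above can be expressed compactly in code. The following is a minimal sketch using the 2 s threshold from the examples; note that the patent presents the T1 test and the T2 test as separate embodiments, so combining them side by side here is only one possible reading.

```python
# Minimal sketch of steps S1422/S1442/S1444 (start time difference T1)
# and S1424/S1446/S1448 (end time difference T2). Times are in seconds.

TIME_THRESHOLD = 2.0  # preset time threshold, as in the examples above

def identity_from_start(t1_vib_start: float, t2_sound_start: float) -> str:
    T1 = abs(t2_sound_start - t1_vib_start)   # start time difference T1
    return "wearer" if T1 <= TIME_THRESHOLD else "non-wearer"

def identity_from_end(t4_vib_end: float, t3_sound_end: float) -> str:
    T2 = abs(t3_sound_end - t4_vib_end)       # end time difference T2
    return "wearer" if T2 <= TIME_THRESHOLD else "non-wearer"

# The worked examples above: a 1 s difference yields "wearer",
# a 3 s difference yields "non-wearer".
assert identity_from_start(0.0, 1.0) == "wearer"
assert identity_from_start(0.0, 3.0) == "non-wearer"
assert identity_from_end(0.0, 1.0) == "wearer"
assert identity_from_end(0.0, 3.0) == "non-wearer"
```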
  • control method includes:
  • Step S18: if the sound information is collected within the preset time period and the vibration information is not collected, control the wearable device 100 to ignore the sound information.
  • control module 16 is configured to control the wearable device 100 to ignore the sound information when the sound information is collected within the preset time period and the vibration information is not collected.
  • the processor 101 is configured to control the wearable device 100 to ignore the sound information when the sound information is collected within the preset time period and the vibration information is not collected.
  • In this way, control of the wearable device 100 is realized when the sound information is collected within the preset time period and the vibration information is not collected. It is understandable that when a user wears the electronic glasses, besides the user's own voice, other sounds in the environment, such as TV sound, broadcast sound, and non-wearers' voices, may also cause the acousto-electric element 110 to collect sound information. However, from the absence of vibration information it can be inferred that the user did not make a sound. Therefore, when the sound information is collected within the preset time period and the vibration information is not collected, the wearable device 100 can be controlled to ignore the sound information to prevent false triggering of the wearable device 100.
  • In one example, the preset duration is 10 s; the sound from a TV causes the acousto-electric element 110 to collect sound information, but no vibration information is collected within the 10 s. At this time, it can be inferred that the wearer did not issue a voice command, and the sound information can be ignored.
  • control method includes:
  • Step S19: if the sound information is not collected within the preset time period and the vibration information is collected, control the wearable device 100 to ignore the vibration information.
  • control module 16 is configured to control the wearable device 100 to ignore the vibration information when the sound information is not collected within the preset time period and the vibration information is collected.
  • the processor 101 is configured to control the wearable device 100 to ignore the vibration information when the sound information is not collected within the preset time period and the vibration information is collected.
  • in this way, the wearable device 100 is controlled in the case where no sound information is collected within the preset duration but vibration information is collected. It can be understood that when the user wears the electronic glasses, besides the vibration of the vocal cords, chewing, the pulsing of blood vessels, or an impact may also cause the vibration sensor 120 to collect vibration information. In these cases, the acousto-electric element 110 produces no output information, or, even if its output information is processed, no sound information from which a voice command could be extracted can be obtained. Therefore, when no sound information is collected within the preset duration and vibration information is collected, the wearable device 100 can be controlled to ignore the vibration information.
  • the preset duration is 10 s; the pulsing of the user's blood vessels causes the vibration sensor 120 to collect vibration information, but within 10 s the acousto-electric element 110 produces no output information and no sound information is collected. It can then be inferred that the wearer did not issue a voice command, and the vibration information can be ignored.
  • the preset duration is 10 s.
  • the user's chewing causes the vibration sensor 120 to collect vibration information.
  • within 10 s the acousto-electric element 110 produces output information, but no sound information from which a voice command could be extracted can be obtained from it; that is, no sound information is collected. It can then be inferred that the wearer did not issue a voice command, and the vibration information can be ignored; a sketch of this gating follows.
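Steps S18 and S19 together form a simple presence gate. The sketch below captures both rules; the function name is an assumption, and `None` stands for "nothing collected within the preset duration":

```python
# Gate for steps S18/S19: a voice command is considered only when both the
# sound channel and the vibration channel produced something within the
# preset duration; otherwise the lone channel is ignored.
def gate_inputs(sound_info, vibration_info):
    if sound_info is not None and vibration_info is None:
        return None  # step S18: sound without vibration -> ignore the sound
    if sound_info is None and vibration_info is not None:
        return None  # step S19: vibration without sound -> ignore the vibration
    if sound_info is None:
        return None  # nothing collected at all
    return sound_info, vibration_info  # both present: go on to identity check
```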
  • there are a plurality of acousto-electric elements 110, and the control method includes:
  • Step S11: de-reverberation processing is performed on the output information of the plurality of acousto-electric elements 110 to obtain the sound information.
  • the acquisition module 12 is used to perform de-reverberation processing on the output information of the multiple acousto-electric elements 110 to obtain sound information.
  • the processor 101 is configured to perform de-reverberation processing on the output information of the multiple acoustoelectric elements 110 to obtain sound information.
  • in this way, the sound information is obtained from the output information of the acousto-electric elements 110.
  • a plurality of acousto-electric elements 110 form an array, and their output information can be de-reverberated by dedicated algorithms to obtain the sound information, for example:
  • methods based on blind signal enhancement (blind signal enhancement approach)
  • methods based on beamforming (beamforming based approach; a minimal delay-and-sum sketch follows this list)
  • methods based on inverse filtering (an inverse filtering approach)
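As one hedged illustration of the beamforming family named above, a delay-and-sum beamformer aligns the microphone channels toward an assumed source direction and averages them, so coherent speech reinforces while diffuse reverberation partially cancels. The array geometry, sample rate, and all names below are assumptions for the sketch, not the patent's algorithm:

```python
import numpy as np

FS = 16_000   # sample rate in Hz -- illustrative assumption
C = 343.0     # propagation speed of sound in air, m/s

def delay_and_sum(signals, mic_positions, direction):
    """Align and average array channels steered toward `direction`.

    signals: (n_mics, n_samples) float array
    mic_positions: (n_mics, 3) microphone positions in metres
    direction: 3-vector pointing from the array toward the source
    """
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    # Relative arrival times of the plane wave: microphones closer to the
    # source (larger projection onto d) receive the signal earlier.
    arrival = -(np.asarray(mic_positions, dtype=float) @ d) / C
    shifts = np.round((arrival - arrival.min()) * FS).astype(int)
    n = signals.shape[1] - shifts.max()
    # Drop each late channel's leading samples so the wavefront lines up,
    # then average: speech adds coherently, reverberation partially cancels.
    aligned = np.stack([s[k:k + n] for s, k in zip(signals, shifts)])
    return aligned.mean(axis=0)
```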
  • multiple acousto-electric elements 110 forming an array can also realize sound source localization.
  • when the issuer of the voice command is a non-wearer, the source and position of the voice command can be further determined.
  • the information collected by the array of acousto-electric elements 110 can be used to calculate the angle and distance of the sound source, so as to track the sound source and subsequently pick up speech directionally.
  • the acousto-electric element 110 is a microphone; there are three microphones, and the position coordinates of the three microphones are denoted o1, o2, and o3, respectively.
  • the issuer serves as the sound source 200, and the wearable device 100 receives voice commands from the sound source 200.
  • because the three microphones are at different positions, the sound waves emitted by the sound source 200 reach each microphone at a different time. Suppose the sound wave emitted by the sound source 200 takes t1, t2, and t3, respectively, to reach the microphones.
  • the distances from the sound source 200 to the microphones are then vt1, vt2, and vt3, respectively, where v is the propagation speed of sound in air.
  • a spherical surface can be drawn around each of the three microphones as an origin, with the distance from the sound source 200 to the corresponding microphone as the radius.
  • the intersection point of the three spheres is then calculated; the intersection of the three spheres is the position of the sound source 200.
  • this method can be implemented by an algorithm, as sketched below.
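The three-sphere construction can be written down directly. The sketch below uses illustrative coordinates and assumes the three microphones lie in a common plane, which leaves a front/back mirror ambiguity that is resolved here by picking the solution in front of the array (z ≥ 0) — an assumption, since the text does not say how the ambiguity is handled. Subtracting the sphere equations pairwise linearizes the problem:

```python
import numpy as np

V = 343.0  # propagation speed of sound in air, m/s

def locate_source(mics, arrival_times):
    """mics: (3, 3) microphone positions, all with z == 0;
    arrival_times: times t1, t2, t3 for the wave to reach each microphone."""
    o1, o2, o3 = np.asarray(mics, dtype=float)
    r = V * np.asarray(arrival_times, dtype=float)  # sphere radii vt1..vt3
    # Subtracting sphere equations pairwise gives linear equations:
    # 2*(oi - o1) . x = |oi|^2 - |o1|^2 - ri^2 + r1^2
    A = 2.0 * np.array([o2 - o1, o3 - o1])
    b = np.array([
        o2 @ o2 - o1 @ o1 - r[1] ** 2 + r[0] ** 2,
        o3 @ o3 - o1 @ o1 - r[2] ** 2 + r[0] ** 2,
    ])
    xy = np.linalg.solve(A[:, :2], b)               # mics in z = 0 plane
    # Recover |z| from the first sphere and choose the z >= 0 solution.
    z2 = r[0] ** 2 - np.sum((xy - o1[:2]) ** 2)
    return np.array([xy[0], xy[1], np.sqrt(max(z2, 0.0))])

mics = [(0, 0, 0), (0.1, 0, 0), (0, 0.1, 0)]        # o1, o2, o3 in metres
src = np.array([0.5, 0.3, 0.2])                     # ground-truth source
times = [np.linalg.norm(src - m) / V for m in mics]
print(locate_source(mics, times))                   # ~ [0.5, 0.3, 0.2]
```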
  • an embodiment of the present application provides a wearable device 100.
  • the wearable device 100 includes a processor 101 and a memory 102.
  • the memory 102 stores one or more programs, and when the programs are executed by the processor 101, the control method of the wearable device 100 of any one of the foregoing embodiments is implemented.
  • Step S12: acquire the sound information collected by the acousto-electric element 110 and the vibration information collected by the vibration sensor 120;
  • Step S14: determine the identity information of the issuer of the voice command based on the sound information and the vibration information, the voice command being determined from the sound information;
  • Step S16: control the wearable device 100 to execute the voice command or ignore the voice command according to the identity information.
  • the embodiment of the present application also provides a computer-readable storage medium.
  • a non-volatile computer-readable storage medium containing computer-executable instructions.
  • the processor 101 is caused to execute the control method of any one of the foregoing embodiments.
  • the wearable device 100 and the computer-readable storage medium of the embodiments of the present application determine the identity information of the issuer of the voice command based on the sound information and the vibration information, and thereby control the wearable device 100 to execute or ignore the voice command; this can avoid false triggering of the wearable device 100 and makes the control of the wearable device 100 more accurate.
  • FIG. 20 is a schematic diagram of internal modules of the wearable device 100 in an embodiment.
  • the wearable device 100 includes a processor 101, a memory 102 (for example, a non-volatile storage medium), an internal memory 103, a display device 104, and an input device 105 connected through a system bus 109.
  • the processor 101 can be used to provide calculation and control capabilities, and support the operation of the entire wearable device 100.
  • the internal memory 103 of the wearable device 100 provides an environment for the execution of computer readable instructions in the memory 102.
  • the display device 104 of the wearable device 100 may be the display 40 provided on the wearable device 100; the input device 105 may be the acousto-electric element 110 and the vibration sensor 120 provided on the wearable device 100, a button, trackball, or touchpad provided on the wearable device 100, or an external keyboard, touchpad, or mouse.
  • the wearable device 100 may be a smart bracelet, smart watch, smart helmet, electronic glasses, etc.
  • the structure shown in the figure is only a schematic diagram of part of the structure related to the solution of the present application, and does not constitute a limitation on the wearable device 100 to which the solution of the present application is applied.
  • the specific wearable device 100 may include more or fewer components than shown in the figures, combine certain components, or have a different component arrangement.
  • the program can be stored in a non-volatile computer-readable storage medium.
  • the storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Multimedia (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Computer Security & Cryptography (AREA)
  • Otolaryngology (AREA)
  • Business, Economics & Management (AREA)
  • Game Theory and Decision Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • User Interface Of Digital Computer (AREA)
  • Prostheses (AREA)

Abstract

A control method for a wearable device, a wearable device (100), and a storage medium. The wearable device (100) includes an acousto-electric element (110) and a vibration sensor (120). The control method includes: (step S12) acquiring sound information collected by the acousto-electric element (110) and vibration information collected by the vibration sensor (120); (step S14) determining identity information of the issuer of a voice command according to the sound information and the vibration information, the voice command being determined from the sound information; and (step S16) controlling the wearable device (100) to execute or ignore the voice command according to the identity information.

Description

Control method, wearable device, and storage medium
Priority Information
This application claims priority to and the benefit of Chinese patent application No. 201910496570.7, filed with the China National Intellectual Property Administration on June 10, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of electronic technology, and in particular to a control method, a wearable device, and a storage medium.
Background
In the related art, a user can interact with a wearable device by voice.
Summary
This application provides a control method, a wearable device, and a storage medium.
An embodiment of this application provides a control method for a wearable device. The wearable device includes an acousto-electric element and a vibration sensor, and the control method includes:
acquiring sound information collected by the acousto-electric element and vibration information collected by the vibration sensor;
determining identity information of the issuer of a voice command according to the sound information and the vibration information, the voice command being determined from the sound information; and
controlling the wearable device to execute the voice command or ignore the voice command according to the identity information.
A wearable device of an embodiment of this application includes a housing, a processor, an acousto-electric element, and a vibration sensor. The acousto-electric element is arranged on the housing. The processor is connected to the acousto-electric element and the vibration sensor, and is configured to acquire sound information collected by the acousto-electric element and vibration information collected by the vibration sensor; to determine identity information of the issuer of a voice command according to the sound information and the vibration information, the voice command being determined from the sound information; and to control the wearable device to execute the voice command or ignore the voice command according to the identity information.
A non-volatile computer-readable storage medium containing computer-executable instructions which, when executed by one or more processors, cause the processors to execute the control method for a wearable device described above.
Brief Description of the Drawings
The above and/or additional aspects and advantages of this application will become apparent and easy to understand from the following description of embodiments in conjunction with the accompanying drawings, in which:
FIG. 1 is a perspective view of a wearable device according to an embodiment of this application;
FIG. 2 is a plan view of a wearable device according to another embodiment of this application;
FIG. 3 is a plan view of part of the structure of a wearable device according to an embodiment of this application;
FIG. 4 is a schematic diagram of an adjustment process of a wearable device according to an embodiment of this application;
FIG. 5 is another schematic diagram of an adjustment process of a wearable device according to an embodiment of this application;
FIG. 6 is a plan view of part of the structure of a wearable device according to another embodiment of this application;
FIG. 7 is a plan view of part of the structure of a wearable device according to yet another embodiment of this application;
FIG. 8 is a schematic flowchart of a control method of a wearable device according to an embodiment of this application;
FIG. 9 is a schematic scene diagram of a control method of a wearable device according to an embodiment of this application;
FIG. 10 is a schematic module diagram of a control apparatus of a wearable device according to an embodiment of this application;
FIG. 11 is a schematic flowchart of a control method of a wearable device according to still another embodiment of this application;
FIG. 12 is a schematic flowchart of a control method of a wearable device according to another embodiment of this application;
FIG. 13 is a schematic flowchart of a control method of a wearable device according to yet another embodiment of this application;
FIG. 14 is a schematic diagram of vibration information and sound information in a control method of a wearable device according to an embodiment of this application;
FIG. 15 is a schematic flowchart of a control method of a wearable device according to still another embodiment of this application;
FIG. 16 is a schematic flowchart of a control method of a wearable device according to another embodiment of this application;
FIG. 17 is a schematic flowchart of a control method of a wearable device according to yet another embodiment of this application;
FIG. 18 is a schematic flowchart of a control method of a wearable device according to still another embodiment of this application;
FIG. 19 is another schematic scene diagram of a control method of a wearable device according to an embodiment of this application;
FIG. 20 is another schematic module diagram of a wearable device according to an embodiment of this application;
Detailed Description
Embodiments of this application are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended to explain this application, and should not be construed as limiting this application.
Referring to FIG. 1 and FIG. 2, a wearable device 100 according to an embodiment of this application includes a housing 20, a support component 30, a display 40, a refraction component 50, and an adjustment mechanism 60.
The housing 20 is the external part of the wearable device 100 and serves to protect and fix the internal components of the wearable device 100. By enclosing the internal components, the housing 20 prevents external factors from damaging them directly.
Specifically, in this embodiment, the housing 20 can be used to house and fix at least one of the display 40, the refraction component 50, and the adjustment mechanism 60. In the example of FIG. 2, the housing 20 is formed with a receiving slot 22, and the display 40 and the refraction component 50 are received in the receiving slot 22. The adjustment mechanism 60 is partially exposed from the housing 20.
The housing 20 further includes a housing front wall 21, a housing top wall 24, a housing bottom wall 26, and housing side walls 28. The middle of the housing bottom wall 26 forms a notch 262 toward the housing top wall 24; in other words, the housing 20 is roughly B-shaped. When a user wears the wearable device 100, the wearable device 100 can rest on the bridge of the user's nose through the notch 262, which ensures both the stability of the wearable device 100 and wearing comfort. The adjustment mechanism 60 may be partially exposed from a housing side wall 28 so that the user can adjust the refraction component 50.
In addition, the housing 20 may be formed by machining an aluminum alloy on a computerized numerical control (CNC) machine tool, or injection-molded from polycarbonate (PC) or from PC and acrylonitrile-butadiene-styrene plastic (ABS). The specific manufacturing method and material of the housing 20 are not limited here.
The support component 30 is used to support the wearable device 100. When a user wears the wearable device 100, the wearable device 100 can be fixed on the user's head by the support component 30. In the example of FIG. 2, the support component 30 includes a first bracket 32, a second bracket 34, and an elastic band 36.
The first bracket 32 and the second bracket 34 are arranged symmetrically about the notch 262. Specifically, the first bracket 32 and the second bracket 34 are rotatably arranged at the edge of the housing 20; when the user does not need to use the wearable device 100, the first bracket 32 and the second bracket 34 can be folded flat against the housing 20 for storage. When the user needs to use the wearable device 100, the first bracket 32 and the second bracket 34 can be unfolded to perform their supporting function.
A first bent portion 322 is formed at the end of the first bracket 32 away from the housing 20, and the first bent portion 322 is bent toward the housing bottom wall 26. In this way, when the user wears the wearable device 100, the first bent portion 322 can rest on the user's ear, so that the wearable device 100 does not slip off easily.
Similarly, a second bent portion 342 is formed at the end of the second bracket 34 away from the housing 20, and the second bent portion 342 is bent toward the housing bottom wall 26. For the explanation of the second bent portion 342, refer to the first bent portion 322; to avoid redundancy, it is not repeated here.
The elastic band 36 detachably connects the first bracket 32 and the second bracket 34. Thus, when the user wears the wearable device 100 during strenuous activity, the elastic band 36 further secures the wearable device 100 and prevents it from loosening or even falling off. It can be understood that the elastic band 36 may also be omitted in other examples.
In this embodiment, the display 40 includes an OLED display screen. An OLED display screen needs no backlight, which helps make the wearable device 100 thin and light. Moreover, an OLED screen has a large viewing angle and low power consumption, which helps save power.
Of course, the display 40 may also be an LED display or a Micro LED display. These displays are merely examples, and embodiments of this application are not limited thereto.
Referring also to FIG. 3, the refraction component 50 is arranged on one side of the display 40. The refraction component 50 includes a refraction cavity 52, a light-transmitting liquid 54, a first film layer 56, a second film layer 58, and a side wall 59.
The light-transmitting liquid 54 is arranged in the refraction cavity 52. The adjustment mechanism 60 is used to adjust the amount of the light-transmitting liquid 54 so as to adjust the form of the refraction component 50. Specifically, the second film layer 58 is arranged opposite the first film layer 56; the side wall 59 connects the first film layer 56 and the second film layer 58; the first film layer 56, the second film layer 58, and the side wall 59 enclose the refraction cavity 52; and the adjustment mechanism 60 is used to adjust the amount of the light-transmitting liquid 54 to change the shape of the first film layer 56 and/or the second film layer 58.
In this way, the refractive function of the refraction component 50 is realized. Specifically, "changing the shape of the first film layer 56 and/or the second film layer 58" covers three cases: in the first case, the shape of the first film layer 56 is changed and the shape of the second film layer 58 is not; in the second case, the shape of the first film layer 56 is not changed and the shape of the second film layer 58 is; in the third case, the shapes of both the first film layer 56 and the second film layer 58 are changed. Note that, for convenience of explanation, this embodiment is described taking the first case as an example.
The first film layer 56 may be elastic. It can be understood that when the amount of the light-transmitting liquid 54 in the refraction cavity 52 changes, the pressure inside the refraction cavity 52 changes accordingly, so that the form of the refraction component 50 changes.
In one example, the adjustment mechanism 60 reduces the amount of the light-transmitting liquid 54 in the refraction cavity 52; the pressure inside the refraction cavity 52 decreases, the difference between the pressure outside and the pressure inside the refraction cavity 52 increases, and the refraction cavity 52 becomes more concave.
In another example, the adjustment mechanism 60 increases the amount of the light-transmitting liquid 54 in the refraction cavity 52; the pressure inside the refraction cavity 52 increases, the difference between the pressure outside and the pressure inside the refraction cavity 52 decreases, and the refraction cavity 52 bulges outward more.
In this way, the form of the refraction component 50 is adjusted by adjusting the amount of the light-transmitting liquid 54.
The adjustment mechanism 60 is connected to the refraction component 50. The adjustment mechanism 60 is used to adjust the form of the refraction component 50 so as to adjust the diopter of the refraction component 50. Specifically, the adjustment mechanism 60 includes a cavity body 62, a sliding member 64, a driving component 66, an adjustment cavity 68, and switches 61.
The sliding member 64 is slidably arranged in the cavity body 62; the driving component 66 is connected to the sliding member 64; the cavity body 62 and the sliding member 64 jointly define the adjustment cavity 68; the adjustment cavity 68 communicates with the refraction cavity 52 through the side wall 59; and the driving component 66 is used to drive the sliding member 64 to slide relative to the cavity body 62 so as to adjust the volume of the adjustment cavity 68 and thereby the amount of the light-transmitting liquid 54 in the refraction cavity 52.
In this way, the volume of the adjustment cavity 68 is adjusted by the sliding member 64 so as to adjust the amount of the light-transmitting liquid 54 in the refraction cavity 52. In one example, referring to FIG. 4, the sliding member 64 slides away from the side wall 59, the volume of the adjustment cavity 68 increases, the pressure inside the adjustment cavity 68 decreases, the light-transmitting liquid 54 in the refraction cavity 52 enters the adjustment cavity 68, and the first film layer 56 becomes more and more concave.
In another example, referring to FIG. 5, the sliding member 64 slides toward the side wall 59, the volume of the adjustment cavity 68 decreases, the pressure inside the adjustment cavity 68 increases, the light-transmitting liquid 54 in the adjustment cavity 68 enters the refraction cavity 52, and the first film layer 56 bulges outward more and more.
The side wall 59 is formed with a flow channel 591, and the flow channel 591 communicates the adjustment cavity 68 with the refraction cavity 52. The adjustment mechanism 60 includes switches 61 arranged in the flow channel 591, and the switches 61 are used to control the open and closed states of the flow channel 591.
In this embodiment, there are two switches 61, both of which are one-way switches: one switch 61 is used to control the flow of the light-transmitting liquid 54 from the adjustment cavity 68 to the refraction cavity 52, and the other switch 61 is used to control the flow of the light-transmitting liquid 54 from the refraction cavity 52 to the adjustment cavity 68.
In this way, the flow of the light-transmitting liquid 54 between the adjustment cavity 68 and the refraction cavity 52 is realized through the switches 61 so as to keep the pressure on both sides of the side wall 59 balanced. As mentioned above, a change in the volume of the adjustment cavity 68 causes a change in the pressure in the adjustment cavity 68, which drives the light-transmitting liquid 54 to flow between the adjustment cavity 68 and the refraction cavity 52. By controlling the open and closed states of the flow channel 591, the switches 61 control whether this flow can take place, and thereby control the adjustment of the form of the refraction component 50.
In one example, referring to FIG. 4, the switch 61 that controls the flow of the light-transmitting liquid 54 from the refraction cavity 52 to the adjustment cavity 68 is opened; the sliding member 64 slides away from the side wall 59, the volume of the adjustment cavity 68 increases, the pressure inside the adjustment cavity 68 decreases, the light-transmitting liquid 54 in the refraction cavity 52 enters the adjustment cavity 68 through the switch 61, and the first film layer 56 becomes more and more concave.
In another example, the switch 61 that controls the flow of the light-transmitting liquid 54 from the refraction cavity 52 to the adjustment cavity 68 is closed; even if the sliding member 64 slides away from the side wall 59 so that the volume of the adjustment cavity 68 increases and the pressure in it decreases, the light-transmitting liquid 54 in the refraction cavity 52 cannot enter the adjustment cavity 68, and the form of the first film layer 56 does not change.
In yet another example, referring to FIG. 5, the switch 61 that controls the flow of the light-transmitting liquid 54 from the adjustment cavity 68 to the refraction cavity 52 is opened; the sliding member 64 slides toward the side wall 59, the volume of the adjustment cavity 68 decreases, the pressure inside the adjustment cavity 68 increases, the light-transmitting liquid 54 in the adjustment cavity 68 enters the refraction cavity 52 through the switch 61, and the first film layer 56 bulges outward more and more.
In still another example, the switch 61 that controls the flow of the light-transmitting liquid 54 from the adjustment cavity 68 to the refraction cavity 52 is closed; even if the sliding member 64 slides toward the side wall 59 so that the volume of the adjustment cavity 68 decreases and the pressure in it increases, the light-transmitting liquid 54 in the adjustment cavity 68 cannot enter the refraction cavity 52, and the form of the first film layer 56 does not change.
The driving component 66 can realize its function of driving the sliding member 64 to slide based on a variety of structures and principles.
In the examples of FIG. 1, FIG. 2, FIG. 3, FIG. 4, and FIG. 5, the driving component 66 includes a knob 662 and a lead screw 664; the lead screw 664 connects the knob 662 and the sliding member 64; and the knob 662 is used to drive the lead screw 664 to rotate so as to drive the sliding member 64 to slide relative to the cavity body 62.
In this way, the sliding member 64 is driven by the knob 662 and the lead screw 664. Since the cooperation of the lead screw 664 with the knob 662 converts the rotary motion of the knob 662 into linear motion of the lead screw 664, when the user rotates the knob 662, the lead screw 664 drives the sliding member 64 to slide relative to the cavity body 62, which changes the volume of the adjustment cavity 68 and thereby adjusts the amount of the light-transmitting liquid 54 in the refraction cavity 52. The knob 662 may be exposed from the housing 20 for easy rotation by the user.
Specifically, a threaded portion is formed on the knob 662, a mating threaded portion is formed on the lead screw 664, and the knob 662 and the lead screw 664 are threadedly connected.
While the knob 662 rotates, the corresponding switch 61 can be opened accordingly, so that the light-transmitting liquid 54 can flow and the pressure on both sides of the side wall 59 remains balanced.
In one example, when the knob 662 rotates clockwise and the sliding member 64 slides away from the side wall 59, the switch 61 that controls the flow of the light-transmitting liquid 54 from the refraction cavity 52 to the adjustment cavity 68 is opened. In another example, when the knob 662 rotates counterclockwise and the sliding member 64 slides toward the side wall 59, the switch 61 that controls the flow of the light-transmitting liquid 54 from the adjustment cavity 68 to the refraction cavity 52 is opened.
Note that in this embodiment the rotation angle of the knob 662 is not associated with the diopter of the refraction component 50; the user simply rotates the knob 662 to the position that gives the best visual experience. Of course, in other embodiments the rotation angle of the knob 662 may be associated with the diopter of the refraction component 50. Whether the rotation angle of the knob 662 is associated with the diopter of the refraction component 50 is not limited here.
Referring to FIG. 6, the driving component 66 includes a gear 666 and a rack 668 meshing with the gear 666; the rack 668 connects the gear 666 and the sliding member 64; and the gear 666 is used to drive the rack 668 to move so as to drive the sliding member 64 to slide relative to the cavity body 62.
In this way, the sliding member 64 is driven by the gear 666 and the rack 668. Since the cooperation of the gear 666 with the rack 668 converts the rotary motion of the gear 666 into linear motion of the rack 668, when the user rotates the gear 666, the rack 668 drives the sliding member 64 to slide relative to the cavity body 62, which changes the volume of the adjustment cavity 68 and thereby adjusts the amount of the light-transmitting liquid 54 in the refraction cavity 52. The gear 666 may be exposed from the housing 20 for easy rotation by the user.
Similarly, while the gear 666 rotates, the corresponding switch 61 can be opened accordingly, so that the light-transmitting liquid 54 can flow and the pressure on both sides of the side wall 59 remains balanced.
In one example, the gear 666 rotates clockwise so that the rack 668 winds onto the gear 666; the effective length of the rack 668 shortens and pulls the sliding member 64 away from the side wall 59, and the switch 61 that controls the flow of the light-transmitting liquid 54 from the refraction cavity 52 to the adjustment cavity 68 is opened.
In another example, the gear 666 rotates counterclockwise so that the rack 668 meshed on the gear 666 disengages from the gear 666; the effective length of the rack 668 increases and pushes the sliding member 64 toward the side wall 59, and the switch 61 that controls the flow of the light-transmitting liquid 54 from the adjustment cavity 68 to the refraction cavity 52 is opened.
Similarly, in this embodiment the rotation angle of the gear 666 is not associated with the diopter of the refraction component 50; the user simply rotates the gear 666 to the position that gives the best visual experience. Of course, in other embodiments the rotation angle of the gear 666 may be associated with the diopter of the refraction component 50. Whether the rotation angle of the gear 666 is associated with the diopter of the refraction component 50 is not limited here.
Referring to FIG. 7, the driving component 66 includes a drive motor 669; a motor shaft 6691 of the drive motor 669 is connected to the sliding member 64; and the drive motor 669 is used to drive the sliding member 64 to slide relative to the cavity body 62.
In this way, the sliding member 64 is driven by the drive motor 669. Specifically, the drive motor 669 may be a linear motor. A linear motor has a simple structure and produces linear motion directly without an intermediate conversion mechanism, which reduces the motion inertia and improves the dynamic response and the positioning accuracy. Driving the sliding member 64 by the drive motor 669 makes the drive programmable. For example, through prior calibration, the drive motor 669 can be associated with the diopter; the user can directly input the diopter, and the drive motor 669 runs automatically to drive the sliding member 64 to the corresponding position.
Further, the driving component 66 may also include an input device 6692, which includes but is not limited to devices such as keys, a knob, or a touch screen. In the example of FIG. 7, the input device 6692 is a pair of keys respectively arranged on two opposite sides of the cavity body 62. The keys may be exposed from the housing 20 so that the user can press them easily. The keys can control the working duration of the drive motor 669 according to the number or duration of presses, thereby controlling the sliding distance of the sliding member 64.
Similarly, while the drive motor 669 is working, the corresponding switch 61 can be opened accordingly, so that the light-transmitting liquid 54 can flow and the pressure on both sides of the side wall 59 remains balanced.
In one example, when the user presses one of the two keys, the motor shaft 6691 extends and pushes the sliding member 64 toward the side wall 59, and the switch 61 that controls the flow of the light-transmitting liquid 54 from the adjustment cavity 68 to the refraction cavity 52 is opened.
In another example, when the user presses the other of the two keys, the motor shaft 6691 retracts and pulls the sliding member 64 away from the side wall 59, and the switch 61 that controls the flow of the light-transmitting liquid 54 from the refraction cavity 52 to the adjustment cavity 68 is opened.
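As a purely hypothetical illustration of the calibrated, programmable drive described above (the patent only says the drive motor "can be associated with the diopter through prior calibration" and does not specify any mapping), the diopter-to-motor-position association could be a small interpolated lookup table:

```python
import bisect

# Hypothetical diopter -> motor-position calibration for the drive motor 669,
# with linear interpolation between measured points; every value here is
# invented for illustration.
CALIBRATION = [(-4.0, 0), (-2.0, 120), (0.0, 240), (2.0, 360)]  # (diopter, steps)

def motor_position(diopter):
    """Map a requested diopter to a motor position via the calibration table."""
    diopters = [d for d, _ in CALIBRATION]
    if diopter <= diopters[0]:
        return CALIBRATION[0][1]
    if diopter >= diopters[-1]:
        return CALIBRATION[-1][1]
    i = bisect.bisect_right(diopters, diopter)
    (d0, p0), (d1, p1) = CALIBRATION[i - 1], CALIBRATION[i]
    return p0 + (p1 - p0) * (diopter - d0) / (d1 - d0)

print(motor_position(-1.0))  # -> 180.0 steps for -1.00 D with these values
```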
Note that the structure of the refraction component 50 is not limited to the refraction cavity 52, the light-transmitting liquid 54, the first film layer 56, the second film layer 58, and the side wall 59 described above, as long as the refraction component 50 can achieve a change of diopter. For example, in other embodiments the refraction component 50 includes multiple lenses and driving members, the driving members being used to move each lens from a storage position to a refraction position. In this way, the diopter of the refraction component 50 is changed by the combination of lenses. Of course, a driving member may also move each lens at the refraction position along the refraction optical axis, thereby changing the diopter of the refraction component 50.
Thus, the "form" of the refraction component described above includes both the shape and the state of the refraction component: the structure with the refraction cavity 52, the light-transmitting liquid 54, the first film layer 56, the second film layer 58, and the side wall 59 changes the diopter by changing the shape of the first film layer 56 and/or the second film layer 58, while the structure with multiple lenses and driving members changes the diopter by changing the state of the lenses.
In summary, an embodiment of this application provides a wearable device 100 that includes a display 40, a refraction component 50, and an adjustment mechanism 60. The refraction component 50 is arranged on one side of the display 40. The adjustment mechanism 60 is connected to the refraction component 50 and is used to adjust the form of the refraction component 50 so as to adjust the diopter of the refraction component 50.
In the wearable device 100 of this embodiment, the adjustment mechanism 60 adjusts the form of the refraction component 50 to adjust its diopter, so that users with refractive errors can clearly see the image displayed on the display 40, which helps improve the user experience.
Moreover, in the wearable device 100 of this embodiment, the refraction component 50 and the adjustment mechanism 60 can correct the diopter linearly, so that people with different diopters can all wear it flexibly. At the same time, the refraction component 50 and the adjustment mechanism 60 are small in volume and do not affect the wearing experience. Users do not need to buy many lenses, which can reduce cost.
In the related art, a user can interact with a wearable device by voice. However, in a complex sound environment, such interaction easily causes false triggering of the wearable device. In addition, since the related art generally identifies the voice operator by voiceprint technology, the operator needs to enroll a voiceprint in advance, and during use no other person whose voiceprint has been enrolled can be nearby; otherwise false triggering, or even failure to recognize, easily occurs. This makes operation cumbersome and is not conducive to improving the user experience.
Referring to FIG. 8 and FIG. 9, an embodiment of this application provides a control method for a wearable device 100. The wearable device 100 includes an acousto-electric element 110 and a vibration sensor 120.
The control method includes:
step S12: acquiring sound information collected by the acousto-electric element 110 and vibration information collected by the vibration sensor 120;
step S14: determining identity information of the issuer of a voice command according to the sound information and the vibration information, the voice command being determined from the sound information;
step S16: controlling the wearable device 100 to execute the voice command or ignore the voice command according to the identity information.
Referring to FIG. 10, an embodiment of this application provides a control apparatus 10 for the wearable device 100. The wearable device 100 includes an acousto-electric element 110 and a vibration sensor 120. The control apparatus 10 includes an acquisition module 12, a determination module 14, and a control module 16. The acquisition module 12 is used to acquire the sound information collected by the acousto-electric element 110 and the vibration information collected by the vibration sensor 120; the determination module 14 is used to determine the identity information of the issuer of a voice command according to the sound information and the vibration information, the voice command being determined from the sound information; and the control module 16 is used to control the wearable device 100 to execute or ignore the voice command according to the identity information.
An embodiment of this application provides a wearable device 100. The wearable device includes a housing 20, a processor 101, an acousto-electric element 110, and a vibration sensor 120. The acousto-electric element 110 is arranged on the housing 20. The processor 101 is connected to the acousto-electric element 110 and the vibration sensor 120, and is configured to acquire the sound information collected by the acousto-electric element 110 and the vibration information collected by the vibration sensor 120; to determine the identity information of the issuer of a voice command according to the sound information and the vibration information, the voice command being determined from the sound information; and to control the wearable device 100 to execute or ignore the voice command according to the identity information.
With the control method, the control apparatus 10, and the wearable device 100 of the embodiments of this application, the identity information of the issuer of a voice command is determined according to the sound information and the vibration information, and the wearable device 100 is controlled to execute or ignore the voice command accordingly; this can avoid false triggering of the wearable device 100 and makes the control of the wearable device 100 more accurate.
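As a minimal sketch of how steps S12, S14, and S16 chain together (the `device` object and its methods are hypothetical placeholders, and `determine_identity` stands for the timing-based determination detailed later in this description):

```python
# Hedged sketch of one control cycle: S12 (acquire) -> S14 (identify) ->
# S16 (execute or ignore). All names are illustrative placeholders.
def control_step(device, determine_identity):
    sound = device.collect_sound()          # step S12: acousto-electric element 110
    vibration = device.collect_vibration()  # step S12: vibration sensor 120
    identity = determine_identity(sound, vibration)  # step S14
    if identity == "wearer" and sound is not None:
        device.execute(sound.command)       # step S16: execute the voice command
    # any other identity: the voice command is ignored (step S16)
```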
Specifically, the wearable device 100 may be an electronic device such as electronic glasses, electronic clothing, an electronic bracelet, an electronic necklace, an electronic tattoo, a watch, an in-ear earphone, a pendant, or a headset. The wearable device 100 may also be a head-mounted device (head mount display, HMD) of an electronic device or a smart watch. The specific form of the wearable device 100 is not limited here.
Note that, for convenience of explanation, the control method of the wearable device 100 of the embodiments of this application is explained taking electronic glasses as the wearable device 100. This does not limit the specific form of the wearable device 100.
There are three acousto-electric elements 110. The housing 20 includes a housing front wall 21, and the three acousto-electric elements 110 are respectively arranged at a first preset position 211, a second preset position 212, and a third preset position 213 of the housing front wall 21.
The housing 20 includes housing side walls 28 and a housing top wall 24. There are two housing side walls 28, respectively arranged on two opposite sides of the housing top wall 24. The first preset position 211 is close to the housing top wall 24 and one of the housing side walls 28, and the second preset position 212 is close to the housing top wall 24 and the other housing side wall 28.
The housing 20 includes a housing top wall 24 and a housing bottom wall 26, respectively arranged on two opposite sides of the housing front wall 21. The middle of the housing bottom wall 26 forms a notch 262 toward the housing top wall 24, and the third preset position 213 is close to the notch 262.
In this way, the three acousto-electric elements 110 are distributed in a relatively dispersed manner, which not only gives the wearable device 100 an attractive appearance, but also improves the de-reverberation effect when the output information of the acousto-electric elements 110 is later de-reverberated to obtain the sound information.
In the example of FIG. 1, the acousto-electric element 110 is a microphone. There are three acousto-electric elements 110, respectively arranged at the first preset position 211, the second preset position 212, and the third preset position 213.
It can be understood that in other examples the number of acousto-electric elements 110 may be one, two, four, six, or another number, and the acousto-electric elements 110 may be arranged on the first bracket 32, the second bracket 34, or elsewhere on the wearable device 100. The specific number and positions of the acousto-electric elements 110 are not limited here.
The wearable device 100 includes a support component 30 connected to the housing 20. The support component 30 includes a first bracket 32 and a second bracket 34, and the vibration sensor 120 is arranged on the first bracket 32 and/or the second bracket 34.
Further, a first bent portion 322 is formed at the end of the first bracket 32 away from the housing 20, and a second bent portion 342 is formed at the end of the second bracket 34 away from the housing 20. The housing 20 includes a housing bottom wall 26; the first bent portion 322 and the second bent portion 342 are bent toward the housing bottom wall 26; and the vibration sensor 120 is arranged on the first bent portion 322 and/or the second bent portion 342.
In the example of FIG. 1, the vibration sensor 120 is a gyroscope. There is one vibration sensor 120, arranged on the first bent portion 322 of the first bracket 32 of the wearable device 100.
In another example, there are two vibration sensors 120, one of which is arranged on the first bent portion 322 and the other of which is arranged on the second bent portion 342.
Of course, in other examples there may be one vibration sensor 120 arranged on the second bent portion 342.
It can be understood that when the user speaks, the vibration of the vocal cords drives synchronous slight vibration of the facial muscles. Therefore, arranging the vibration sensor 120 at parts of the support component 30 that contact the user's head, such as the first bent portion 322 and the second bent portion 342, allows the vibration sensor 120 to collect more, and more accurate, vibration information, which makes the control of the wearable device 100 based on the vibration information more accurate.
In other examples, the number of vibration sensors 120 may be three, four, five, or another number, and the vibration sensors 120 may be arranged elsewhere on the wearable device 100. The specific number and positions of the vibration sensors 120 are not limited here.
Note that the "voice command" here may refer to information that can be recognized by the wearable device 100 and can control the wearable device 100. The "sound information" here may refer to information from which a voice command can be extracted. The "sound information" may include the start time of the sound information, the end time of the sound information, voiceprint information, and so on.
The "vibration information" here may include the start time of the vibration information, the end time of the vibration information, the frequency and amplitude of the vibration, and so on.
The "identity information" here may refer to the inherent identity of the issuer (for example, the identity uniquely determined by an ID number), or to the identity the issuer has owing to position, behavior, state, or other factors (for example, the owner of the wearable device 100, a non-owner of the wearable device 100, the wearer of the wearable device 100, a non-wearer of the wearable device 100).
The specific forms and contents of the sound information, the vibration information, and the identity information are not limited here.
In one example, the voice command is "change the power-on password to 123456". From the voiceprint information of the sound information it can be determined that the issuer of the voice command is the owner of the wearable device 100, and from the vibration information it can be determined that the voice command was issued by the wearer of the wearable device 100; that is, from the sound information and the vibration information, the identity information of the issuer of the voice command is determined as "owner" and "wearer". The wearable device 100 can then be controlled to change the power-on password to "123456". This prevents a user who owns several wearable devices 100 from mistakenly changing the power-on password of another, unworn wearable device 100 when intending to change the password of the device being worn.
In another example, the voice command is "change the power-on password to 123456". From the voiceprint information of the sound information it can be determined that the issuer of the voice command is not the owner of the wearable device 100, while from the vibration information it can be determined that the voice command was issued by the wearer of the wearable device 100; that is, the identity information of the issuer of the voice command is "non-owner" and "wearer". The wearable device 100 can then be controlled to ignore the voice command. This prevents a non-owner wearing the wearable device 100 from seizing the opportunity to tamper with the power-on password.
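The two password examples amount to a small permission rule; a hedged sketch (all names assumed) of combining the voiceprint check with the wearer check:

```python
# Hypothetical permission rule implied by the two examples: a sensitive
# command such as changing the power-on password runs only when the issuer
# is both the owner (per voiceprint) and the wearer (per vibration timing).
def may_change_password(is_owner: bool, is_wearer: bool) -> bool:
    return is_owner and is_wearer

assert may_change_password(True, True) is True    # owner + wearer  -> execute
assert may_change_password(False, True) is False  # wearer, not owner -> ignore
```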
Referring to FIG. 11, in some embodiments the identity information includes wearer and non-wearer, and step S16 includes:
step S162: controlling the wearable device 100 to execute the voice command when the identity information is the wearer;
step S164: controlling the wearable device 100 to ignore the voice command when the identity information is the non-wearer.
Correspondingly, the control module 16 is used to control the wearable device 100 to execute the voice command when the identity information is the wearer, and to control the wearable device 100 to ignore the voice command when the identity information is the non-wearer.
Correspondingly, the processor 101 is configured to control the wearable device 100 to execute the voice command when the identity information is the wearer, and to control the wearable device 100 to ignore the voice command when the identity information is the non-wearer.
In this way, the wearable device 100 is controlled to execute or ignore the voice command according to the identity information. It can be understood that when the wearable device 100 is in a rather noisy environment, if no distinction is made between the wearer and a non-wearer as the issuer of a voice command, the wearable device 100 is easily falsely triggered by other sounds in the environment. In this embodiment, the wearable device 100 is controlled to execute the voice command only when the issuer of the voice command is determined to be the wearer, which improves the adaptability of the wearable device 100 to the environment and allows the wearable device 100 to work normally even in an acoustically chaotic environment.
In one example, three users each wear one of three wearable devices 100 and control the device they wear by voice. User 1 wears wearable device No. 1 and issues the voice command "open document A"; user 2 wears wearable device No. 2 and issues the voice command "open document B"; user 3 wears wearable device No. 3 and issues the voice command "open document C".
For wearable device No. 1, it can be determined from the sound information and the vibration information that the issuer of the voice command "open document A" is the wearer, i.e., user 1, while the issuers of the voice commands "open document B" and "open document C" are non-wearers. Wearable device No. 1 therefore executes the voice command "open document A" and ignores the voice commands "open document B" and "open document C".
For wearable device No. 2, it can be determined from the sound information and the vibration information that the issuer of the voice command "open document B" is the wearer, i.e., user 2, while the issuers of the voice commands "open document A" and "open document C" are non-wearers. Wearable device No. 2 therefore executes the voice command "open document B" and ignores the voice commands "open document A" and "open document C".
For wearable device No. 3, it can be determined from the sound information and the vibration information that the issuer of the voice command "open document C" is the wearer, i.e., user 3, while the issuers of the voice commands "open document B" and "open document A" are non-wearers. Wearable device No. 3 therefore executes the voice command "open document C" and ignores the voice commands "open document B" and "open document A".
In this way, even if the environment is filled with the voice commands "open document A", "open document B", and "open document C", each wearable device 100 can still accurately execute the voice command of its own wearer.
Referring to FIG. 12, in some embodiments step S14 includes:
step S142: determining the time difference between the sound information and the vibration information;
step S144: determining the identity information according to the time difference.
Correspondingly, the determination module 14 is used to determine the time difference between the sound information and the vibration information, and to determine the identity information according to the time difference.
Correspondingly, the processor 101 is configured to determine the time difference between the sound information and the vibration information, and to determine the identity information according to the time difference.
In this way, the identity information of the issuer of the voice command is determined according to the sound information and the vibration information. It can be understood that the moment a sound is produced is the same as the moment the vocal cords start to vibrate, and both the propagation of sound and the propagation of vibration take time. Therefore, the identity information of the issuer of the voice command can be determined according to the time difference between the sound information and the vibration information.
Referring to FIG. 13 and FIG. 14, in some embodiments the identity information includes wearer and non-wearer, the time difference includes a start time difference T1, and step S142 includes:
step S1422: determining the start time difference T1 according to the start time t2 of the sound information and the start time t1 of the vibration information;
step S144 includes:
step S1442: determining that the identity information is the wearer when the start time difference T1 is less than or equal to a preset time threshold;
step S1444: determining that the identity information is the non-wearer when the start time difference T1 is greater than the time threshold.
Correspondingly, the determination module 14 is used to determine the start time difference T1 according to the start time t2 of the sound information and the start time t1 of the vibration information; to determine that the identity information is the wearer when the start time difference T1 is less than or equal to the preset time threshold; and to determine that the identity information is the non-wearer when the start time difference T1 is greater than the time threshold.
Correspondingly, the processor 101 is configured to determine the start time difference T1 according to the start time t2 of the sound information and the start time t1 of the vibration information; to determine that the identity information is the wearer when the start time difference T1 is less than or equal to the preset time threshold; and to determine that the identity information is the non-wearer when the start time difference T1 is greater than the time threshold.
In this way, the identity information is determined according to the start time difference T1. Specifically, the time threshold can be obtained in advance through experiments and stored in the wearable device 100.
It can be understood that the vibration information collected by the vibration sensor 120 originates from the synchronous slight vibration of the facial muscles caused by the vibration of the vocal cords. Therefore, the vibration information reflects information about the wearer of the wearable device 100, and the start time t1 of the vibration information can be inferred to be the moment the wearer starts to speak.
Sound, however, propagates through the air, so the sound information collected by the acousto-electric element 110 may reflect information about the wearer or about a non-wearer. Therefore, when the start time difference T1 between the start time t1 of the vibration information and the start time t2 of the sound information is less than or equal to the preset time threshold, it can be inferred that the vibration and the sound started at the same time, and thus that the voice command determined from the sound information was issued by the wearer. When the start time difference T1 is greater than the time threshold, it can be inferred that the vibration and the sound did not start at the same time and that the sound was emitted by a nearby sound source, and thus that the voice command determined from the sound information was issued by a non-wearer.
In one example, the time threshold is 2 s. The start time t1 of the vibration information is at time zero, the start time t2 of the sound information is one second later, and the start time difference T1 is 1 s, which is less than the time threshold; the identity information of the issuer of the voice command is determined to be the wearer.
In another example, the time threshold is 2 s. The start time t1 of the vibration information is at time zero, the start time t2 of the sound information is three seconds later, and the start time difference T1 is 3 s, which is greater than the time threshold; it can be determined that the sound was emitted by a nearby sound source, and the identity information of the issuer of the voice command is thus determined to be the non-wearer.
Referring to FIG. 15 and FIG. 14, in some embodiments the identity information includes wearer and non-wearer, the time difference includes an end time difference T2, and step S142 includes:
step S1424: determining the end time difference T2 according to the end time t3 of the sound information and the end time t4 of the vibration information;
step S144 includes:
step S1446: determining that the identity information is the wearer when the end time difference T2 is less than or equal to the preset time threshold;
step S1448: determining that the identity information is the non-wearer when the end time difference T2 is greater than the time threshold.
Correspondingly, the determination module 14 is used to determine the end time difference T2 according to the end time t3 of the sound information and the end time t4 of the vibration information; to determine that the identity information is the wearer when the end time difference T2 is less than or equal to the preset time threshold; and to determine that the identity information is the non-wearer when the end time difference T2 is greater than the time threshold.
Correspondingly, the processor 101 is configured to determine the end time difference T2 according to the end time t3 of the sound information and the end time t4 of the vibration information; to determine that the identity information is the wearer when the end time difference T2 is less than or equal to the preset time threshold; and to determine that the identity information is the non-wearer when the end time difference T2 is greater than the time threshold.
In this way, the identity information is determined according to the end time difference T2. For the principle and explanation of determining the identity information according to the end time difference T2, refer to the part above on determining the identity information according to the start time difference T1; to avoid redundancy, it is not repeated here.
In one example, the time threshold is 2 s. The end time t4 of the vibration information is at time zero, the end time t3 of the sound information is one second later, and the end time difference T2 is 1 s, which is less than the time threshold; the identity information of the issuer of the voice command is determined to be the wearer.
In another example, the time threshold is 2 s. The end time t4 of the vibration information is at time zero, the end time t3 of the sound information is three seconds later, and the end time difference T2 is 3 s, which is greater than the time threshold; the identity information of the issuer of the voice command is determined to be the non-wearer.
Referring to FIG. 16, in some embodiments the control method includes:
step S18: controlling the wearable device 100 to ignore the sound information when the sound information is collected within a preset duration and the vibration information is not collected.
Correspondingly, the control module 16 is used to control the wearable device 100 to ignore the sound information when the sound information is collected within the preset duration and the vibration information is not collected.
Correspondingly, the processor 101 is configured to control the wearable device 100 to ignore the sound information when the sound information is collected within the preset duration and the vibration information is not collected.
In this way, the wearable device 100 is controlled in the case where sound information is collected within the preset duration but no vibration information is collected. It can be understood that when a user wears the electronic glasses, besides the sound made by the user, other sounds in the environment, such as the sound of a TV, the sound of a broadcast, or the voices of non-wearers, may also cause the acousto-electric element 110 to collect sound information, while from the absence of vibration information it can be inferred that the user did not speak. Therefore, when sound information is collected within the preset duration and no vibration information is collected, the wearable device 100 can be controlled to ignore the sound information, preventing false triggering of the wearable device 100.
In one example, the preset duration is 10 s. The sound of a TV causes the acousto-electric element 110 to collect sound information, but no vibration information is collected within 10 s; it can then be inferred that the wearer did not issue a voice command, and the sound information can be ignored.
Referring to FIG. 17, in some embodiments the control method includes:
step S19: controlling the wearable device 100 to ignore the vibration information when the sound information is not collected within the preset duration and the vibration information is collected.
Correspondingly, the control module 16 is used to control the wearable device 100 to ignore the vibration information when the sound information is not collected within the preset duration and the vibration information is collected.
Correspondingly, the processor 101 is configured to control the wearable device 100 to ignore the vibration information when the sound information is not collected within the preset duration and the vibration information is collected.
In this way, the wearable device 100 is controlled in the case where no sound information is collected within the preset duration but vibration information is collected. It can be understood that when the user wears the electronic glasses, besides the vibration of the vocal cords, chewing, the pulsing of blood vessels, or an impact may also cause the vibration sensor 120 to collect vibration information. In these cases, the acousto-electric element 110 produces no output information, or, even if the output information of the acousto-electric element 110 is processed, no sound information from which a voice command could be extracted can be obtained. Therefore, when no sound information is collected within the preset duration and vibration information is collected, the wearable device 100 can be controlled to ignore the vibration information.
In one example, the preset duration is 10 s. The pulsing of the user's blood vessels causes the vibration sensor 120 to collect vibration information, but within 10 s the acousto-electric element 110 produces no output information and no sound information is collected; it can then be inferred that the wearer did not issue a voice command, and the vibration information can be ignored.
In another example, the preset duration is 10 s. The user's chewing causes the vibration sensor 120 to collect vibration information; within 10 s the acousto-electric element 110 produces output information, but no sound information from which a voice command could be extracted can be obtained from it, that is, no sound information is collected. It can then be inferred that the wearer did not issue a voice command, and the vibration information can be ignored.
Referring to FIG. 18, in some embodiments there are a plurality of acousto-electric elements 110, and the control method includes:
step S11: performing de-reverberation processing on the output information of the plurality of acousto-electric elements 110 to obtain the sound information.
Correspondingly, the acquisition module 12 is used to perform de-reverberation processing on the output information of the plurality of acousto-electric elements 110 to obtain the sound information.
Correspondingly, the processor 101 is configured to perform de-reverberation processing on the output information of the plurality of acousto-electric elements 110 to obtain the sound information.
In this way, the sound information is obtained from the output information of the acousto-electric elements 110. Specifically, the plurality of acousto-electric elements 110 form an array, and the output information can be de-reverberated by dedicated algorithms to obtain the sound information, for example, methods based on blind signal enhancement (blind signal enhancement approach), methods based on beamforming (beamforming based approach), or methods based on inverse filtering (an inverse filtering approach). In this way, a clean signal can be restored, and the recognition performance when extracting voice commands from the sound information is improved.
In addition, the plurality of acousto-electric elements 110 form an array that can realize sound source localization. When the issuer of a voice command is a non-wearer, the source and position of the voice command can be further determined. Specifically, the information collected by the array of acousto-electric elements 110 can be used to calculate the angle and distance of the sound source, so as to track the sound source and subsequently pick up speech directionally.
Referring to FIG. 19, the acousto-electric element 110 is a microphone, there are three microphones, and the position coordinates of the three microphones are denoted o1, o2, and o3, respectively. The issuer serves as the sound source 200, and the wearable device 100 receives voice commands from the sound source 200.
Because the three microphones are at different positions, the sound waves emitted by the sound source 200 reach each microphone at a different time. Suppose the sound waves emitted by the sound source 200 take t1, t2, and t3, respectively, to reach the microphones; then the distances from the sound source 200 to the microphones are vt1, vt2, and vt3, respectively, where v is the propagation speed of sound in air.
Then spherical surfaces can be drawn taking each of the three microphones as an origin and the distance from the sound source 200 to the corresponding microphone as a radius. That is, the first sphere is drawn with o1 as origin and vt1 as radius; the second sphere with o2 as origin and vt2 as radius; and the third sphere with o3 as origin and vt3 as radius.
Finally, the intersection point of the three spheres is calculated; the intersection of the three spheres is the position of the sound source 200. This method can be implemented by an algorithm.
Referring to FIG. 20, an embodiment of this application provides a wearable device 100. The wearable device 100 includes a processor 101 and a memory 102. The memory 102 stores one or more programs which, when executed by the processor 101, implement the control method of the wearable device 100 of any one of the above embodiments.
For example, the following are executed: step S12: acquiring sound information collected by the acousto-electric element 110 and vibration information collected by the vibration sensor 120; step S14: determining identity information of the issuer of a voice command according to the sound information and the vibration information, the voice command being determined from the sound information; step S16: controlling the wearable device 100 to execute or ignore the voice command according to the identity information.
An embodiment of this application also provides a computer-readable storage medium: a non-volatile computer-readable storage medium containing computer-executable instructions which, when executed by one or more processors 101, cause the processors 101 to execute the control method of any one of the above embodiments.
With the wearable device 100 and the computer-readable storage medium of the embodiments of this application, the identity information of the issuer of a voice command is determined according to the sound information and the vibration information, and the wearable device 100 is thereby controlled to execute or ignore the voice command; this can avoid false triggering of the wearable device 100 and makes the control of the wearable device 100 more accurate.
FIG. 20 is a schematic diagram of the internal modules of the wearable device 100 in one embodiment. The wearable device 100 includes a processor 101, a memory 102 (for example, a non-volatile storage medium), an internal memory 103, a display device 104, and an input device 105 connected through a system bus 109.
The processor 101 can be used to provide computing and control capabilities and to support the operation of the entire wearable device 100. The internal memory 103 of the wearable device 100 provides an environment for running the computer-readable instructions in the memory 102. The display device 104 of the wearable device 100 may be the display 40 provided on the wearable device 100; the input device 105 may be the acousto-electric element 110 and the vibration sensor 120 provided on the wearable device 100, a key, trackball, or touchpad provided on the wearable device 100, or an external keyboard, touchpad, or mouse. The wearable device 100 may be a smart bracelet, a smart watch, a smart helmet, electronic glasses, or the like.
A person skilled in the art can understand that the structure shown in the figure is only a schematic diagram of part of the structure related to the solution of this application and does not constitute a limitation on the wearable device 100 to which the solution of this application is applied; a specific wearable device 100 may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
A person of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware. The program may be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), or the like.
The above embodiments express only several implementations of this application, and their description is specific and detailed, but they should not therefore be construed as limiting the scope of this patent. It should be pointed out that a person of ordinary skill in the art can make several variations and improvements without departing from the concept of this application, and these all fall within the protection scope of this application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (20)

  1. A control method for a wearable device, wherein the wearable device comprises an acousto-electric element and a vibration sensor, and the control method comprises:
    acquiring sound information collected by the acousto-electric element and vibration information collected by the vibration sensor;
    determining identity information of an issuer of a voice command according to the sound information and the vibration information, the voice command being determined from the sound information; and
    controlling the wearable device to execute the voice command or ignore the voice command according to the identity information.
  2. The control method for a wearable device according to claim 1, wherein determining the identity information of the issuer of the voice command according to the sound information and the vibration information comprises:
    determining a time difference between the sound information and the vibration information; and
    determining the identity information according to the time difference.
  3. The control method for a wearable device according to claim 2, wherein the identity information includes wearer and non-wearer, the time difference includes a start time difference, and determining the time difference between the sound information and the vibration information comprises:
    determining the start time difference according to a start time of the sound information and a start time of the vibration information;
    and determining the identity information according to the time difference comprises:
    determining that the identity information is the wearer when the start time difference is less than or equal to a preset time threshold; and
    determining that the identity information is the non-wearer when the start time difference is greater than the time threshold.
  4. The control method for a wearable device according to claim 2, wherein the identity information includes wearer and non-wearer, the time difference includes an end time difference, and determining the time difference between the sound information and the vibration information comprises:
    determining the end time difference according to an end time of the sound information and an end time of the vibration information;
    and determining the identity information according to the time difference comprises:
    determining that the identity information is the wearer when the end time difference is less than or equal to a preset time threshold; and
    determining that the identity information is the non-wearer when the end time difference is greater than the time threshold.
  5. The control method for a wearable device according to claim 1, wherein the identity information includes wearer and non-wearer, and controlling the wearable device to execute the voice command or ignore the voice command according to the identity information comprises:
    controlling the wearable device to execute the voice command when the identity information is the wearer; and
    controlling the wearable device to ignore the voice command when the identity information is the non-wearer.
  6. The control method for a wearable device according to claim 1, wherein the control method comprises:
    controlling the wearable device to ignore the sound information when the sound information is collected within a preset duration and the vibration information is not collected.
  7. The control method for a wearable device according to claim 1, wherein the control method comprises:
    controlling the wearable device to ignore the vibration information when the sound information is not collected within a preset duration and the vibration information is collected.
  8. The control method for a wearable device according to claim 1, wherein there are a plurality of the acousto-electric elements, and the control method comprises:
    performing de-reverberation processing on output information of the plurality of acousto-electric elements to obtain the sound information.
  9. A wearable device, wherein the wearable device comprises a housing, a processor, an acousto-electric element, and a vibration sensor; the acousto-electric element is arranged on the housing; the processor is connected to the acousto-electric element and the vibration sensor; and the processor is configured to acquire sound information collected by the acousto-electric element and vibration information collected by the vibration sensor; to determine identity information of an issuer of a voice command according to the sound information and the vibration information, the voice command being determined from the sound information; and to control the wearable device to execute the voice command or ignore the voice command according to the identity information.
  10. The wearable device according to claim 9, wherein the wearable device comprises a support component connected to the housing, the support component comprises a first bracket and a second bracket, and the vibration sensor is arranged on the first bracket and/or the second bracket.
  11. The wearable device according to claim 10, wherein a first bent portion is formed at an end of the first bracket away from the housing, a second bent portion is formed at an end of the second bracket away from the housing, the housing comprises a housing bottom wall, the first bent portion and the second bent portion are bent toward the housing bottom wall, and the vibration sensor is arranged on the first bent portion and/or the second bent portion.
  12. The wearable device according to claim 11, wherein there are two vibration sensors, one of which is arranged on the first bent portion and the other of which is arranged on the second bent portion.
  13. The wearable device according to claim 9, wherein there are three acousto-electric elements, the housing comprises a housing front wall, and the three acousto-electric elements are respectively arranged at a first preset position, a second preset position, and a third preset position of the housing front wall.
  14. The wearable device according to claim 13, wherein the housing comprises housing side walls and a housing top wall; there are two housing side walls, respectively arranged on two opposite sides of the housing top wall; the first preset position is close to the housing top wall and one of the housing side walls; and the second preset position is close to the housing top wall and the other of the housing side walls.
  15. The wearable device according to claim 13, wherein the housing comprises a housing top wall and a housing bottom wall, respectively arranged on two opposite sides of the housing front wall; a middle portion of the housing bottom wall forms a notch toward the housing top wall; and the third preset position is close to the notch.
  16. The wearable device according to claim 9, wherein the processor is configured to determine a time difference between the sound information and the vibration information, and to determine the identity information according to the time difference.
  17. The wearable device according to claim 16, wherein the identity information includes wearer and non-wearer, the time difference includes a start time difference, and the processor is configured to determine the start time difference according to a start time of the sound information and a start time of the vibration information; to determine that the identity information is the wearer when the start time difference is less than or equal to a preset time threshold; and to determine that the identity information is the non-wearer when the start time difference is greater than the time threshold.
  18. The wearable device according to claim 16, wherein the identity information includes wearer and non-wearer, the time difference includes an end time difference, and the processor is configured to determine the end time difference according to an end time of the sound information and an end time of the vibration information; to determine that the identity information is the wearer when the end time difference is less than or equal to a preset time threshold; and to determine that the identity information is the non-wearer when the end time difference is greater than the time threshold.
  19. The wearable device according to claim 9, wherein the identity information includes wearer and non-wearer, and the processor is configured to control the wearable device to execute the voice command when the identity information is the wearer, and to control the wearable device to ignore the voice command when the identity information is the non-wearer.
  20. A non-volatile computer-readable storage medium containing computer-executable instructions which, when executed by one or more processors, cause the processors to execute the control method for a wearable device according to any one of claims 1 to 8.
PCT/CN2020/090980 2019-06-10 2020-05-19 Control method, wearable device and storage medium WO2020248778A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
KR1020217039094A KR20220002605A (ko) 2019-06-10 2020-05-19 제어 방법, 웨어러블 기기 및 저장 매체
JP2021571636A JP7413411B2 (ja) 2019-06-10 2020-05-19 制御方法、ウェアラブルデバイス及び記憶媒体
EP20822293.5A EP3968320A4 (en) 2019-06-10 2020-05-19 ORDERING METHOD, WEARABLE DEVICE AND STORAGE MEDIA
US17/528,889 US20220076684A1 (en) 2019-06-10 2021-11-17 Method for Controlling Wearable Device, Wearable Device, and Storage Medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910496570.7A CN112071311A (zh) 2019-06-10 2019-06-10 控制方法、控制装置、穿戴设备和存储介质
CN201910496570.7 2019-06-10

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/528,889 Continuation US20220076684A1 (en) 2019-06-10 2021-11-17 Method for Controlling Wearable Device, Wearable Device, and Storage Medium

Publications (1)

Publication Number Publication Date
WO2020248778A1 true WO2020248778A1 (zh) 2020-12-17

Family

ID=73658196

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/090980 WO2020248778A1 (zh) 2019-06-10 2020-05-19 控制方法、穿戴设备和存储介质

Country Status (6)

Country Link
US (1) US20220076684A1 (zh)
EP (1) EP3968320A4 (zh)
JP (1) JP7413411B2 (zh)
KR (1) KR20220002605A (zh)
CN (1) CN112071311A (zh)
WO (1) WO2020248778A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022225912A1 (en) * 2021-04-21 2022-10-27 Hourglass Medical Llc Methods for voice blanking muscle movement controlled systems
US11553313B2 (en) 2020-07-02 2023-01-10 Hourglass Medical Llc Clench activated switch system
US11698678B2 (en) 2021-02-12 2023-07-11 Hourglass Medical Llc Clench-control accessory for head-worn devices

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112951236A (zh) * 2021-02-07 2021-06-11 北京有竹居网络技术有限公司 一种语音翻译设备及方法

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140270259A1 (en) * 2013-03-13 2014-09-18 Aliphcom Speech detection using low power microelectrical mechanical systems sensor
CN104657650A * 2015-01-06 2015-05-27 三星电子(中国)研发中心 Method and apparatus for data input or identity verification
CN104850222A * 2015-04-15 2015-08-19 郑德豪 Instruction recognition method and electronic terminal
CN106468780A * 2015-08-20 2017-03-01 联发科技股份有限公司 Portable device and related vibration detection method
CN108735219A * 2018-05-09 2018-11-02 深圳市宇恒互动科技开发有限公司 Voice recognition control method and apparatus
CN109064720A * 2018-06-27 2018-12-21 Oppo广东移动通信有限公司 Position prompting method and apparatus, storage medium, and electronic device

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002091466A (ja) 2000-09-12 2002-03-27 Pioneer Electronic Corp 音声認識装置
JP2002358089A (ja) 2001-06-01 2002-12-13 Denso Corp 音声処理装置及び音声処理方法
JP2005084253A (ja) 2003-09-05 2005-03-31 Matsushita Electric Ind Co Ltd 音響処理装置、方法、プログラム及び記憶媒体
JP5943344B2 (ja) 2012-07-12 2016-07-05 国立大学法人佐賀大学 健康管理システム、その方法及びプログラム並びに眼鏡型生体情報取得装置
WO2014163797A1 (en) 2013-03-13 2014-10-09 Kopin Corporation Noise cancelling microphone apparatus
US20140378083A1 (en) * 2013-06-25 2014-12-25 Plantronics, Inc. Device Sensor Mode to Identify a User State
US9564128B2 (en) * 2013-12-09 2017-02-07 Qualcomm Incorporated Controlling a speech recognition process of a computing device
JP2016127300A (ja) 2014-12-26 2016-07-11 アイシン精機株式会社 音声処理装置
US10896591B2 (en) 2015-07-31 2021-01-19 Motorola Mobility Llc Eyewear with proximity sensors to detect outside line of sight presence and corresponding methods
EP3196643A1 (en) 2016-01-22 2017-07-26 Essilor International A head mounted device comprising an environment sensing module
US20170303052A1 (en) * 2016-04-18 2017-10-19 Olive Devices LLC Wearable auditory feedback device
WO2018061743A1 (ja) 2016-09-28 2018-04-05 コニカミノルタ株式会社 ウェアラブル端末
US10678502B2 (en) * 2016-10-20 2020-06-09 Qualcomm Incorporated Systems and methods for in-ear control of remote devices
CN108877813A (zh) * 2017-05-12 2018-11-23 阿里巴巴集团控股有限公司 人机识别的方法、装置和系统
GB201801530D0 (en) * 2017-07-07 2018-03-14 Cirrus Logic Int Semiconductor Ltd Methods, apparatus and systems for authentication
GB201801526D0 (en) * 2017-07-07 2018-03-14 Cirrus Logic Int Semiconductor Ltd Methods, apparatus and systems for authentication
CN108629167B (zh) * 2018-05-09 2020-10-27 西安交通大学 一种结合可穿戴设备的多智能设备身份认证方法
CN109119080A (zh) * 2018-08-30 2019-01-01 Oppo广东移动通信有限公司 声音识别方法、装置、穿戴式设备及存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140270259A1 (en) * 2013-03-13 2014-09-18 Aliphcom Speech detection using low power microelectrical mechanical systems sensor
CN104657650A * 2015-01-06 2015-05-27 三星电子(中国)研发中心 Method and apparatus for data input or identity verification
CN104850222A * 2015-04-15 2015-08-19 郑德豪 Instruction recognition method and electronic terminal
CN106468780A * 2015-08-20 2017-03-01 联发科技股份有限公司 Portable device and related vibration detection method
CN108735219A * 2018-05-09 2018-11-02 深圳市宇恒互动科技开发有限公司 Voice recognition control method and apparatus
CN109064720A * 2018-06-27 2018-12-21 Oppo广东移动通信有限公司 Position prompting method and apparatus, storage medium, and electronic device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3968320A4

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11553313B2 (en) 2020-07-02 2023-01-10 Hourglass Medical Llc Clench activated switch system
US11778428B2 (en) 2020-07-02 2023-10-03 Hourglass Medical Llc Clench activated switch system
US11698678B2 (en) 2021-02-12 2023-07-11 Hourglass Medical Llc Clench-control accessory for head-worn devices
WO2022225912A1 (en) * 2021-04-21 2022-10-27 Hourglass Medical Llc Methods for voice blanking muscle movement controlled systems
US11662804B2 (en) 2021-04-21 2023-05-30 Hourglass Medical Llc Voice blanking muscle movement controlled systems

Also Published As

Publication number Publication date
CN112071311A (zh) 2020-12-11
EP3968320A1 (en) 2022-03-16
EP3968320A4 (en) 2022-06-15
JP7413411B2 (ja) 2024-01-15
KR20220002605A (ko) 2022-01-06
US20220076684A1 (en) 2022-03-10
JP2022535250A (ja) 2022-08-05

Similar Documents

Publication Publication Date Title
WO2020248778A1 (zh) Control method, wearable device, and storage medium
US10733992B2 (en) Communication device, communication robot and computer-readable storage medium
US10261579B2 (en) Head-mounted display apparatus
CN109121038A (zh) Wearable device for suppressing sound leakage, sound leakage suppression method, and storage medium
US9081416B2 (en) Device, head mounted display, control method of device and control method of head mounted display
EP2929424B1 (en) Multi-touch interactions on eyewear
US11922590B2 (en) Devices, methods, and graphical user interfaces for interacting with three-dimensional environments
WO2020238462A1 (zh) Control method, wearable device, and storage medium
US20130304479A1 (en) Sustained Eye Gaze for Determining Intent to Interact
US20160349509A1 (en) Mixed-reality headset
CN109259724B (zh) Eye-use monitoring method and apparatus, storage medium, and wearable device
JP5953714B2 (ja) Device, head-mounted display device, method of controlling device, and method of controlling head-mounted display device
BR112015032026B1 (pt) Sistema de reconhecimento de evento adaptativo, método para reconhecer um evento alvo e meio legível por computador
WO2021036591A1 (zh) Control method, control apparatus, electronic apparatus, and storage medium
WO2013081632A1 (en) Techniques for notebook hinge sensors
EP3025185B1 (en) Head mounted display and method of controlling therefor
CN109068126B (zh) Video playback method and apparatus, storage medium, and wearable device
WO2021047331A1 (zh) Control method, electronic apparatus, and storage medium
US20220174764A1 (en) Interactive method, head-mounted device, interactive system and storage medium
WO2017056520A1 (ja) Robot device
JP2019130610A (ja) Communication robot and control program therefor
CN207611201U (zh) VR head-mounted device housing
CN212694166U (zh) Head-mounted display device
CN112083796A (zh) Control method, head-mounted device, mobile terminal, and control system
CN111948807B (zh) Control method, control apparatus, wearable device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20822293

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20217039094

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2021571636

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020822293

Country of ref document: EP

Effective date: 20211208