WO2023001195A1 - Smart glasses and control method and system therefor - Google Patents

Smart glasses and control method and system therefor

Info

Publication number
WO2023001195A1
WO2023001195A1 (PCT/CN2022/106802)
Authority
WO
WIPO (PCT)
Prior art keywords
smart glasses
wireless communication
communication module
data
voice data
Prior art date
Application number
PCT/CN2022/106802
Other languages
English (en)
French (fr)
Inventor
罗国华
苏超明
张惠权
Original Assignee
所乐思(深圳)科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 所乐思(深圳)科技有限公司
Publication of WO2023001195A1
Priority to US18/418,377 (published as US20240163603A1)


Classifications

    • H04R 1/326: Arrangements for obtaining a desired directional characteristic only, for microphones
    • H04R 1/1041: Earpieces or earphones; mechanical or electronic switches, or control elements
    • G02B 27/01: Head-up displays
    • G02C 11/06: Non-optical adjuncts for spectacles; hearing aids
    • G10L 15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 21/0208: Speech enhancement; noise filtering
    • H04M 1/21: Telephone sets combined with auxiliary equipment, e.g. clocks or memoranda pads
    • H04R 3/005: Circuits for combining the signals of two or more microphones
    • H04R 1/406: Desired directional characteristic obtained by combining a number of identical microphones
    • H04R 2201/023: Transducers incorporated in garments, rucksacks or the like
    • H04R 2201/107: Monophonic and stereophonic headphones with microphone for two-way hands-free communication
    • H04R 2420/07: Applications of wireless loudspeakers or wireless microphones
    • H04R 25/407: Hearing aids; circuits for combining signals of a plurality of transducers
    • H04R 5/0335: Stereophonic headphones; earpiece support, e.g. headbands or neckrests

Definitions

  • The embodiments of the present application relate to the technical fields of electronic devices and communications, and in particular to smart glasses and a control method and system therefor.
  • Existing smart glasses usually place all electronic components, such as microcontrollers and sensors (e.g. pedometer, heart-rate sensor, accelerometer, gyroscope, GPS (Global Positioning System) receiver), on the main body of the glasses, and all intelligent functions are computed by the built-in microcontroller, which receives the sensor data.
  • Because the main body of the glasses carries so many electronic components, such smart glasses are usually heavy, cannot be worn all day, and consume a lot of power.
  • In addition, since most smart glasses focus only on intelligent functions based on device control, they lack humanized control and cannot provide a hearing aid function.
  • Moreover, the electronic components are very close to each other and interfere with one another; in particular, a loop can form between the speaker and the sound pickup device, which easily leads to echo or howling.
  • The embodiments of the present application provide smart glasses and a control method and system therefor, which realize a phone call function and a hearing aid function on the same hardware platform of the smart glasses, and can reduce the weight, power consumption, and manufacturing cost of the smart glasses.
  • In one aspect, an embodiment of the present application provides smart glasses, including: a frame, a plurality of temples, at least one sound pickup device, and a wireless communication module;
  • The temples are connected to the frame, the at least one sound pickup device is arranged on at least one of the plurality of temples, and the wireless communication module is arranged on any one of the plurality of temples.
  • The wireless communication module is used to control and switch the working mode of the smart glasses, the working mode including a call mode and a hearing aid mode;
  • The wireless communication module is further configured to perform first beamforming processing on the voice data acquired by the at least one sound pickup device in the call mode, so that the sound beam of the at least one sound pickup device points downward; and
  • the wireless communication module is further configured to perform a second beamforming process on the voice data in the hearing aid mode, so that the sound beam of the at least one sound pickup device points forward.
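The patent does not disclose a concrete beamforming algorithm, only that the beam is steered downward in the call mode and forward in the hearing aid mode. As a rough illustration, a delay-and-sum beamformer steers an array by delaying each microphone's signal so that a wave from the chosen direction adds coherently; everything below (microphone coordinates, sample rate, directions) is an assumed example, not taken from the patent:

```python
SPEED_OF_SOUND = 343.0   # m/s
SAMPLE_RATE = 48000      # Hz; assumed, not specified in the patent

def steering_delays(mic_positions, direction):
    """Per-microphone delays (in samples) that align a plane wave
    arriving from the unit vector `direction` across the array."""
    # Microphones closer to the source (larger projection onto the
    # arrival direction) hear the wave earlier and must be delayed more.
    proj = [sum(p * d for p, d in zip(pos, direction)) for pos in mic_positions]
    return [round((pr - min(proj)) / SPEED_OF_SOUND * SAMPLE_RATE) for pr in proj]

def delay_and_sum(channels, delays):
    """Shift each channel by its steering delay and average the result."""
    start = max(delays)
    n = min(len(ch) for ch in channels) - start
    return [sum(ch[start + i - d] for ch, d in zip(channels, delays)) / len(channels)
            for i in range(n)]

# Assumed microphone layout on one temple (metres; x forward, z up).
mics = [(0.000, 0.0, 0.00), (0.010, 0.0, 0.00), (0.005, 0.0, -0.01)]
down = (0.0, 0.0, -1.0)     # call mode: beam toward the wearer's mouth
forward = (1.0, 0.0, 0.0)   # hearing aid mode: beam toward the interlocutor
```

Switching between the first and second beamforming processing then amounts to recomputing the steering delays for the new target direction.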
  • an embodiment of the present application provides a smart glasses control system, including: a smart mobile terminal and the smart glasses provided in the above embodiments;
  • the smart mobile terminal is used for data interaction with the smart glasses.
  • an embodiment of the present application provides a method for controlling smart glasses.
  • the smart glasses include: a wireless communication module and a sound pickup device electrically connected to the wireless communication module.
  • the method includes:
  • in the call mode, the first beamforming processing is performed on the voice data through the wireless communication module, so that the sound beam of the sound pickup device points downward;
  • in the hearing aid mode, the second beamforming processing is performed on the voice data through the wireless communication module, so that the sound beam of the sound pickup device points forward.
  • In the above solution, on the one hand, the wireless communication module performs the first beamforming processing on the voice data acquired by the sound pickup device in the call mode, so that the sound beam of the sound pickup device points downward; on the other hand, the wireless communication module performs the second beamforming processing on the voice data in the hearing aid mode, so that the sound beam of the sound pickup device points forward. The phone call function and the hearing aid function are thereby realized on the same hardware platform of the smart glasses, expanding the functions of the smart glasses.
  • In addition, the weight, power consumption, and manufacturing cost of the smart glasses can be reduced.
  • FIG. 1 is a schematic structural diagram of smart glasses provided by an embodiment of the present application;
  • FIG. 2 is a schematic diagram of the external structure of smart glasses provided by another embodiment of the present application;
  • FIG. 3 is a schematic diagram of the internal structure of the smart glasses in the embodiment shown in FIG. 2;
  • FIG. 4 is a schematic structural diagram of the microphone array in the smart glasses in the embodiment shown in FIG. 2;
  • FIG. 5 is a schematic structural diagram of a smart glasses control system provided by an embodiment of the present application;
  • FIG. 6 is a schematic structural diagram of a smart glasses control system provided by another embodiment of the present application;
  • FIG. 7 is a schematic diagram of the hardware structure of the smart mobile terminal in the smart glasses control systems shown in FIG. 5 and FIG. 6;
  • FIG. 8 is a schematic diagram of the implementation flow of the smart glasses control method provided by an embodiment of the present application;
  • FIG. 9 is a schematic diagram of the running states and working modes of the smart glasses in the smart glasses control method provided by an embodiment of the present application;
  • FIG. 10 is a schematic diagram of the sound beam directions of the microphone array in different working modes in the smart glasses control method provided by an embodiment of the present application;
  • FIG. 11 is a schematic diagram of the sound beam direction of the microphone array in the call mode in the smart glasses control method provided by an embodiment of the present application;
  • FIG. 12 is a schematic diagram of the sound beam direction of the microphone array in the hearing aid mode in the smart glasses control method provided by an embodiment of the present application;
  • FIG. 13 is a schematic diagram of the voice signal processing flow in the call mode in the smart glasses control method provided by an embodiment of the present application;
  • FIG. 14 is a schematic diagram of the processing of the voice signal and the call voice signal in the hearing aid mode in the smart glasses control method provided by an embodiment of the present application;
  • FIG. 15 is a schematic diagram of the volume adjustment control of smart glasses in a smart glasses control method provided by other embodiments of the present application.
  • FIG. 1 is a schematic structural diagram of smart glasses provided by an embodiment of the present application. For ease of description, only parts related to the embodiment of the present application are shown in the figure.
  • The smart glasses include: a frame 101, a plurality of temples 102, at least one sound pickup device 103 (only one is shown in the figure for ease of understanding), and a wireless communication module 104.
  • The temples 102 are connected to the frame 101, the at least one sound pickup device 103 is arranged on at least one of the temples 102, and the wireless communication module 104 is arranged on any one of the temples 102.
  • The wireless communication module 104 is electrically connected to the at least one sound pickup device 103.
  • the wireless communication module 104 is used to control switching of the working modes of the smart glasses, and the working modes include a call mode and a hearing aid mode (or a dialogue mode).
  • the wireless communication module 104 is further configured to perform a first beamforming process on the voice data acquired by the at least one sound pickup device 103 in the call mode, so that the sound beam of the at least one sound pickup device 103 points downward.
  • the sound source of the voice data may be the wearer of the smart glasses.
  • The wireless communication module 104 is further configured to perform second beamforming processing on the voice data acquired by the at least one sound pickup device 103 in the hearing aid mode, so that the sound beam of the at least one sound pickup device 103 points forward.
  • the sound source of the voice data may be the conversation partner of the wearer of the smart glasses.
  • In the above embodiment, the wireless communication module controls and switches the working mode of the smart glasses.
  • In the call mode, the first beamforming processing is performed on the voice data acquired by the sound pickup device, so that the sound beam of the sound pickup device points downward to pick up sound.
  • In the hearing aid mode, the second beamforming processing is performed on the voice data, so that the sound beam of the sound pickup device points forward. The phone call function and the hearing aid function are thereby realized on the same hardware platform of the smart glasses, expanding the functions of the smart glasses.
  • In addition, the weight, power consumption, and manufacturing cost of the smart glasses can be reduced.
  • FIG. 2 is a schematic diagram of the external structure of smart glasses provided by another embodiment of the present application
  • FIG. 3 is a schematic diagram of the internal structure of the smart glasses in the embodiment shown in FIG. 2 .
  • the difference from the embodiment shown in Figure 1 is that in this embodiment:
  • The wireless communication module 104 is also configured to receive a sound collection direction control instruction sent by the smart mobile terminal.
  • The wireless communication module 104 is further configured to perform the above-mentioned second beamforming processing on the voice data in the hearing aid mode when the direction indicated by the sound collection direction control instruction is forward, so that the sound beam of the at least one sound pickup device 103 points forward.
  • The wireless communication module 104 is also configured to not perform the second beamforming processing on the voice data in the hearing aid mode when the direction indicated by the sound collection direction control instruction is omnidirectional, so that the at least one sound pickup device 103 picks up sound from all directions, that is, over the full 360°.
  • The plurality of temples 102 includes a first temple 102A and a second temple 102B; the front ends of the first temple 102A and the second temple 102B are respectively connected to the two sides of the frame 101, and the at least one sound pickup device 103 is mounted on the front end of the first temple 102A.
  • the smart glasses further include: a first speaker 201 and a second speaker 202 electrically connected to the wireless communication module 104 .
  • the first speaker 201 and the second speaker 202 are used to output voice data or music data.
  • The voice data includes: the voice data acquired by the sound pickup device 103 and processed by the wireless communication module 104, and the call voice data sent by the smart mobile terminal and received by the wireless communication module 104.
  • The music data includes: the music data sent by the smart mobile terminal and received by the wireless communication module 104.
  • The first speaker 201 is installed on the first temple 102A, and the output port of the first speaker 201 is located at the rear end of the first temple 102A.
  • The second speaker 202 is installed on the second temple 102B, and the output port of the second speaker 202 is located at the rear end of the second temple 102B.
  • Because the sound pickup device is installed at the front end of the temple and the output port of each speaker is located at the rear end of the temple, there is sufficient distance between the sound pickup device and the speaker output ports, which effectively reduces the chance of a feedback loop forming between the sound pickup device and the speakers, and thus the chance of echo and howling during use.
  • In addition, the rear end of the temple is closest to the user's ear, so placing the speaker output port at the rear end of the temple brings it closest to the ear and improves the efficiency of sound output.
  • the first speaker 201 and the second speaker 202 are preferably monaural speakers.
  • the mono speakers installed on the two temples are combined to achieve stereo sound.
  • the wireless communication module 104 includes: a controller 1041 , a voice data processor 1042 and a wireless signal transceiver 1043 .
  • the controller 1041, the voice data processor 1042 and the wireless signal transceiver 1043 may be connected through a bus.
  • controller 1041 is used to control switching of the working modes of the smart glasses.
  • The controller 1041 is preferably an MCU (Microcontroller Unit).
  • The voice data processor 1042 is configured to process voice data.
  • The voice data processor 1042 is preferably a DSP (Digital Signal Processor) or a voice data processing integrated circuit.
  • The voice data processing integrated circuit is a commonly used circuit, and this application does not specifically limit its structure.
  • the wireless signal transceiver 1043 is used for data interaction with the smart mobile terminal.
  • The wireless communication module 104 uses at least one of the Bluetooth protocol, the WiFi (Wireless Fidelity) protocol, the NFC (Near Field Communication) protocol, the ZigBee protocol, the DLNA (Digital Living Network Alliance) protocol, UWB (Ultra-Wideband, carrierless communication), the RFID (Radio Frequency Identification) protocol, and a cellular mobile communication protocol as the communication protocol for data exchange with the smart mobile terminal.
  • the voice data processor 1042 includes a voice equalizer (Equalizer).
  • the wireless signal transceiver 1043 is also used to receive the volume adjustment control instruction sent by the smart mobile terminal and send it to the voice data processor 1042.
  • The voice data processor 1042 is also used to adjust, by means of the voice equalizer, the to-be-output sound data in the frequency band and at the volume targeted by the volume adjustment control instruction, and to send the adjusted sound data to the speaker targeted by the instruction.
  • The frequency band targeted by the volume adjustment control instruction may be low, middle, or high frequency, and the volume adjustment may be an increase or a decrease in volume.
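As a toy illustration of the behaviour just described, the per-band state such an equalizer would need to track could look like the following; the band names, the 3 dB step, and the class interface are assumptions, not the patent's design:

```python
# Hypothetical per-band gain state for the voice equalizer; the band
# names and the 3 dB step are assumptions, not taken from the patent.
BANDS = ("low", "mid", "high")

class VoiceEqualizer:
    def __init__(self):
        self.gain_db = {band: 0.0 for band in BANDS}

    def adjust(self, band, direction, step_db=3.0):
        """Apply one volume adjustment control instruction: raise or
        lower the output gain of a single frequency band."""
        if band not in self.gain_db:
            raise ValueError(f"unknown band: {band}")
        self.gain_db[band] += step_db if direction == "up" else -step_db

    def linear_gain(self, band):
        # dB value converted to the linear factor applied to that band.
        return 10 ** (self.gain_db[band] / 20)
```

A real implementation would apply `linear_gain` inside per-band filters before mixing the bands back to the targeted speaker.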
  • The voice data processor 1042 is also configured to perform voice equalization processing (using the voice equalizer) and output volume processing on the data on the downlink channel in the call mode, and to use a preset echo cancellation algorithm (Acoustic Echo Cancellation), beamforming algorithm (Beamforming), and noise suppression algorithm (Noise Cancellation) to perform echo cancellation processing, the first beamforming processing, and noise suppression processing on the data on the uplink channel.
  • Specifically, in the call mode, the voice data processor 1042 performs voice equalization processing and output volume control processing on the call voice data (that is, the data on the downlink channel) received by the wireless signal transceiver 1043 from the smart mobile terminal, and sends the processed call voice data to the first speaker 201 and the second speaker 202 for output.
  • The voice data processor 1042 also uses the volume-controlled call voice data as a reference signal to perform echo cancellation processing on the voice data acquired by the sound pickup device 103 (that is, the data on the uplink channel), performs the first beamforming processing and noise suppression processing on the echo-cancelled voice data, and sends the noise-suppressed signal to the smart mobile terminal through the wireless signal transceiver 1043.
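The call-mode uplink chain described above (echo cancellation using the downlink output as reference, then the first beamforming, then noise suppression) can be sketched schematically; the stage internals here are deliberately trivial placeholders, since the patent names the algorithms but does not disclose their implementations:

```python
def call_mode_uplink(mic_channels, downlink_reference, echo_path_gain=0.1):
    """Schematic uplink chain for call mode: echo cancellation using the
    speaker (downlink) signal as reference, then beamforming, then noise
    suppression. Stage internals are placeholders, not the patented ones."""
    # 1. Acoustic echo cancellation: subtract an estimate of the speaker
    #    signal leaking back into each microphone (assumed fixed leakage).
    echo_free = [[m - echo_path_gain * r
                  for m, r in zip(ch, downlink_reference)]
                 for ch in mic_channels]
    # 2. First beamforming (downward, toward the wearer's mouth),
    #    reduced here to a plain average across microphones.
    beam = [sum(samples) / len(echo_free) for samples in zip(*echo_free)]
    # 3. Noise suppression, reduced to a gate on very small samples.
    return [s if abs(s) > 1e-3 else 0.0 for s in beam]
```

In the described device, the returned signal would then be handed to the wireless signal transceiver 1043 for transmission to the smart mobile terminal.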
  • In the hearing aid mode, the voice data processor 1042 is also configured to use a preset feedback cancellation algorithm (Feedback Cancellation), beamforming algorithm, noise suppression algorithm, voice equalizer algorithm (Equalizer), and user voice detection algorithm (User Talking Detection) to perform feedback cancellation processing, the second beamforming processing, noise suppression processing, voice equalization processing, and user voice detection processing on the voice data, and to send the equalized voice data to the first speaker 201 and the second speaker 202 for output.
  • During this processing, the equalized voice data is also used as the reference data for the above-mentioned feedback cancellation processing.
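The hearing-aid chain above differs from the call-mode chain in that the equalized output itself serves as the reference for feedback cancellation. A minimal sketch of that feedback path, with assumed gain values and the beamforming, noise suppression, and voice detection stages omitted:

```python
class HearingAidChain:
    """Schematic hearing-aid processing loop: the equalized output of the
    previous block is kept as the reference for feedback cancellation on
    the next block. Stage internals are illustrative placeholders."""
    def __init__(self, feedback_gain=0.05, eq_gain=2.0):
        self.feedback_gain = feedback_gain  # assumed speaker-to-mic leakage
        self.eq_gain = eq_gain              # assumed flat equalizer boost
        self.last_output = None             # reference for the next block

    def process_block(self, mic_block):
        # 1. Feedback cancellation against the previous output block.
        if self.last_output is not None:
            mic_block = [m - self.feedback_gain * o
                         for m, o in zip(mic_block, self.last_output)]
        # 2. Second beamforming and noise suppression would run here
        #    (omitted: this sketch is single-channel).
        # 3. Voice equalization; the result is both the speaker feed and
        #    the next block's feedback-cancellation reference.
        out = [self.eq_gain * m for m in mic_block]
        self.last_output = out
        return out
```

The key design point is the stored `last_output`: without it, amplifying the microphone signal while the speaker sits centimetres away would quickly close the feedback loop and howl.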
  • the smart glasses further include: at least one sensor (not shown in the figure) electrically connected to the wireless communication module 104 .
  • At least one sensor is mounted on the inside and/or outside of the first temple 102A and/or the second temple 102B.
  • At least one sensor includes: at least one of a touch sensor, a proximity sensor, an accelerometer, a gyroscope, a magnetic induction sensor, and an inertial measurement unit.
  • the inertial measurement unit is a 9-axis sensor.
  • the 9-axis sensor is used to collect motion data of the user, and send it to the smart mobile terminal through the wireless communication module 104 for data processing.
  • At least one sensor includes: a 9-axis sensor 2031 , at least one touch sensor 2032 and at least one proximity sensor 2033 .
  • At least one touch sensor 2032 is installed on the outside of the first temple 102A and/or the second temple 102B.
  • At least one touch sensor is used to detect a user's first control operation, and send the detected data of the first control operation to the controller 1041, and the first control operation is used to adjust the volume.
  • the controller 1041 is further configured to control and adjust the volume of the sound output by the smart glasses in response to the first control operation according to the data of the first control operation.
  • the first control operation includes a control operation for turning up the volume and a control operation for turning down the volume.
  • The control operation for turning up the volume corresponds to the user's finger swiping on the touch sensor toward the ear, and the control operation for turning down the volume corresponds to the user's finger swiping on the touch sensor toward the frame (that is, away from the ear).
  • At least one touch sensor is also used to detect a second control operation by the user, and send the detected data of the second control operation to the controller 1041 .
  • the controller 1041 is further configured to control switching of the working mode of the smart glasses in response to the second control operation according to the data of the second control operation.
  • The second control operation preferably corresponds to the user clicking or long-pressing the touch sensor; for example, the user can switch the working mode of the smart glasses to the call mode or the hearing aid mode by long-pressing the touch sensor for more than 3 seconds.
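The touch gestures described above map naturally onto a small dispatcher; the gesture names are assumptions, while the 3-second long-press threshold comes from the example in the text:

```python
def interpret_touch(gesture, duration_s=0.0):
    """Map a touch-sensor gesture to a control operation, following the
    gestures described above. Gesture names are assumed identifiers; the
    3 s long-press threshold is taken from the example in the text."""
    if gesture == "swipe_toward_ear":
        return "volume_up"            # first control operation, turn up
    if gesture == "swipe_toward_frame":
        return "volume_down"          # first control operation, turn down
    if gesture == "long_press" and duration_s >= 3.0:
        return "switch_working_mode"  # second control operation
    return None
```

In the described device, the returned operation would be sent as event data to the controller 1041, which then adjusts the volume or switches the working mode.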
  • At least one proximity sensor is installed on the inner side of the first temple 102A and/or the second temple 102B, and is used to detect whether the user is wearing or has taken off the smart glasses, to obtain the length of time for which the user has not worn them, and to send the detection result to the controller 1041.
  • According to the detection result, the controller 1041 is configured to play the music data when the proximity sensor detects that the user is wearing the smart glasses, to stop playing the music data when the proximity sensor detects that the user has taken them off, and to perform a shutdown operation when the user has not worn the smart glasses for a preset period of time.
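The proximity-sensor behaviour above amounts to a three-way decision; a minimal sketch, with the shutdown timeout as an assumed value since the patent only says "a preset period of time":

```python
def on_proximity_update(is_worn, not_worn_duration_s, shutdown_after_s=300.0):
    """Decide the controller's action for one proximity-sensor report,
    following the behaviour described above. The 300 s shutdown timeout
    is an assumed value, not specified in the patent."""
    if is_worn:
        return "play_music"
    if not_worn_duration_s >= shutdown_after_s:
        return "shutdown"
    return "stop_music"
```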
  • the number of each of the above sensors is preferably one, so as to reduce the overall weight of the smart glasses.
  • the number of various sensors is not limited to one.
  • a proximity sensor can be provided on each of the two temples.
  • the controller 1041 is also configured to send the data acquired by each sensor to the smart mobile terminal through the wireless signal transceiver 1043 .
  • the sound pickup device 103 is a microphone array, and the microphone array includes at least two microphones.
  • the microphone array includes a first microphone M1, a second microphone M2 and a third microphone M3.
  • the distance between the third microphone M3 and the first microphone M1 is equal to the distance between the third microphone M3 and the second microphone M2.
  • the distance d1 between the first microphone M1 and the second microphone M2 is equal to the distance d2 between the third microphone M3 and the midpoint of the connecting line between the first microphone M1 and the second microphone M2 .
  • The first microphone M1, the second microphone M2, and the third microphone M3 in the microphone array are all installed on the same temple, and the first microphone M1 and the second microphone M2 are closer to the frame than the third microphone M3.
  • The controller 1041 is also used to control the first microphone M1 and the second microphone M2 to acquire voice data in the call mode, and to control the first microphone M1, the second microphone M2, and the third microphone M3 to acquire voice data in the hearing aid mode.
  • controlling the microphones at different positions to pick up sound can reduce the noise in the acquired voice data and improve the speed of signal processing.
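The stated geometry (d1 equal to d2, and M3 equidistant from M1 and M2) and the mode-dependent microphone selection can be checked with a small sketch; the concrete coordinates are an assumed example satisfying those constraints, not dimensions from the patent:

```python
import math

# Hypothetical 2-D coordinates (metres) satisfying the stated geometry:
# M1 and M2 sit nearer the frame, M3 behind them, with
# |M1M2| == |M3 to midpoint(M1M2)| and |M3M1| == |M3M2|.
M1 = (0.0, 0.01)
M2 = (0.0, -0.01)
M3 = (-0.02, 0.0)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def active_mics(mode):
    """Select which microphones pick up sound in each working mode,
    as described above."""
    if mode == "call":
        return [M1, M2]        # front pair only
    if mode == "hearing_aid":
        return [M1, M2, M3]    # full array for the forward beam
    raise ValueError(f"unknown mode: {mode}")
```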
  • The smart glasses further include a battery 204, which is installed on the first temple 102A and electrically connected to the wireless communication module 104, so as to supply power to the wireless communication module 104 and to electronic components such as the sensors, speakers, and microphones.
  • the smart glasses further include at least one hearing aid (not shown in the figure), the at least one hearing aid is installed on the first temple 102A and/or the second temple 102B, and is electrically connected to the wireless communication module 104 .
  • the wireless communication module 104 is also used for controlling the at least one hearing aid to output voice data in the hearing aid mode.
  • The smart glasses further include at least one control button (not shown in the figure); the at least one control button is installed on the outside of the first temple 102A and/or the second temple 102B and is electrically connected to the wireless communication module 104.
  • the at least one control button is used to trigger the wireless communication module 104 to control switching of the working mode or running state of the smart glasses.
  • the running state includes an idle state and a working state, and the working state includes the call mode and the hearing aid mode.
  • the various electronic components of the above-mentioned smart glasses can be connected through a bus.
  • the relationship between the above-mentioned components of the smart glasses may be a substitution relationship or a superposition relationship. That is, all the above-mentioned components in this embodiment can be installed on one smart glasses, or some of the above-mentioned components can also be selectively installed according to requirements.
  • the smart glasses are also provided with a peripheral connection interface, which can be, for example, a PS/2 interface, a serial interface, a parallel interface, an IEEE 1394 interface, or a USB (Universal Serial Bus) interface.
  • the function of the replaced component can be realized through the peripheral device connected to the connection interface, such as: external speaker, external sensor and so on.
  • the wireless communication module controls and switches the working mode of the smart glasses;
  • in the call mode, the first beamforming process is performed on the voice data acquired by the sound pickup device, so that the sound beam of the sound pickup device points toward the user's mouth;
  • in the hearing aid mode, the second beamforming process is performed on the voice data, so that the sound beam of the sound pickup device points to the front, thereby realizing the phone call function and the hearing aid function on the same hardware platform of the smart glasses and expanding the functions of the smart glasses.
  • the weight of the smart glasses can be reduced, power consumption can be reduced, and the manufacturing cost of the smart glasses can be reduced.
  • FIG. 5 is a schematic structural diagram of a smart glasses control system provided by an embodiment of the present application.
  • the smart glasses control system includes: smart glasses 301 and a smart mobile terminal 302 .
  • the structure of the smart glasses 301 is the same as that of the smart glasses in the embodiments shown in FIGS. 1 to 4.
  • for the structure and functions of the smart glasses 301, please refer to the relevant descriptions in the above embodiments shown in FIGS. 1 to 4.
  • Smart mobile terminals 302 may include, but are not limited to: cellular phones, smart phones, other wireless communication devices, personal digital assistants (PDAs), audio players, other media players, music recorders, video recorders, cameras, other media recorders, smart radios, laptop computers, portable multimedia players (PMPs), MP3 players, digital cameras, and smart wearable devices (such as smart watches, smart bracelets, etc.).
  • An Android or iOS operating system is installed on the smart mobile terminal 302.
  • the smart mobile terminal 302 is used for data interaction with the smart glasses 301, for example: receiving, storing and processing the data sent by the smart glasses 301, and, when performing target tasks such as playing music or making calls, sending the played music data or the received call voice data to the smart glasses 301.
  • the communication protocol used by the smart mobile terminal 302 when exchanging data with the smart glasses 301 is consistent with the communication protocol used by the smart glasses 301 .
  • the smart mobile terminal 302 may include a control circuit, which may include a storage and processing circuit 300 .
  • the storage and processing circuitry 300 may include memory, such as hard disk drive memory, non-volatile memory (such as flash memory or other electrically programmable erasable memory used to form solid-state drives, etc.), volatile memory (such as static or dynamic random access memory, etc.), etc., which are not limited in this embodiment of the present application.
  • the processing circuitry in the storage and processing circuitry 300 may be used to control the operation of the smart mobile terminal 302 .
  • the processing circuit may be implemented based on one or more microprocessors, microcontrollers, digital signal processors, baseband processors, power management units, audio codec chips, application specific integrated circuits, display driver integrated circuits, and the like.
  • the storage and processing circuit 300 can be used to run software in the smart mobile terminal 302, such as: artificial intelligence housekeeper applications, Internet browsing applications, Voice over Internet Protocol (VoIP) phone calling applications, email applications, media playback applications, operating system functions, etc.
  • These software can be used to perform control operations such as: data processing and analysis of the motion data sent by the smart glasses 301 based on a preset analysis algorithm, camera-based image acquisition, ambient light measurement based on the ambient light sensor, proximity measurement based on the proximity sensor, information display functions based on status indicators such as LED status indicators, touch event detection based on touch sensors, functions associated with displaying information on multiple (e.g. layered) displays, operations associated with performing wireless communication functions, operations associated with collecting and generating audio signals, control operations associated with collecting and processing button press event data, and other functions in the smart mobile terminal 302, which are not limited in this embodiment of the present application.
  • the memory stores executable program codes;
  • the processor coupled to the memory calls the executable program codes stored in the memory, and executes the relevant steps in the following method embodiments of the present application, so as to realize the various functions of the smart mobile terminal 302 described in detail below.
  • the smart mobile terminal 302 may also include an input/output circuit 420 .
  • the input/output circuit 420 can be used to enable the smart mobile terminal 302 to realize data input and output, that is, allow the smart mobile terminal 302 to receive data from external devices and also allow the smart mobile terminal 302 to output data from the smart mobile terminal 302 to external devices.
  • the input/output circuit 420 may further include the sensor 320 .
  • the sensor 320 can include an ambient light sensor, a proximity sensor based on light and capacitance, a touch sensor (for example, an optical touch sensor and/or a capacitive touch sensor, where the touch sensor can be a part of the touch screen or can be used independently as a touch sensor structure), an accelerometer, and other sensors, etc.
  • Input/output circuitry 420 may also include one or more displays, such as display 140 .
  • the display 140 may include one or a combination of liquid crystal displays, organic light emitting diode displays, electronic ink displays, plasma displays, and displays using other display technologies.
  • Display 140 may include a touch sensor array (ie, display 140 may be a touchscreen display).
  • the touch sensor may be a capacitive touch sensor formed from an array of transparent touch sensor electrodes such as indium tin oxide (ITO) electrodes, or may be a touch sensor formed using other touch technologies, such as acoustic touch, pressure-sensitive touch, resistive touch, optical touch, etc., which are not limited in this embodiment of the present application.
  • the smart mobile terminal 302 can also include an audio component 360 .
  • the audio component 360 can be used to provide audio input and output functions for the smart mobile terminal 302 .
  • the audio components 360 in the smart mobile terminal 302 may include speakers, sound pickup devices, buzzers, tone generators and other components for generating and detecting sounds.
  • the communication circuit 380 can be used to provide the smart mobile terminal 302 with the ability to communicate with external devices.
  • the communication circuit 380 may include analog and digital input/output interface circuits, and wireless communication circuits based on radio frequency signals and/or optical signals.
  • Wireless communication circuitry in communication circuitry 380 may include radio frequency transceiver circuitry, power amplifier circuitry, low noise amplifiers, switches, filters, and antennas.
  • the wireless communication circuit in the communication circuit 380 may include near field communication (NFC) circuits for supporting near field communication by transmitting and receiving near-field coupled electromagnetic signals.
  • communication circuitry 380 may include a near field communication antenna and a near field communication transceiver.
  • the communication circuit 380 may also include a cellular phone transceiver and antenna, a wireless local area network transceiver circuit and antenna such as Bluetooth, WiFi, ZigBee, DLNA, UWB, RFID, etc.
  • the smart mobile terminal 302 may further include a battery, a power management circuit and other input/output units 400 .
  • the input/output unit 400 may include buttons, joystick, click wheel, scroll wheel, touch pad, keypad, keyboard, camera, light emitting diodes and other status indicators, and the like.
  • the user can input commands through the input/output circuit 420 to control the operation of the smart mobile terminal 302 , and can use the output data of the input/output circuit 420 to receive status information and other outputs from the smart mobile terminal 302 .
  • the smart glasses control system further includes: a cloud smart device 303 .
  • the cloud smart device 303 can be, for example, a cloud server or a server cluster, and is used for data interaction with the smart mobile terminal 302, storing data sent by the smart mobile terminal 302, and processing the data based on preset processing logic.
  • the cloud smart device 303 analyzes the motion data sent by the smart mobile terminal 302 based on preset analysis logic to statistically analyze the motion index parameters of the wearer of the smart glasses 301, and, in combination with the motion index parameters of other smart mobile terminals, provides reference suggestions to the smart mobile terminal 302. For example, when the cloud smart device 303 finds that the exercise indicators set by users on other smart mobile terminals are more reasonable, it can send suggestion information to the smart mobile terminal 302 to remind the user to reset.
  • the Bluetooth protocol is used as the communication protocol between the smart mobile terminal 302 and the smart glasses 301
  • a cellular mobile communication protocol (such as: 2G, 3G, 4G, 5G protocols, etc.) is used as the communication protocol between the cloud smart device 303 and the smart mobile terminal 302.
  • the smart mobile terminal 302 is also used to pair with the smart glasses 301 based on the Bluetooth protocol, and after the pairing is successful, send the played music data to the smart glasses 301, so that the music data can be played through the smart glasses 301.
  • the smart mobile terminal 302 is also configured to send the above-mentioned sound receiving direction control instruction to the smart glasses 301 according to the user's sound receiving direction selection operation;
  • the smart mobile terminal 302 is further configured to send the volume adjustment control instruction to the smart glasses 301 according to the user's volume adjustment operation.
  • the smart mobile terminal 302 has a built-in client program (APP) for controlling and modifying the relevant parameters of the smart glasses, and the user can select the direction of sound collection and adjust the volume in the human-computer interaction interface provided by the APP.
  • the selection operation of the sound receiving direction may include, but is not limited to, an operation in which the user clicks a button, key or menu preset in the human-computer interaction interface for selecting the sound receiving direction of the smart glasses 301.
  • the volume adjustment control instruction may correspond to: the operation in which the user clicks a button, key or menu preset in the human-computer interaction interface for adjusting the frequency band and/or volume of the sound data played by the smart glasses 301 and the speaker that plays the sound data.
  • the smart mobile terminal 302 is also configured to acquire GPS data through a GPS module configured on the smart mobile terminal, and send the acquired GPS data to the smart glasses 301 for positioning of the smart glasses 301 .
  • the smart mobile terminal 302 is also used to receive and store the motion data sent by the smart glasses 301 in real time, perform motion index calculations based on the motion data and the GPS data, generate real-time voice data for notifying or reminding the user of the motion status based on the calculation results, and send the real-time voice data to the smart glasses 301 for output.
  • the real-time voice data includes a notification or reminder voice of the calculation result of the exercise index.
  • a client program, such as the artificial intelligence housekeeper App, may be installed on the smart mobile terminal 302, through which the data interaction with the smart glasses 301 and the processing and analysis of the motion data sent by the smart glasses 301 are performed, such as: running indicator calculation, posture monitoring and reminders, etc.
  • the above music data and real-time voice data belong to the data of the downlink channel of the smart glasses 301; after voice equalization processing and output volume control processing through the voice data processor 1042, the smart glasses 301 send the received music data and real-time voice data to the first speaker 201 and the second speaker 202 for output.
  • the motion data includes the data acquired by the smart glasses 301 through the 9-axis sensor (that is, (Ax, Ay, Az; Gx, Gy, Gz; Mx, My, Mz)), the motion index is a running index, and the running index includes: pace, distance, number of steps, left-right head balance, step distance and stride frequency.
  • the 9-axis sensor refers to the combination of an accelerometer, a gyroscope and a magnetic induction sensor.
  • the data measured by each of these three types of sensors can be decomposed into components along the X, Y and Z axes of the spatial coordinate system, so they are also called the 3-axis accelerometer, 3-axis gyroscope and 3-axis magnetic induction sensor.
  • the smart mobile terminal 302 is also used to perform indicator calculation, posture monitoring and exercise reminder operations based on the motion data acquired by the 9-axis sensor and local GPS data, and display the indicator calculation results in real time on the display of the smart mobile terminal 302 .
  • the smart mobile terminal 302 is also used to respond to the voice instructions sent by the smart glasses 301, perform operations of dialing, answering or hanging up calls, and send received call voice data to the smart glasses 301 during the call , so as to play the call voice data through the speaker on the smart glasses 301 .
  • the voice command is obtained by the voice data processor of the smart glasses 301 by using a preset voice recognition algorithm to perform voice command recognition processing on the voice data acquired by the sound pickup device of the smart glasses 301 .
  • the body of the smart glasses 301 in this application is provided with at least two microphones, and the user can obtain voice data through the microphones on the smart glasses 301 for issuing voice commands and answering calls.
  • the sound from the user is collected by the microphone on the smart glasses 301 and transmitted after noise reduction processing, while the other party's voice is played through the speaker of the smart glasses 301. Therefore, during a call, the user can put the smart mobile terminal 302 in a pocket or on the table and use the freed hands for other purposes, thereby improving the convenience of answering calls.
  • the smart glasses use the wireless communication module to switch the working mode of the smart glasses;
  • in the call mode, the first beamforming process is performed on the voice data acquired by the sound pickup device, so that the sound beam of the sound pickup device points toward the user's mouth;
  • in the hearing aid mode, the second beamforming process is performed on the voice data, so that the sound beam of the sound pickup device points forward, thus realizing the phone call function and the hearing aid function on the same hardware platform of the smart glasses and expanding the capabilities of the smart glasses.
  • since the functions of some components are realized by the smart mobile terminal, the structure of the smart glasses can also be simplified, thereby reducing the weight, power consumption and manufacturing cost of the smart glasses.
  • FIG. 8 is a schematic flowchart of an implementation of a method for controlling smart glasses provided by an embodiment of the present application.
  • the structure of the smart glasses in this embodiment is the same as that of the smart glasses in the embodiments shown in FIGS. 1 to 4 .
  • the method includes the following steps:
  • the operating states of the smart glasses include: an idle state and a working state.
  • the smart glasses include two working modes: the call mode and the hearing aid mode.
  • in the call mode, the user uses the phone call function of the smart glasses to have a phone conversation, through the mobile communication terminal and the wireless communication network, with the person at the other end of the network.
  • in the hearing aid mode, the user uses the hearing aid function of the smart glasses to have a face-to-face conversation with other people.
  • the smart glasses can switch between different running states and working modes according to the different preset events detected. For example, in the idle state, when an incoming call event is detected, the wireless communication module of the smart glasses controls the smart glasses to switch into the call mode, and then, in the call mode, when a hang-up event is detected, the wireless communication module controls the smart glasses to return to the idle state. In the hearing aid mode, when an incoming call event is detected, the wireless communication module controls the smart glasses to switch to the call mode, and then, in the call mode, when a hang-up event is detected, the wireless communication module controls the smart glasses to switch back to the hearing aid mode; subsequently, in the hearing aid mode, when the key event is detected again, the wireless communication module controls the smart glasses to return to the idle state.
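The event-driven switching described above can be sketched as a small state machine. This is an illustrative sketch only; the event names (`incoming_call`, `hang_up`, `key_press`) are assumptions, and the rule that hanging up restores the pre-call state follows the example given above.

```python
IDLE, CALL, HEARING_AID = "idle", "call", "hearing_aid"

class GlassesStateMachine:
    def __init__(self):
        self.state = IDLE
        self._before_call = IDLE  # state to restore after a hang-up event

    def on_event(self, event):
        if event == "incoming_call" and self.state != CALL:
            # an incoming call preempts idle or hearing aid mode
            self._before_call = self.state
            self.state = CALL
        elif event == "hang_up" and self.state == CALL:
            # hanging up returns to the state before the call
            self.state = self._before_call
        elif event == "key_press":
            # a control button toggles between idle and hearing aid mode
            if self.state == IDLE:
                self.state = HEARING_AID
            elif self.state == HEARING_AID:
                self.state = IDLE
        return self.state
```

On the actual device, the incoming-call and hang-up events would arrive as notifications from the smart mobile terminal over Bluetooth, and the key event from a control button.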
  • the above-mentioned incoming call event and hanging up event can be monitored by the smart mobile terminal.
  • when the smart mobile terminal detects an incoming call event or a hang-up event through the built-in event monitor, it generates notification information for the detected event and sends it to the smart glasses.
  • the smart glasses receive the notification information through the wireless communication module, and confirm that the corresponding event is detected.
  • the smart glasses can also be provided with three state control buttons corresponding to the different running states and working modes; in response to the control operation of the user clicking a button, the smart glasses, through the wireless communication module, enter the running state or working mode corresponding to the clicked button.
  • at least two microphones serving as the sound pickup device are installed on the smart glasses to collect voice data of the user of the smart glasses or of the user's conversation partner.
  • a microphone array composed of three microphones is installed on one temple of the smart glasses, wherein the first microphone and the second microphone are closer to the frame than the third microphone, and step S502 specifically includes:
  • Step S5021: after the working mode of the smart glasses is switched to the call mode, control the first microphone and the second microphone through the wireless communication module to acquire the voice data of the user;
  • Step S5022: after the working mode of the smart glasses is switched to the hearing aid mode, control the first microphone, the second microphone and the third microphone through the wireless communication module to acquire the voice data of the user's conversation partner.
  • controlling the microphones at different positions to pick up sound can reduce the noise in the acquired voice data and improve the speed of signal processing.
  • when the current working mode of the smart glasses is the hearing aid mode, a second beamforming process is performed on the voice data through the wireless communication module, so that the sound beam of the sound pickup device points forward.
  • this application processes the voice data by using preset algorithms, so that in the hearing aid mode and the call mode the sound beam of the microphone array points in a different direction respectively.
  • the function of the beamforming algorithm is to make the microphone array only receive the sound from below, that is, to make the sound beam of the microphone array aim at the user's mouth, and at the same time reduce the intensity of sound waves from other directions.
  • microphones 1, 2, and 3 are used to acquire the voice of the other party at the same time, and the distance from microphone 3 to microphone 1 is equal to the distance from microphone 3 to microphone 2.
  • the smart glasses use the beamforming algorithm so that the sound beam of the microphone array points forward at a preset angle, so as to better acquire, from the front of the smart glasses, the voice data of the wearer's conversation partner.
  • here, the function of the beamforming algorithm is to suppress the sound coming from behind the smart glasses.
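The patent does not name a specific beamforming algorithm; a minimal delay-and-sum sketch illustrates the principle. Per-microphone delays are chosen so that sound from the target direction (downward toward the mouth in call mode, forward in hearing aid mode) adds coherently while sound from other directions partially cancels. The function name and the use of fixed integer sample delays are illustrative assumptions.

```python
def delay_and_sum(channels, delays):
    """Steer a microphone array toward one direction.

    channels: list of equal-length sample lists, one per microphone.
    delays:   integer sample delays that time-align the target direction
              (the steering direction is encoded entirely in these delays).
    Returns the averaged, aligned signal (shortened by the largest delay).
    """
    max_d = max(delays)
    n = len(channels[0]) - max_d
    out = []
    for i in range(n):
        # shift each channel by its delay, then average across microphones
        s = sum(ch[i + d] for ch, d in zip(channels, delays))
        out.append(s / len(channels))
    return out
```

Switching between the first (call-mode) and second (hearing-aid-mode) beamforming process then amounts to swapping in a different delay set for the same microphone samples.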
  • the smart glasses also execute, through the DSP in the wireless communication module, other processing algorithms on the voice data according to the state of the smart glasses and the received user instructions.
  • the smart glasses perform voice equalization processing and output volume control processing on the data of the downlink channel through the DSP, and sequentially perform echo cancellation processing, first beamforming processing and noise suppression processing on the data of the uplink channel.
  • the data of the downlink channel is input through the wireless signal transceiver in the wireless communication module.
  • the wireless signal transceiver preferably adopts the Bluetooth protocol as the communication protocol.
  • the smart glasses use the DSP to sequentially perform voice equalization processing and output volume control processing on the call voice data from the smart mobile terminal input through the wireless signal transceiver (such as the wireless Bluetooth input of the smart glasses in Figure 13), and send the call voice data after the output volume control processing to the speaker of the smart glasses for output; at the same time, the DSP uses the call voice data after the output volume control processing as a reference signal to perform echo cancellation processing on the voice data obtained by the sound pickup device, then performs the first beamforming processing and noise suppression processing, and outputs the data after noise suppression processing to the smart mobile terminal through the wireless signal transceiver.
  • the echo cancellation processing on the voice data of the uplink channel is to use the echo cancellation algorithm to compare the output signal of the speaker with the input signal of the microphone array, thereby canceling the echo and interrupting the loop chain between the speaker and the microphone array.
  • the noise suppression processing on the voice data of the uplink channel is to use the noise suppression algorithm to reduce or eliminate the volume of the noise, and at the same time amplify the volume of the other party's speech.
  • by using the noise suppression algorithm, even if the user is in a place with a lot of environmental noise, the far end cannot hear the loud environmental noise and can hear the clear voice of the smart glasses user.
  • the voice equalization processing of the voice data of the downlink channel is to use the voice equalizer to perform voice equalization processing on the far-end voice signal, so as to strengthen the frequency signals at which the user has hearing loss, thereby compensating those frequency signals.
  • the output volume control processing on the voice data of the downlink channel is to use the output volume control algorithm to adjust the output volume of the speaker.
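The echo cancellation step above compares the speaker output (the reference) with the microphone input. One classic way to do this, shown here purely as an assumed illustration since the patent names no specific algorithm, is a normalized LMS (NLMS) adaptive filter that learns the speaker-to-microphone echo path and subtracts its estimate.

```python
def nlms_echo_canceller(reference, mic, taps=8, mu=0.5, eps=1e-8):
    """Remove the portion of `mic` explained by `reference`.

    reference: samples played by the speaker (far-end signal).
    mic:       samples captured by the microphone array.
    Returns the residual (near-end) signal after echo cancellation.
    """
    w = [0.0] * taps                      # adaptive FIR echo-path estimate
    out = []
    for n in range(len(mic)):
        # most recent `taps` reference samples (zero-padded at the start)
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        y = sum(wk * xk for wk, xk in zip(w, x))   # estimated echo
        e = mic[n] - y                             # residual after cancellation
        norm = sum(xk * xk for xk in x) + eps      # input power normalization
        w = [wk + mu * e * xk / norm for wk, xk in zip(w, x)]
        out.append(e)
    return out
```

As the filter adapts, the residual tends toward only the near-end speech, which is what breaks the speaker-to-microphone loop described above.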
  • the wireless signal transceiver adopts at least one of Bluetooth protocol, Wi-Fi protocol, near field communication protocol, Zigbee, Digital Living Network Alliance protocol, carrierless communication protocol, radio frequency identification protocol and cellular mobile communication protocol As a communication protocol for data interaction with smart mobile terminals.
  • before acquiring voice data through the sound pickup device, the method further includes:
  • based on the Bluetooth protocol, the wireless communication module performs Bluetooth pairing with the smart mobile terminal to establish a data transmission channel between the smart glasses and the smart mobile terminal; subsequent data interaction between the smart glasses and the smart mobile terminal can be carried out through this data transmission channel.
  • in the hearing aid mode, the smart glasses perform feedback cancellation processing, second beamforming processing, noise suppression processing, voice equalization processing and user voice detection processing on the voice data through the DSP, and the voice data after voice equalization processing is sent to the speaker of the smart glasses for output; at the same time, the voice data after voice equalization processing is used as reference data for the feedback cancellation processing.
  • the voice data after the voice equalization processing can also undergo output volume control processing; the specific processing method is the same as the output volume control processing shown in Figure 13, for details of which please refer to the relevant description of the above-mentioned FIG. 13, which will not be repeated here.
  • Feedback cancellation processing is to use the feedback cancellation algorithm to cancel the echo by comparing the output signal of the speaker with the input signal of the microphone array, and interrupt the loop chain of the speaker and the microphone array.
  • Noise suppression processing is to use the noise suppression algorithm to reduce or eliminate the noise volume, and at the same time amplify the volume of the other party's speech.
  • Speech equalization processing is to use the speech equalizer to strengthen the sound signal of the specific frequency, so as to achieve the purpose of compensating the sound signal of the specific frequency.
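The speech equalization step can be illustrated with a simple frequency-domain sketch that boosts assumed hearing-loss bands. The band edges, gains, and the direct DFT implementation are illustrative only (a real hearing aid would use efficient filter banks or biquad filters); the function name is an assumption.

```python
import cmath

def equalize(samples, band_gains, sample_rate=16000):
    """Apply per-band gain to a block of audio samples.

    band_gains: list of (low_hz, high_hz, gain) triples applied to the
    DFT of `samples`; frequencies outside every band pass unchanged.
    """
    n = len(samples)
    # direct O(n^2) DFT, fine for a small illustrative block
    spectrum = [sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)) for k in range(n)]
    for k in range(n):
        freq = min(k, n - k) * sample_rate / n  # fold negative frequencies
        for low, high, gain in band_gains:
            if low <= freq < high:
                spectrum[k] *= gain
    # inverse DFT back to the time domain
    return [sum(spectrum[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]
```

Scaling both the positive and the folded negative frequency bin by the same gain keeps the output real, which is why the band test uses the folded frequency.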
  • since the microphone array of the smart glasses is very close to the user's mouth, when the user speaks, the microphone array picks up a loud signal and plays it on the speaker of the smart glasses, so that the user would hear his or her own voice through the speaker when speaking.
  • the user voice detection process uses the user voice detection algorithm to continuously detect and analyze the signal received by the microphone array. When the signal is detected to be the user's voice, the volume of the signal received by the microphone array is reduced to a preset level.
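The user voice detection and volume reduction described above can be sketched with a simple frame-energy detector: because the wearer's mouth is much closer to the array than any conversation partner, the wearer's own voice arrives much louder. The threshold, duck gain, and function name are illustrative assumptions; the patent does not specify the detection algorithm or the preset level.

```python
def duck_own_voice(frames, energy_threshold=1.0, duck_gain=0.1):
    """Attenuate frames that appear to contain the wearer's own voice.

    frames: list of sample lists. Frames whose mean-square energy exceeds
    `energy_threshold` are assumed to be the wearer's own (close, loud)
    voice and are scaled by `duck_gain`; other frames pass unchanged.
    """
    out = []
    for frame in frames:
        energy = sum(s * s for s in frame) / len(frame)
        gain = duck_gain if energy > energy_threshold else 1.0
        out.append([s * gain for s in frame])
    return out
```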
  • the method further includes: detecting the user's first control operation through a touch sensor installed on the temple of the smart glasses, and the first control operation is used to adjust the volume;
  • the wireless communication module controls and adjusts the volume of the sound output by the smart glasses in response to the first control operation.
  • the first control operation includes a control operation for turning up the volume and a control operation for turning down the volume.
  • the control operation for turning up the volume corresponds to the action of the user's finger swiping on the touch sensor toward the ear, and the control operation for turning down the volume corresponds to the action of the user's finger swiping on the touch sensor toward the frame.
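The swipe-to-volume mapping above can be sketched as a small handler. The direction names, the 5-unit step, and the 0-100 volume range are illustrative assumptions.

```python
def handle_swipe(direction, volume, step=5, vmax=100):
    """Map a swipe on the temple touch sensor to a new volume level.

    'toward_ear' raises the volume, 'toward_frame' lowers it; any other
    gesture leaves the volume unchanged. The result is clamped to [0, vmax].
    """
    if direction == "toward_ear":
        return min(vmax, volume + step)
    if direction == "toward_frame":
        return max(0, volume - step)
    return volume
```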
  • the detection of the control operation in step S501 may be based on touch sensors installed on the temples of the smart glasses.
  • the second control operation of the user is detected through the touch sensor, and when the second control operation is detected, the wireless communication module, in response to the second control operation, switches the working mode of the smart glasses to the call mode or the hearing aid mode, wherein the second control operation corresponds to the action of the user long-pressing the touch sensor; for example, when it is detected that the user has long-pressed the touch sensor for more than 3 seconds, the working mode of the smart glasses is switched to the call mode or the hearing aid mode. Whether to switch to the call mode or the hearing aid mode can be determined by the working mode before switching: if the working mode before switching is the call mode, switch to the hearing aid mode.
  • a control button is installed on the outside of the temple or the frame of the smart glasses, and when it is detected that the user clicks the control button, the wireless communication module controls and switches the working mode of the smart glasses to the call mode or the hearing aid mode.
  • the method further includes the following steps:
  • Step S601: detect, through the proximity sensor installed on the inner side of the temple of the smart glasses, whether the user wears or takes off the smart glasses;
  • Step S602: when the proximity sensor detects that the user is wearing the smart glasses, control the speaker of the smart glasses through the wireless communication module to play audio data;
  • Step S603: when the proximity sensor detects that the user takes off the smart glasses, control the speaker through the wireless communication module to stop playing the audio data, and when the proximity sensor detects that the user has taken off the smart glasses for more than a preset duration, perform a shutdown operation through the wireless communication module to reduce power consumption.
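Steps S601 to S603 can be sketched as a small decision function. The 300-second shutdown timeout is an assumed value for the "preset duration", and the action names are illustrative.

```python
def wear_event_action(is_worn, seconds_since_removed, shutdown_after=300):
    """Decide the playback/power action from the proximity-sensor state.

    is_worn: True when the proximity sensor reports the glasses are worn.
    seconds_since_removed: time elapsed since the glasses were taken off.
    Returns 'play', 'pause', or 'shutdown'.
    """
    if is_worn:
        return "play"            # Step S602: resume playing audio data
    if seconds_since_removed >= shutdown_after:
        return "shutdown"        # Step S603: power off to save energy
    return "pause"               # Step S603: stop playback, stay on
```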
  • the audio data played by the smart glasses includes: the music data that was not finished playing when the user last took off the smart glasses, the default music data stored in the built-in memory of the smart glasses, and the music data or call voice data received from the smart mobile terminal through the wireless signal transceiver.
  • the memory can be electrically connected to the wireless communication module of the smart glasses through a bus, or the memory can also be the memory of the MCU in the wireless communication module.
  • the method further includes the following steps:
  • Step S701 performing voice recognition on the voice data through the wireless communication module, and sending the recognized voice command to the smart mobile terminal;
  • Step S702 the smart mobile terminal responds to the voice command, and executes the operation pointed to by the voice command, wherein the operation pointed to by the voice command includes: any one of making a call, answering a call, and hanging up a call;
  • Step S703 when the operation directed by the voice command is to make or answer a call, the smart mobile terminal sends the received call voice data to the smart glasses, so as to output the call voice data through the speaker of the smart glasses.
  • the method further includes the following steps:
  • Step S801: acquire the user's motion data through the motion sensor installed on the smart glasses, and send the motion data to the smart mobile terminal in real time through the wireless communication module;
  • Step S802: the smart mobile terminal receives and stores the motion data in real time, calculates motion indexes according to the motion data and GPS data, generates real-time voice data for notifying or reminding the user of the motion state according to the calculation result, and sends the real-time voice data to the smart glasses, so that the real-time voice data is output through the speakers of the smart glasses.
  • the GPS data is acquired through the GPS module of the smart mobile terminal.
  • the motion sensor is a 9-axis sensor
  • the motion index is a running index
  • the running index includes: pace, distance, number of steps, left-right balance of the head, stride length and stride frequency;
  • the method also includes the steps of:
  • Step S803 the smart mobile terminal performs index calculation, posture monitoring and exercise reminder operations according to the exercise data and the GPS data, and displays the index calculation results in real time on the display of the smart mobile terminal.
  • the motion data includes sensing data acquired by the 9-axis sensor, such as: 3-dimensional accelerometer data Ax, Ay, Az; 3-dimensional gyroscope data Gx, Gy, Gz; and 3-dimensional magnetic induction sensor data Mx, My, Mz.
  • the detection data of the 9-axis sensor may include, but is not limited to: pedometer data, single-click or double-click operation data.
  • the smart mobile terminal performs algorithmic processing and analysis on the above exercise data in combination with the local GPS data, calculates the user's exercise index, and analyzes the exercise index to obtain the user's exercise state.
  • the method further includes the following steps:
  • when the smart mobile terminal detects the user's sound-pickup direction selection operation in a preset client program, it sends a sound-pickup direction control command to the smart glasses, the command including the target direction indicated by the selection operation;
  • the smart glasses receive the sound-pickup direction control command; if the target direction is forward, the second beamforming process is performed on the voice data in the hearing aid mode, so that the sound beam of the at least one sound pickup device points forward; if the target direction is omnidirectional, the second beamforming process is not performed on the voice data in the hearing aid mode, so that the at least one sound pickup device picks up sound from all directions.
  • the user can control the sound direction of the smart glasses in the hearing aid mode through the app in the smart mobile terminal.
  • the user can use the app to select whether the sound beam of the smart glasses points forward or picks up sound 360° omnidirectionally, thereby improving the convenience and flexibility of the sound-pickup direction control of the smart glasses.
  • the method further includes the following steps:
  • when the smart mobile terminal detects a volume adjustment operation performed by the user in the preset client program, it sends a volume adjustment control instruction to the smart glasses, the instruction including the target speaker, the target frequency band and the target volume indicated by the volume adjustment operation;
  • the smart glasses receive the volume adjustment control instruction, and use the voice equalizer to adjust the sound data output by the target speaker to the sound data of the target frequency band and target volume.
  • through the app in the smart mobile terminal, the user can adjust the speakers of the smart glasses as well as the frequency band and volume of the played sound; for example, the user can select one, several or all speakers of the smart glasses through the app, adjust the frequency band of the sound data to be played to the desired band, and increase or decrease the volume of the sound data, thereby improving the convenience and flexibility of the sound playback control of the smart glasses.
  • the wireless communication module installed on the smart glasses switches the working mode of the smart glasses based on the user's control operation; in the call mode, the wireless communication module performs the first beamforming process on the voice data acquired by the sound pickup device, so that the sound beam of the sound pickup device points downward;
  • in the hearing aid mode, the wireless communication module performs the second beamforming process on the voice data, so that the sound beam of the sound pickup device points forward, thereby realizing the phone call function and the hearing aid function on the same hardware platform of the smart glasses and expanding the functions of the smart glasses.
  • in addition, since no additional hearing aid device needs to be installed on the smart glasses, the weight, power consumption and manufacturing cost of the smart glasses can be reduced.
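The long-press mode toggle described above can be sketched as follows (a minimal illustration only; the function name and mode labels are assumptions, with the 3-second threshold taken from the text):

```python
def mode_after_long_press(current_mode, press_duration_s, threshold_s=3.0):
    """Sketch: a press on the touch sensor longer than threshold_s toggles
    the working mode between call mode and hearing aid mode."""
    if press_duration_s <= threshold_s:
        return current_mode          # short press: no mode toggle here
    return "hearing_aid" if current_mode == "call" else "call"
```

For example, a 4-second press while in call mode would switch the glasses into hearing aid mode, consistent with the rule that the target mode is determined by the mode before switching.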


Abstract

一种智能眼镜及其控制方法和系统,其中该智能眼镜包括:镜框、多条镜腿、至少一个拾音装置以及无线通信模块;镜腿连接镜框,至少一个拾音装置设置在多条镜腿中的至少一条镜腿上,无线通信模块设置在任意一条镜腿的腔体内并与拾音装置电性连接;无线通信模块,用于控制切换智能眼镜的工作模式,在通话模式下对拾音装置获取的语音数据进行第一波束成形处理,以使得拾音装置的声束收音指向下方,以及在助听模式下对该语音数据进行第二波束成形处理,以使得拾音装置的声束收音指向前方。本申请可实现基于智能眼镜的同一硬件平台的电话通话功能和助听功能,并可减轻智能眼镜的重量,减小耗电量,降低智能眼镜的制造成本。

Description

智能眼镜及其控制方法和系统
本申请要求于2021年7月22日提交至中国国家知识产权局专利局、申请号为CN2021108334227、名称为“智能眼镜及其控制方法和系统”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请实施例涉及电子装置及通信技术领域,尤其涉及一种智能眼镜及其控制方法和系统。
背景技术
现有的智能眼镜通常把微控制器、传感器(如:计步器、心率传感器、加速器、陀螺仪、全球定位系统(Global Positioning System,GPS)等)等电子部件全放置在眼镜主体上,所有智能的功能都利用内置智能眼镜的微控制器接收传感器的数据进行计算完成。
一方面,由于眼镜主体上负载了太多的电子部件,上述智能眼镜通常重量很重,不能一整天穿戴,且耗电量高。另一方面,大部分智能眼镜的焦点只集中在基于设备控制的智能功能上,缺乏人性化的控制,且不能提供助听器功能。再一方面,由于眼镜主体上的空间有限,电子部件之间距离很近,彼此之间会产生干扰,特别是扬声器和拾音装置之间会生成回路,容易导致回音或发生啸叫声。
技术问题
本申请实施例提供一种智能眼镜及其控制方法和系统,用于实现基于智能眼镜的同一硬件平台的电话通话功能和助听功能,并可减轻智能眼镜的重量,减小耗电量,降低智能眼镜的制造成本。
技术解决方案
本申请实施例一方面提供了一种智能眼镜,包括:镜框、多条镜腿、至少一个拾音装置以及无线通信模块;
所述镜腿连接所述镜框,所述至少一个拾音装置设置在所述多条镜腿中的至少一条镜腿上,所述无线通信模块设置在所述多条镜腿中的任意一条镜腿的腔体内并与所述至少一个拾音装置电性连接;
所述无线通信模块,用于控制切换所述智能眼镜的工作模式,所述工作模式包括通话模式和助听模式;
所述无线通信模块,还用于在所述通话模式下对所述至少一个拾音装置获取的语音数据进行第一波束成形处理,以使得所述至少一个拾音装置的声束收音指向下方;以及
所述无线通信模块,还用于在所述助听模式下对所述语音数据进行第二波束成形处理,以使得所述至少一个拾音装置的声束收音指向前方。
本申请实施例一方面还提供了一种智能眼镜控制系统,包括:智能移动终端以及如上述实施例中提供的智能眼镜;
所述智能移动终端,用于与所述智能眼镜进行数据交互。
本申请实施例一方面还提供了一种智能眼镜控制方法,所述智能眼镜包括:无线通信模块以及与所述无线通信模块电性连接的拾音装置,所述方法包括:
通过所述无线通信模块响应于用户的控制操作,控制切换所述智能眼镜的工作模式为通话模式或助听模式;
通过所述拾音装置获取语音数据;
若所述智能眼镜的当前工作模式为所述通话模式,则通过所述无线通信模块对所述语音数据进行第一波束成形处理,以使得所述拾音装置的声束收音指向下方;
若所述智能眼镜的当前工作模式为所述助听模式,则通过所述无线通信模块对所述语音数据进行第二波束成形处理,以使得所述拾音装置的声束收音指向前方。
有益效果
本申请各实施例,通过利用配置在智能眼镜上的无线通信模块控制切换智能眼镜的工作模式,一方面,该无线通信模块在通话模式下对拾音装置获取的语音数据进行第一波束成形处理,以使得拾音装置的声束收音指向下方,另一方面,该无线通信模块在助听模式下对该语音数据进行第二波束成形处理,以使得拾音装置的声束收音指向前方,从而实现了基于智能眼镜的同一硬件平台的电话通话功能和助听功能,扩大了智能眼镜的功能。此外,由于无需在智能眼镜上加装额外的助听设备,还可减轻智能眼镜的重量,减小耗电量,降低智能眼镜的制造成本。
附图说明
为了更清楚地说明本申请实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作一简单地介绍,显而易见地,下面描述中的附图是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1为本申请一实施例提供的智能眼镜的结构示意图;
图2为本申请另一实施例提供的智能眼镜的外部结构示意图;
图3为图2所示实施例中的智能眼镜的内部结构示意图;
图4为图2所示实施例中的智能眼镜中的麦克风阵列的结构示意图;
图5为本申请一实施例提供的智能眼镜控制系统的结构示意图;
图6为本申请另一实施例提供的智能眼镜控制系统的结构示意图;
图7为图5和图6所示的智能眼镜控制系统中的智能移动终端的硬件结构示意图;
图8为本申请一实施例提供的智能眼镜控制方法的实现流程示意图;
图9为本申请实施例提供的智能眼镜控制方法中智能眼镜的运行状态和工作模式的示意图;
图10为本申请实施例提供的智能眼镜控制方法中不同工作模式下麦克风阵列的声束指向示意图;
图11为本申请实施例提供的智能眼镜控制方法中通话模式下麦克风阵列的声束指向示意图;
图12为本申请实施例提供的智能眼镜控制方法中助听模式下麦克风阵列的声束指向示意图;
图13为本申请实施例提供的智能眼镜控制方法中通话模式下语音信号的处理过程的示意图;
图14为本申请实施例提供的智能眼镜控制方法中助听模式下语音信号和通话语音信号的处理过程的示意图;
图15为本申请其他实施例提供的智能眼镜控制方法中智能眼镜的音量调整控制的示意图。
本发明的实施方式
为使本申请实施例的目的、技术方案和优点更加清楚,下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
参见图1,图1为本申请一实施例提供的智能眼镜的结构示意图。为了便于说明,图中仅示出了与本申请实施例相关的部分。如图1所示,智能眼镜包括:镜框101、多条镜腿102、至少一个拾音装置103(为便于理解,图中仅示出了一个)以及无线通信模块104。
其中,镜腿102连接镜框101,至少一个拾音装置103设置在多条镜腿102中的至少一条镜腿102上,无线通信模块104设置在多条镜腿102中的任意一条镜腿102的腔体内并与至少一个拾音装置103电性连接。
无线通信模块104,用于控制切换智能眼镜的工作模式,该工作模式包括通话模式和助听模式(或,对话模式)。
无线通信模块104,还用于在通话模式下对至少一个拾音装置103获取的语音数据进行第一波束成形处理,以使得至少一个拾音装置103的声束收音指向下方。此时,该语音数据的音源可以是智能眼镜的佩戴者。
无线通信模块104,还用于在助听模式下对至少一个拾音装置103获取的语音数据进行第二波束成形处理,以使得至少一个拾音装置103的声束收音指向前方。此时,该语音数据的音源可以是智能眼镜的佩戴者的交谈对象。
于本实施例中,通过利用无线通信模块控制切换智能眼镜的工作模式,一方面,在通话模式下对拾音装置获取的语音数据进行第一波束成形处理,以使得拾音装置的声束收音指向下方,另一方面,在助听模式下对该语音数据进行第二波束成形处理,以使得拾音装置的声束收音指向前方,从而实现了基于智能眼镜的同一硬件平台的电话通话功能和助听功能,扩大了智能眼镜的功能。此外,由于无需在智能眼镜上加装额外的助听设备,还可减轻智能眼镜的重量,减小耗电量,降低智能眼镜的制造成本。
参见图2和图3,图2为本申请另一实施例提供的智能眼镜的外部结构示意图,图3为图2所示实施例中的智能眼镜的内部结构示意图。如图2和图3所示,与图1所示实施例不同的是,于本实施例中:
进一步的,无线通信模块104,还用于接收智能移动终端发送的收音方向控制指令。无线通信模块104,还用于当该收音方向控制指令指示的方向为前方时,在该助听模式下,对该语音数据进行上述第二波束成形处理,以使得该至少一个拾音装置103的声束收音指向前方。无线通信模块104,还用于当该收音方向控制指令指示的方向为全方向时,在该助听模式下,不对该语音数据进行该第二波束成形处理,以使得该至少一个拾音装置103的声束收音指向全方向。其中,声束收音指向全方向,即360°全方位收音。
可选的,多条镜腿102包括第一镜腿102A和第二镜腿102B,第一镜腿102A的前端和第二镜腿102B的前端分别连接镜框101的两侧,至少一个拾音装置103安装在第一镜腿102A的前端。
可选的,智能眼镜还包括:与无线通信模块104电性连接的第一扬声器201和第二扬声器202。第一扬声器201和第二扬声器202用于输出语音数据或音乐数据。该语音数据包括:经无线通信模块104处理后的拾音装置103获取的语音数据,以及无线通信模块104接收的智能移动终端发送的通话语音数据。该音乐数据包括:无线通信模块104接收的智能移动终端发送的音乐数据。
第一扬声器201安装在第一镜腿102A上,且第一扬声器201的输出口位于第一镜腿102A的尾端。
第二扬声器202安装在第二镜腿102B上,且第二扬声器202的输出口位于第二镜腿102B的尾端。
由于拾音装置安装在镜腿的前端,各扬声器的输出口安装在镜腿的尾端,从而使得拾音装置和各扬声器的输出口之间可以具有足够的距离,因此可有效减少在拾音装置和扬声器之间生成回路的情况,降低在使用时发生回音和啸叫的机率。
此外,在用户佩戴智能眼镜后,镜腿的尾端最接近用户的耳朵,将扬声器出口安装在镜腿的尾端,可使得扬声器出口最接近用户耳朵,从而提高声音输出的效率。
其中,第一扬声器201和第二扬声器202优选为单声道扬声器。分别安装在两条镜腿上的单声道扬声器结合起来可达到立体声音效。
可选的,如图3所示,无线通信模块104包括:控制器1041、语音数据处理器1042以及无线信号收发器1043。控制器1041、语音数据处理器1042以及无线信号收发器1043可以通过总线连接。
其中,控制器1041,用于控制切换智能眼镜的工作模式。控制器1041优选为MCU(Microcontroller Unit,微控制单元)。
语音数据处理器1042,用于对语音数据进行处理。语音数据处理器1042优选为DSP(Digital Signal Processing,数字信号处理器)或语音数据处理集成电路。其中,语音数据处理集成电路为常用电路,本申请不对其结构做具体限定。
无线信号收发器1043,用于与智能移动终端进行数据交互。可选的,无线通信模块104使用蓝牙协议、WiFi(无线保真)协议、NFC(Near Field Communication,近场通信)协议、ZigBee(紫蜂)、DLNA(Digital Living Network Alliance,数字生活网络联盟)协议、UWB(Ultra Wideband,无载波通信)、RFID(Radio Frequency Identification,射频识别)协议以及蜂窝移动通信(Cellular Mobile Communication)协议中的至少一种作为与智能移动终端进行数据交互的通信协议。优选为蓝牙协议。
可选的,语音数据处理器1042包括语音均衡器(Equalizer)。无线信号收发器1043还用于接收智能移动终端发送的音量调节控制指令并发送给语音数据处理器1042,语音数据处理器1042还用于通过利用该语音均衡器,将待输出的声音数据,调整为该音量调节控制指令指向的频段及音量的声音数据,并将调整后的声音数据发送给该音量调节控制指令指向的扬声器。其中,该音量调节控制指令指向的频段可以是低频、中频或高频。音量调整可以是增大音量,也可以是减小音量。
进一步的,语音数据处理器1042,还用于在通话模式下,对下行通道上的数据进行语音均衡处理(利用语音均衡器(Equalizer))和输出音量处理,以及利用预设的回声消除算法(Acoustic Echo Cancellation)、波束成形算法(Beamforming)以及噪声抑制算法(Noise Cancellation)对上行通道上的数据进行回音消除处理、所述第一波束成形处理以及噪声抑制处理。
具体的,语音数据处理器1042,还用于在通话模式下,对无线信号收发器1043接收的来自智能移动终端的通话语音数据(即,下行通道上的数据)进行语音均衡处理和输出音量控制处理,并将输出音量控制处理后的通话语音数据发送给第一扬声器201和第二扬声器202进行输出;语音数据处理器1042,还用于利用该输出音量控制处理后的通话语音数据作为参考信号,对拾音装置103获取的语音数据(即,上行通道上的数据)进行回音消除处理,并对回音消除处理后的语音数据进行第一波束成形处理以及噪声抑制处理,并将噪声抑制处理后的信号,通过无线信号收发器1043发送智能移动终端。
进一步的,语音数据处理器1042,还用于在助听模式下,利用预设的反馈取消算法(Feedback Cancellation)、波束成形算法、噪声抑制算法、语音均衡器算法(Equalizer)以及用户语音检测算法(User Talking Detection),对该语音数据进行反馈取消处理、第二波束成形处理、噪声抑制处理、语音均衡处理以及用户语音检测处理,并将语音均衡处理后的语音数据发送给第一扬声器201和第二扬声器202进行输出。同时,该语音均衡处理后的语音数据在处理过程中还用作上述反馈取消处理的参考数据。
可选的,智能眼镜还包括:与无线通信模块104电性连接的至少一个传感器(图中未标示)。
至少一个传感器安装在第一镜腿102A和/或第二镜腿102B的内侧和/或外侧。
具体的,至少一个传感器包括:触控传感器、接近传感器、加速度计、陀螺仪、磁感应传感器以及惯性测量单元中的至少一种。
可选的,惯性测量单元为9轴传感器。该9轴传感器用于采集用户的运动数据,并通过无线通信模块104发送给智能移动终端进行数据处理。
优选的,如图3所示,至少一个传感器包括:9轴传感器2031、至少一个触控传感器2032和至少一个接近传感器2033。
其中,至少一个触控传感器2032安装在第一镜腿102A和/或第二镜腿102B的外侧。
可选的,至少一个触控传感器用于检测用户的第一控制操作,并将检测到的该第一控制操作的数据发送给控制器1041,该第一控制操作用于调整音量。控制器1041,还用于根据该第一控制操作的数据,响应于第一控制操作,控制调整智能眼镜输出的声音的音量。
其中,第一控制操作包括用于调大音量的控制操作和用于调小音量的控制操作。该用于调大音量的控制操作与用户手指在触控传感器上向耳方向扫动的动作对应,该用于调小音量的控制操作与用户手指在触控传感器上向镜框方向(即离开耳方向)扫动的动作对应。
可选的,至少一个触控传感器,还用于检测用户的第二控制操作,并将检测到的该第二控制操作的数据发送给控制器1041。控制器1041还用于根据该第二控制操作的数据,响应于该第二控制操作,控制切换智能眼镜的工作模式。
其中,第二控制操作优选的与用户点击或长按触控传感器的动作对应,例如:用户通过长按触控传感器超过3秒,可将智能眼镜的工作模式切换为通话模式或助听模式。
可选的,至少一个接近传感器安装在第一镜腿102A和/或第二镜腿102B的内侧,用于检测用户是否佩戴或摘下智能眼镜以及获取用户未佩戴智能眼镜的时长,并将检测结果发送给控制器1041。控制器1041,用于根据该检测结果,在该接近传感器检测到用户佩戴智能眼镜时播放音乐数据,在该接近传感器检测到用户摘下智能眼镜时停止播放该音乐数据,以及在该接近传感器检测到用户超过预设时长没有佩戴智能眼镜时执行关机操作。
上述每一种传感器的数量优选为一个,以减轻智能眼镜的总体重量。但在具体应用中,根据实际需要,各种传感器的数量也不限为一个,例如,为提高检测结果的准确性,可以分别在两条镜腿各设置一个接近传感器。
可选的,控制器1041还用于将各传感器获取的数据,通过无线信号收发器1043发送给智能移动终端。
可选的,拾音装置103为麦克风阵列,麦克风阵列包括至少两个麦克风。
具体的,麦克风阵列包括第一麦克风M1、第二麦克风M2和第三麦克风M3。
其中,第三麦克风M3与第一麦克风M1之间的距离,与第三麦克风M3与第二麦克风M2之间的距离相等。或者,如图4所示,第一麦克风M1与第二麦克风M2之间的距离d1,等于第三麦克风M3到第一麦克风M1与第二麦克风M2的连线的中点之间的距离d2。
优选的,如图4所示,为减轻眼镜的重量,仅在智能眼镜上安装一个麦克风阵列,该麦克风阵列中的第一麦克风M1、第二麦克风M2以及第三麦克风M3均安装在同一条镜腿上,且第一麦克风M1和第二麦克风M2比第三麦克风M3更靠近镜框。
控制器1041,还用于在通话模式下,控制第一麦克风M1和第二麦克风M2获取语音数据,以及在助听模式下,控制第一麦克风M1、第二麦克风M2和第三麦克风M3获取语音数据。像这样,在不同模式下,控制不同位置的麦克风进行拾音,可以减少获取的语音数据中的杂音,提高信号处理的速度。
可选的,智能眼镜还包括电池204,电池204安装在第一镜腿102A上且与无线通信模块104电性连接,以用于为智能眼镜上的无线通信模块104、各传感器、各扬声器以及各麦克风等电子元件提供电能。
可选的,智能眼镜还包括至少一个助听器(图中未示出),该至少一个助听器安装在第一镜腿102A和/或第二镜腿102B上,并与无线通信模块104电性连接。无线通信模块104还用于在助听模式下,控制该至少一个助听器输出语音数据。
可选的,智能眼镜还包括至少一个控制按键(图中未示出),该至少一个控制按键安装在第一镜腿102A和/或第二镜腿102B的外侧,并与无线通信模块104电性连接。该至少一个控制按键,用于触发无线通信模块104控制切换智能眼镜的工作模式或运行状态。其中,该运行状态包括闲置状态和工作状态,该工作状态包括该通话模式和该助听模式。
上述智能眼镜的各电子组成元器件之间可通过总线连接。
需要说明的是,上述智能眼镜的各组件彼此之间的关系可以是替代关系,也可以是叠加关系。即,一个智能眼镜上可以安装上述本实施例中的所有组件,或者,也可以根据需求,选择性地安装上述组件中的一部分。当为替代关系时,智能眼镜还设置有外设的连接接口,该连接接口例如可以是PS/2接口、串行接口、并行接口、IEEE1394接口、USB(Universal Serial Bus,通用串行总线)接口等中的至少一种,被替代的组件的功能可通过连接在该连接接口上的外设实现,如:外部扬声器、外部传感器等。
于本实施例中,通过利用无线通信模块控制切换智能眼镜的工作模式,一方面,在通话模式下对拾音装置获取的语音数据进行第一波束成形处理,以使得拾音装置的声束收音指向下方,另一方面,在助听模式下对该语音数据进行第二波束成形处理,以使得拾音装置的声束收音指向前方,从而实现了基于智能眼镜的同一硬件平台的电话通话功能和助听功能,扩大了智能眼镜的功能。此外,由于无需在智能眼镜上加装额外的助听设备,还可减轻智能眼镜的重量,减小耗电量,降低智能眼镜的制造成本。
参见图5,图5为本申请一实施例提供的智能眼镜控制系统的结构示意图。如图5所示,智能眼镜控制系统包括:智能眼镜301与智能移动终端302。
其中,智能眼镜301的结构同图1至图4所示实施例中的智能眼镜,智能眼镜301的结构及其功能具体请参考上述图1至图4所示实施例中的相关描述。
智能移动终端302可以但不限于包括:蜂窝电话、智能手机、其他无线通信设备、个人数字助理(PDA)、音频播放器、其他媒体播放器、音乐记录器、录像机、照相机、其他媒体记录器、智能收音机、膝上型计算机、便携式多媒体播放器(PMP)、运动图像专家组(MPEG-1或MPEG-2)音频层3(MP3)播放器、数码相机以及智能可穿戴设备(如智能手表、智能手环等)。智能移动终端302上安装有安卓或iOS操作系统。
智能移动终端302用于与智能眼镜301进行数据交互,具体的,例如:接收、存储智能眼镜301发送的数据并进行处理,在执行播放音乐数据、通话等目标任务时,将播放的音乐数据、接收的通话语音数据发送给智能眼镜301等。智能移动终端302在与智能眼镜301进行数据交互时使用的通信协议与智能眼镜301使用的通信协议一致。
如图6所示,智能移动终端302可以包括控制电路,该控制电路可以包括存储和处理电路300。该存储和处理电路300可以包括存储器,例如硬盘驱动存储器,非易失性存储器(例如闪存或用于形成固态驱动器的其它电子可编程限制删除的存储器等),易失性存储器(例如静态或动态随机存取存储器等)等,本申请实施例不作限制。存储和处理电路300中的处理电路可以用于控制智能移动终端302的运行。该处理电路可以基于一个或多个微处理器,微控制器,数字信号处理器,基带处理器,功率管理单元,音频编解码器芯片,专用集成电路,显示驱动器集成电路等来实现。
存储和处理电路300可用于运行智能移动终端302中的软件,例如:人工智能管家应用程序、互联网浏览应用程序,互联网协议语音(Voice over Internet Protocol,VOIP)电话呼叫应用程序,电子邮件应用程序,媒体播放应用程序,操作系统功能等。这些软件可以用于执行一些控制操作,例如,基于预设的分析算法对智能眼镜301发送的运动数据进行的数据处理和分析、基于照相机的图像采集,基于环境光传感器的环境光测量,基于接近传感器的接近传感器测量,基于诸如发光二极管的状态指示灯等状态指示器实现的信息显示功能,基于触摸传感器的触摸事件检测,与在多个(例如分层的)显示器上显示信息相关联的功能,与执行无线通信功能相关联的操作,与收集和产生音频信号相关联的操作,与收集和处理按钮按压事件数据相关联的控制操作,以及智能移动终端302中的其它功能等,本申请实施例不作限制。
进一步的,该存储器存储有可执行程序代码,与该存储器耦合的处理器,调用该存储器中存储的该可执行程序代码,执行本申请以下方法实施例中的相关步骤,以实现本申请各实施例中的智能移动终端302的各项功能。
智能移动终端302还可以包括输入/输出电路420。输入/输出电路420可用于使智能移动终端302实现数据的输入和输出,即允许智能移动终端302从外部设备接收数据和也允许智能移动终端302将数据从智能移动终端302输出至外部设备。输入/输出电路420可以进一步包括传感器320。传感器320可以包括环境光传感器,基于光和电容的接近传感器,触摸传感器(例如,基于光触摸传感器和/或电容式触摸传感器,其中,触摸传感器可以是触控显示屏的一部分,也可以作为一个触摸传感器结构独立使用),加速度计,和其它传感器等。
输入/输出电路420还可以包括一个或多个显示器,例如显示器140。显示器140可以包括液晶显示器,有机发光二极管显示器,电子墨水显示器,等离子显示器,使用其它显示技术的显示器中一种或者几种的组合。显示器140可以包括触摸传感器阵列(即,显示器140可以是触控显示屏)。触摸传感器可以是由透明的触摸传感器电极(例如氧化铟锡(ITO)电极)阵列形成的电容式触摸传感器,或者可以是使用其它触摸技术形成的触摸传感器,例如音波触控,压敏触摸,电阻触摸,光学触摸等,本申请实施例不作限制。
智能移动终端302还可以包括音频组件360。音频组件360可以用于为智能移动终端302提供音频输入和输出功能。智能移动终端302中的音频组件360可以包括扬声器,拾音装置,蜂鸣器,音调发生器以及其它用于产生和检测声音的组件。
通信电路380可以用于为智能移动终端302提供与外部设备通信的能力。通信电路380可以包括模拟和数字输入/输出接口电路,和基于射频信号和/或光信号的无线通信电路。通信电路380中的无线通信电路可以包括射频收发器电路、功率放大器电路、低噪声放大器、开关、滤波器和天线。举例来说,通信电路380中的无线通信电路可以包括用于通过发射和接收近场耦合电磁信号来支持近场通信(Near Field Communication,NFC)的电路。例如,通信电路380可以包括近场通信天线和近场通信收发器。通信电路380还可以包括蜂窝电话收发器和天线,蓝牙、WiFi、ZigBee、DLNA、UWB、RFID等无线局域网收发器电路和天线等。
智能移动终端302还可以进一步包括电池,电力管理电路和其它输入/输出单元400。输入/输出单元400可以包括按钮,操纵杆,点击轮,滚动轮,触摸板,小键盘,键盘,照相机,发光二极管和其它状态指示器等。
用户可以通过输入/输出电路420输入命令来控制智能移动终端302的操作,并且可以使用输入/输出电路420的输出数据以实现接收来自智能移动终端302的状态信息和其它输出。
进一步的,如图7所示,于本申请另一实施例中,智能眼镜控制系统还包括:云端智能设备303。云端智能设备303例如可以是云端服务器或服务器集群,用于与智能移动终端302进行数据交互,存储智能移动终端302发送的数据,以及基于预设的处理逻辑对该数据进行处理。例如,云端智能设备303基于预设分析逻辑对智能移动终端302发送的运动数据进行分析,以统计分析智能眼镜301的佩戴者的运动指标参数,结合其他智能移动终端的运动指标参数向智能移动终端302提供参考建议。例如,当云端智能设备303发现其他的智能移动终端上的用户设定的运动指标更为合理时,可以向智能移动终端302发送建议信息,以提醒用户重新设定。
优选的,智能移动终端302与智能眼镜301之间使用蓝牙协议作为通信协议,云端智能设备303与智能移动终端302之间使用蜂窝移动通信协议(如:2G、3G、4G、5G协议等)作为通信协议。
进一步的,智能移动终端302,还用于与智能眼镜301基于蓝牙协议进行配对,并在配对成功后,将播放的音乐数据发送给智能眼镜301,以通过智能眼镜301对该音乐数据进行播放。
可选的,智能移动终端302,还用于根据用户的收音方向选择操作,向智能眼镜301发送上述收音方向控制指令;
智能移动终端302,还用于根据用户的音量调节操作,向智能眼镜301发送上述音量调节控制指令。
智能移动终端302内置用于控制修改智能眼镜的相关参数的客户端程序(APP),用户可在该APP提供的人机交互界面中进行收音方向选择操作以及音量调节操作。其中,收音方向选择操作例如可以但不限于包括:用户点击该人机交互界面中预设的用于选择智能眼镜301的收音方向的按钮、按键或菜单的操作。音量调节控制指令例如可以但不限于包括:用户点击该人机交互界面中预设的用于调整智能眼镜301播放的声音数据的频段和/或音量以及播放该声音数据的扬声器的按钮、按键或菜单的操作。
可选的,智能移动终端302,还用于通过配置在智能移动终端上的GPS模块获取GPS数据,并将获取的GPS数据发送给智能眼镜301,以用于智能眼镜301的定位。
可选的,智能移动终端302还用于实时接收并存储智能眼镜301发送的运动数据,根据该运动数据、该GPS数据,进行运动指标计算,并根据计算结果生成用于通知或提醒用户运动状态的实时语音数据,并将实时语音数据发送给智能眼镜301进行输出。其中,该实时语音数据包括运动指标计算结果的通知或提醒语音。
具体的,智能移动终端302上可安装有客户端程序,如人工智能管家App,通过该APP执行与智能眼镜301的数据交互操作以及对智能眼镜301发送的运动数据的处理和分析操作,如:跑步指标计算、姿势监测和提醒等。
上述音乐数据和实时语音数据属于智能眼镜301的下行通道的数据,智能眼镜301通过语音数据处理器1042对接收的音乐数据和实时语音数据进行语音均衡处理和输出音量控制处理后发送给第一扬声器201和第二扬声器202进行输出。
具体的,运动数据包括智能眼镜301通过9轴传感器获取的数据(即Ax,Ay,Az;Gx,Gy,Gz;Mx,My,Mz),运动指标为跑步指标,跑步指标包括:配速、距离、步数、头部左右平衡、步距和步频。其中,9轴传感器即指加速度计、陀螺仪以及磁感应传感器。这三类传感器测量的数据在空间坐标系中都可以被分解为X,Y,Z三个方向轴的分量,因此也称为3轴加速度计、3轴陀螺仪和3轴磁感应传感器。
智能移动终端302,还用于根据9轴传感器获取的运动数据和本地的GPS数据,执行指标计算、姿势监测和运动提醒操作,并将指标计算结果通过智能移动终端302的显示器进行实时显示。
可选的,智能移动终端302,还用于响应于智能眼镜301发送的语音指令,执行拨打、接听或挂断电话的操作,以及在通话过程中,将接收的通话语音数据发送给智能眼镜301,以通过智能眼镜301上的扬声器对该通话语音数据进行播放。
其中,该语音指令由智能眼镜301的语音数据处理器,通过利用预设的语音识别算法对智能眼镜301的拾音装置获取的语音数据进行语音指令识别处理得到。
可以理解的,本申请中的智能眼镜301的本体上设置有至少两个麦克风,用户可通过智能眼镜301上的麦克风获取语音数据,用作发出语音命令和电话接听。用户发出的声音通过智能眼镜301上的麦克风收音并经过降噪处理后通过智能眼镜301的扬声器予以播放。因此,在通话期间,用户可以将智能移动终端302放在口袋中或桌子上,空出的双手可做其他用途,从而提高了电话接听的便捷性。
需要说明的是,本实施例中的智能眼镜301和智能移动终端302的功能的具体实现过程,还可以参考其他实施例中的相关描述。
于本实施例中,智能眼镜通过利用无线通信模块控制切换智能眼镜的工作模式,一方面,在通话模式下对拾音装置获取的语音数据进行第一波束成形处理,以使得拾音装置的声束收音指向下方,另一方面,在助听模式下对该语音数据进行第二波束成形处理,以使得拾音装置的声束收音指向前方,从而实现了基于智能眼镜的同一硬件平台的电话通话功能和助听功能,扩大了智能眼镜的功能。此外,由于将部分组件(如:GPS模块)的功能通过智能移动终端实现,还可简化智能眼镜的结构,从而减轻智能眼镜的重量,减小耗电量,降低智能眼镜的制造成本。
参见图8,图8为本申请一实施例提供的智能眼镜控制方法的实现流程示意图。本实施例中的智能眼镜的结构同图1至图4所示实施例中的智能眼镜,具体可参考上述图1至图4所示实施例中的相关描述。如图8所示,该方法包括以下步骤:
S501、通过智能眼镜的无线通信模块响应于用户的控制操作,控制切换智能眼镜的工作模式为通话模式或助听模式;
如图9所示,智能眼镜的运行状态包括:闲置状态和工作状态。其中在工作状态下,智能眼镜包括:通话模式和助听模式两种工作模式。在通话模式下,用户利用智能眼镜的电话通话功能,通过移动通信终端和无线通信网络,与该网络另一端的人进行电话交谈。在助听模式下,该用户利用智能眼镜的助听功能与其他人进行面对面的交谈。
可选的,智能眼镜可根据监测到的不同的预设事件,切换不同的运行状态和工作模式。例如,在闲置状态下,当监测到来电事件时,通过智能眼镜的无线通信模块控制智能眼镜切换进入通话模式,然后在通话模式下,当监测到挂线事件时,通过该无线通信模块控制智能眼镜返回闲置状态;在助听模式下,当监测到来电事件时,通过该无线通信模块控制智能眼镜切换进入通话模式,然后在通话模式下,当监测到挂线事件时,通过该无线通信模块控制智能眼镜返回助听模式;在闲置状态下,当监测到按键事件时,通过该无线通信模块控制智能眼镜切换进入助听模式,然后在助听模式下,当再次监测到按键事件时,通过该无线通信模块控制智能眼镜返回闲置状态。
其中,上述来电事件、挂线事件可通过智能移动终端进行监测,智能移动终端在通过内置的事件监测器监测到来电事件或挂线事件时,生成监测到的事件的通知信息并发送给智能眼镜,智能眼镜通过该无线通信模块接收到该通知信息,确认检测到对应的事件。
可选的,智能眼镜上还可以设置有3个状态控制按键,分别对应于不同的运行状态和工作模式,智能眼镜通过该无线通信模块响应于用户点击按键的控制操作,控制智能眼镜进入被点击的按键对应的运行状态或工作模式。
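上述运行状态与工作模式的切换规则,可以用如下状态机草图帮助理解(仅为示意性示例,类名、事件名均为本文假设,并非本申请限定的实现):

```python
class GlassesModeMachine:
    """智能眼镜运行状态切换的状态机草图(示意性示例)。"""
    IDLE, CALL, HEARING_AID = "idle", "call", "hearing_aid"

    def __init__(self):
        self.state = self.IDLE
        self._before_call = self.IDLE    # 记录来电前的状态,挂线后返回该状态

    def on_event(self, event):
        if event == "incoming_call" and self.state != self.CALL:
            self._before_call = self.state
            self.state = self.CALL           # 闲置/助听状态下监测到来电 -> 通话模式
        elif event == "hang_up" and self.state == self.CALL:
            self.state = self._before_call   # 挂线 -> 返回来电前的状态
        elif event == "key_press" and self.state == self.IDLE:
            self.state = self.HEARING_AID    # 闲置状态下按键 -> 助听模式
        elif event == "key_press" and self.state == self.HEARING_AID:
            self.state = self.IDLE           # 助听模式下再次按键 -> 返回闲置
        return self.state
```

按此草图,在助听模式下接到来电会进入通话模式,挂线后回到助听模式,与上文描述的切换规则一致。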
S502、通过安装在智能眼镜上的拾音装置获取语音数据;
具体的,智能眼镜上安装有作为拾音装置的至少两个麦克风,以用于采集智能眼镜的用户或该用户的交谈对象的语音数据。
可选的,智能眼镜的一条镜腿上安装有三个麦克风构成的麦克风阵列,其中第一麦克风和第二麦克风比第三麦克风更靠近镜框,步骤S502具体包括:
步骤S5021、在控制切换智能眼镜的工作模式为通话模式后,通过该无线通信模块控制该第一麦克风和该第二麦克风获取用户的语音数据;
步骤S5022、在控制切换智能眼镜的工作模式为助听模式后,通过该无线通信模块控制该第一麦克风、该第二麦克风和该第三麦克风获取该用户的交谈对象的语音数据。
像这样,在不同模式下,控制不同位置的麦克风进行拾音,可以减少获取的语音数据中的杂音,提高信号处理的速度。
S503、若智能眼镜的当前工作模式为通话模式,则通过该无线通信模块对该语音数据进行第一波束成形处理,以使得该拾音装置的声束收音指向下方;
S504、若智能眼镜的当前工作模式为助听模式,则通过该无线通信模块对该语音数据进行第二波束成形处理,以使得该拾音装置的声束收音指向前方。
如图10所示,为达到更好的收音效果,本申请通过利用预设的算法对语音数据进行处理,以使得在助听模式下和通话模式下,该麦克风阵列的声束收音分别指向不同的方向。
如图11所示,在通话模式下,只使用麦克风1和2获取用户的声音,且麦克风3到麦克风1的距离和麦克风3到麦克风2的距离相等,智能眼镜通过预设的波束成形算法,使得安装在智能眼镜上的麦克风阵列的声束收音指向下方,以更好地获取来自智能眼镜的佩戴者的语音数据。在通话模式下,该波束成形算法的作用是使麦克风阵列只接收下方的声音,即,使得麦克风阵列的声束收音对准用户的嘴,并同时将来自其他方向的声波强度降低。
如图12所示,在助听模式下,同时使用麦克风1、2和3获取对方的声音,且麦克风3到麦克风1的距离和麦克风3到麦克风2的距离相等,智能眼镜通过该波束成形算法,使得该麦克风阵列的声束收音指向前方的预设角度,以更好地获取来自智能眼镜的前方的该佩戴者的交谈对象的语音数据。在助听模式下,该波束成形算法的作用是把智能眼镜后方的声音删除,其工作原理是基于麦克风阵列内不同麦克风的相位差,将来自智能眼镜前面的声波放大(同相位,相位差=0度),将来自智能眼镜侧面的声波减少(0<相位差<180度),将来自智能眼镜后面的声波删除(异相位,相位差=180度),从而使得麦克风距阵只接收前方的声波。
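上述基于相位差的波束成形原理,可用最简单的延迟求和(delay-and-sum)波束成形草图示意(仅供理解的示例,采样率、延迟等数值均为假设,并非本申请采用的具体算法):

```python
import math

def delay_and_sum(channels, arrival_delays):
    """延迟求和波束成形草图:按各麦克风相对目标方向的到达延迟(采样数)
    对齐各通道后求平均,使目标方向的声波同相叠加、其他方向相互抵消。"""
    n = len(channels[0])
    out = []
    for i in range(n):
        acc, cnt = 0.0, 0
        for ch, d in zip(channels, arrival_delays):
            j = i + d                  # 补偿到达延迟,使各通道对齐
            if 0 <= j < n:
                acc += ch[j]
                cnt += 1
        out.append(acc / cnt if cnt else 0.0)
    return out

# 示例:同一声源到达第二个麦克风晚 3 个采样
fs = 8000
sig = [math.sin(2 * math.pi * 200 * t / fs) for t in range(64)]
ch2 = [0.0] * 3 + sig[:-3]            # 延迟 3 个采样的同一信号
aligned = delay_and_sum([sig, ch2], [0, 3])
```

对准目标方向时,两路信号同相叠加,输出与原信号一致;若延迟补偿与声源方向不符,叠加会相互削弱,这正是声束"指向"某一方向的含义。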
进一步的,除了上述波束成形算法,智能眼镜还根据智能眼镜的状态和接收用户的指示,通过无线通信模块中的DSP对该语音数据执行对应的其他算法。
如图13所示,在通话模式下,智能眼镜通过该DSP对下行通道的数据进行语音均衡处理和输出音量处理,并对上行通道的数据依次进行回音消除处理、第一波束成形处理以及噪声抑制处理。下行通道的数据通过无线通信模块中的无线信号收发器输入。该无线信号收发器优先采用蓝牙协议作为通信协议。
进一步的,在通话模式下,智能眼镜通过该DSP对通过该无线信号收发器输入(如图13中的智能眼镜无线蓝牙输入)的来自于智能移动终端的通话语音数据依次进行语音均衡处理和输出音量控制处理,并将输出音量控制处理后的通话语音数据发送给智能眼镜的扬声器进行输出,同时,通过该DSP对拾音装置获取的语音数据,利用输出音量处理后的通话语音数据作为参考信号进行回音消除处理,然后再进行第一波束成形处理以及噪声抑制处理,并将噪声抑制处理后的数据通过该无线信号收发器输出给智能移动终端。
由于远方说话会通过扬声器播放,因此远方的语音信号会被麦克风阵列接收,从而产生回路。对上行通道的语音数据进行的回声消除处理,是利用回声消除算法通过对比扬声器的输出信号和麦克风阵列的输入信号,从而把回音消除,并中断扬声器与麦克风阵列的回路链。
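上述回声消除的思路(以扬声器输出为参考信号,估计并减去回声),可以用一个最简的NLMS(归一化最小均方)自适应滤波草图示意(滤波器阶数、步长等均为示例假设,并非本申请限定的算法):

```python
import math

def nlms_echo_canceller(reference, mic, taps=8, mu=0.5, eps=1e-8):
    """NLMS自适应回声消除草图:reference为扬声器输出(参考信号),
    mic为麦克风拾取的信号(含回声),返回消除回声后的误差信号。"""
    w = [0.0] * taps                       # 回声路径的自适应估计系数
    out = []
    for n in range(len(mic)):
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        y = sum(wi * xi for wi, xi in zip(w, x))     # 估计的回声
        e = mic[n] - y                               # 误差 = 近端语音 + 残余回声
        norm = sum(xi * xi for xi in x) + eps
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, x)]  # NLMS系数更新
        out.append(e)
    return out

# 示例:远端信号经0.6倍衰减、2个采样延迟的"回声路径"进入麦克风
far = [math.sin(0.1 * n) + 0.5 * math.sin(0.37 * n) for n in range(2000)]
mic = [0.6 * far[n - 2] if n >= 2 else 0.0 for n in range(2000)]
residual = nlms_echo_canceller(far, mic)   # 收敛后残余回声趋近于零
```

滤波器收敛后,误差信号中回声被抵消,只剩近端语音,即上文所述"中断扬声器与麦克风阵列的回路链"。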
对上行通道的语音数据进行的噪声抑制处理,是利用噪声抑制算法,将噪音的音量减低或消除,并同时放大用户说话的音量。通过利用噪声抑制算法,即便用户身处于环境噪音很大的地方,远方也听不到喧闹的环境噪音,而只能听到智能眼镜的用户清晰的声音。
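噪声抑制的一种最简化的示意是基于帧能量的噪声门:跟踪噪声底,将低于门限的帧衰减(以下草图中帧长、门限倍数等均为示例假设,真实的噪声抑制算法远比此复杂):

```python
def noise_gate(samples, frame=160, attenuation=0.1):
    """能量噪声门草图:以最小帧能量近似噪声底,
    低于门限(噪声底的4倍,示例值)的帧按attenuation衰减,其余帧原样保留。"""
    frames = [samples[i:i + frame] for i in range(0, len(samples), frame)]
    energies = [sum(s * s for s in f) / len(f) for f in frames]
    threshold = min(energies) * 4
    out = []
    for f, e in zip(frames, energies):
        gain = 1.0 if e > threshold else attenuation
        out.extend(s * gain for s in f)
    return out

# 示例:安静帧(近似噪声)被衰减,响亮的"语音"帧保持不变
processed = noise_gate([0.01] * 160 + [0.5] * 160 + [0.01] * 160)
```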
对下行通道的语音数据进行的语音均衡处理,是利用语音均衡器,对远方的语音信号进行语音均衡处理,以加强用户弱听的频率信号,从而达到补偿该弱听的频率信号的目的。
对下行通道的语音数据进行的输出音量控制处理,是利用输出音量控制算法调校扬声器的输出音量。
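上述语音均衡与输出音量控制的组合,可以用一个两频段增益的草图示意:用一阶低通把信号分为低频和高频两部分,分别加权(例如加强用户弱听的高频)后再按输出音量缩放(滤波器与增益参数均为示例假设):

```python
def equalize_and_volume(samples, low_gain=1.0, high_gain=2.0, volume=0.8, alpha=0.2):
    """两频段均衡+音量控制草图:一阶低通分离出低频成分lp,
    高频成分hp = s - lp,分别加权后按输出音量volume缩放。"""
    out, lp = [], 0.0
    for s in samples:
        lp += alpha * (s - lp)       # 一阶低通跟踪低频成分
        hp = s - lp                  # 余下为高频成分
        out.append((low_gain * lp + high_gain * hp) * volume)
    return out
```

例如,对一个纯低频(直流)输入,高频支路不起作用,输出仅按音量系数缩放;对高频输入则会被示例中的high_gain放大,即补偿弱听的频段。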
可选的,该无线信号收发器采用蓝牙协议、无线保真协议、近场通信协议、紫蜂、数字生活网络联盟协议、无载波通信协议、射频识别协议以及蜂窝移动通信协议中的至少一种作为与智能移动终端进行数据交互的通信协议。
进一步的,于本申请其他实施例中,在通过拾音装置获取语音数据之前,该方法还包括:
通过该无线通信模块基于蓝牙协议,与智能移动终端进行蓝牙配对,以在智能眼镜和智能移动终端之间建立数据传输通道。智能眼镜和智能移动终端后续的数据交互皆可通过该数据传输通道进行。
如图14所示,在助听模式下,智能眼镜通过该DSP对语音数据进行反馈取消处理、第二波束成形处理、噪声抑制处理、语音均衡处理以及用户语音检测处理,并将语音均衡处理后的语音数据发送给智能眼镜的扬声器进行输出,同时,将语音均衡处理后的语音数据用作该反馈取消处理的参考数据。进一步的,在将语音均衡处理后的语音数据发送给智能眼镜的扬声器进行输出之前,还可以对该语音均衡处理后的语音数据进行输出音量控制处理,具体处理方式与图13中所示的输出音量控制处理方式相同,具体可参考上述图13的相关说明,此处不再赘述。
在助听模式下,由于扬声器输出的声波会即时被麦克风阵列接收并形成回路,从而产生刺耳的啸叫。反馈取消处理就是利用反馈取消算法通过对比扬声器的输出信号和麦克风阵列的输入信号把回音消除,并中断扬声器和麦克风阵列的回路链。
此外,在助听模式下,由于需要将对方的声音放大从而使得有听力障碍的用户能够听清对方说话的内容,但在将声音放大的过程中,会将语音和噪音一起放大,从而给用户带来不适。噪声抑制处理是利用噪声抑制算法将噪音音量减低或消除,并同时放大对方说话的音量。
此外,在助听模式下,一般存在听力障碍的用户可能只是听不见或者听不清某一特定频率的声音。语音均衡处理是利用语音均衡器,将该特定频率的声音信号加强,从而达到补偿该特定频率的声音信号的目的。
此外,由于智能眼镜的麦克风阵列距离用户的嘴巴非常近,当用户说话时,麦克风阵列会接收到很大的信号,并在智能眼镜的扬声器上播放,这样当用户自己说话时,就会通过扬声器听到自己的声音。用户语音检测处理,是利用用户语音检测算法不断检测麦克风阵列接收的信号并进行分析,当检测到该信号为用户的声音时,将通过麦克风阵列接收的信号的音量降低到预设的水平。
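上述用户语音检测的思路(本人说话时麦克风信号能量显著偏高,检测到后将播放音量降低到预设水平),可用如下带保持帧(hangover)的能量检测草图示意(门限、增益与保持帧数均为示例假设):

```python
def user_talking_gains(frame_energies, talk_threshold=1.0, duck_gain=0.2, hangover=3):
    """用户语音检测草图:帧能量超过门限视为佩戴者本人说话,
    随后hangover帧内保持低增益duck_gain,避免增益频繁抖动。"""
    gains, hold = [], 0
    for e in frame_energies:
        if e > talk_threshold:
            hold = hangover          # 检测到本人语音,重置保持计数
        gains.append(duck_gain if hold > 0 else 1.0)
        if hold > 0:
            hold -= 1
    return gains
```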
可选的,于本申请其他实施例中,该方法还包括:通过安装在智能眼镜的镜腿上的触控传感器对用户的第一控制操作进行检测,该第一控制操作用于调整音量;当通过该触控传感器检测到该第一控制操作时,通过该无线通信模块响应于该第一控制操作,控制调整该智能眼镜输出声音的音量。其中,该第一控制操作包括用于调大音量的控制操作和用于调小音量的控制操作。如图15所示,该用于调大音量的控制操作与用户手指在该触控传感器上向耳方向扫动的动作对应,该用于调小音量的控制操作与用户手指在该触控传感器上向镜框方向扫动的动作对应。
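上述触控滑动调节音量的规则,可以用如下草图示意(方向标识与步进值均为示例假设):

```python
def volume_after_swipe(volume, direction, step=1, max_volume=10):
    """触控滑动调音量草图:向耳方向("to_ear")滑动调大音量,
    向镜框方向("to_frame")滑动调小音量,并限制在[0, max_volume]内。"""
    if direction == "to_ear":
        return min(volume + step, max_volume)
    if direction == "to_frame":
        return max(volume - step, 0)
    return volume
```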
可选的,于本申请其他实施例中,步骤S501中的控制操作的检测具体可基于安装在智能眼镜的镜腿上的触控传感器进行。
具体的,通过该触控传感器对用户的第二控制操作进行检测,当通过该触控传感器检测到该第二控制操作时,通过该无线通信模块响应于该第二控制操作,控制切换智能眼镜的工作模式为通话模式或助听模式,其中,该第二控制操作与用户长按该触控传感器的动作对应,例如,当检测到用户长按触控传感器超过3秒时,将智能眼镜的工作模式切换为通话模式或者助听模式。具体切换为通话模式还是助听模式,可由切换前的工作模式确定,如切换前的工作模式为通话模式,则切换为助听模式。
或者,智能眼镜的镜腿或镜框外侧还安装有控制按键,当检测到用户点击该控制按键的操作时,通过该无线通信模块控制切换智能眼镜的工作模式为通话模式或助听模式。
可选的,于本申请其他实施例中,该方法还包括以下步骤:
步骤S601、通过智能眼镜的镜腿内侧安装的接近传感器检测用户是否佩戴或摘下智能眼镜;
步骤S602、当通过该接近传感器检测到用户佩戴智能眼镜时,通过该无线通信模块控制智能眼镜的扬声器播放音频数据;
步骤S603、当通过该接近传感器检测到用户摘下智能眼镜时,通过该无线通信模块控制该扬声器停止播放该音频数据,并当通过该接近传感器检测到用户摘下智能眼镜超过预设时长时,通过该无线通信模块执行关机操作,以减少耗电量。
其中,当通过该接近传感器检测到用户佩戴智能眼镜时,智能眼镜播放音频数据包括:上一次用户摘下智能眼镜时未播放完的音乐数据,智能眼镜内置的存储器上存储的默认音乐数据,以及,用户戴上智能眼镜时智能眼镜通过无线信号收发器接收的来自智能移动终端的音乐数据或通话语音数据中的任一种。该存储器可以与智能眼镜的无线通信模块通过总线电性连接,或者该存储器也可以是该无线通信模块中的MCU的内存。
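上述接近传感器的佩戴检测与播放/关机控制逻辑(步骤S601至S603),可以用如下草图示意(时间戳由外部传入以便说明,关机时长阈值为示例假设):

```python
class WearController:
    """接近传感器佩戴检测草图:佩戴时恢复播放,摘下时暂停播放,
    摘下超过power_off_after秒则执行关机以减少耗电。"""
    def __init__(self, power_off_after=300):
        self.power_off_after = power_off_after
        self.powered_on = True
        self._taken_off_at = None      # 首次检测到摘下的时间戳

    def on_sensor(self, worn, now):
        if not self.powered_on:
            return "off"
        if worn:
            self._taken_off_at = None
            return "play"              # 检测到佩戴 -> 播放音频
        if self._taken_off_at is None:
            self._taken_off_at = now
        if now - self._taken_off_at > self.power_off_after:
            self.powered_on = False    # 超过预设时长未佩戴 -> 关机省电
            return "off"
        return "pause"                 # 检测到摘下 -> 停止播放
```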
可选的,于本申请其他实施例中,在通过该无线通信模块对该语音数据进行第一波束成形处理之前,该方法还包括以下步骤:
步骤S701、通过该无线通信模块对该语音数据进行语音识别,并将识别的语音指令发送给智能移动终端;
步骤S702、智能移动终端响应于该语音指令,执行该语音指令指向的操作,其中,该语音指令指向的操作包括:拨打电话、接听电话以及挂断电话中的任一种;
步骤S703、当该语音指令指向的操作为拨打或接听电话时,智能移动终端将接收的通话语音数据发送给智能眼镜,以通过智能眼镜的扬声器输出该通话语音数据。
可选的,于本申请其他实施例中,该方法还包括以下步骤:
步骤S801、通过安装在智能眼镜上的运动传感器获取用户的运动数据,并通过该无线通信模块将该运动数据实时发送给智能移动终端;
步骤S802、智能移动终端实时接收并存储该运动数据,根据该运动数据、GPS数据,进行运动指标计算,并根据计算结果生成用于通知或提醒用户运动状态的实时语音数据,并将该实时语音数据发送给智能眼镜,以通过智能眼镜的扬声器将该实时语音数据进行输出。
其中,该GPS数据通过智能移动终端的GPS模块获取。
进一步的,该运动传感器为9轴传感器,该运动指标为跑步指标,该跑步指标包括:配速、距离、步数、头部左右平衡、步距和步频;
该方法还包括以下步骤:
步骤S803、智能移动终端根据该运动数据和该GPS数据,执行指标计算、姿势监测和运动提醒操作,并将指标计算结果通过智能移动终端的显示器进行实时显示。
具体的,该运动数据包括9轴传感器获取的感测数据,如:3维加速度计数据Ax,Ay,Az;3维陀螺仪数据Gx,Gy,Gz;以及3维磁感应传感器数据Mx,My,Mz。该9轴传感器的检测数据可以但不限于包括:计步数据、单击或双击的操作数据。智能移动终端结合本地的GPS数据,对上述运动数据进行算法处理和分析,计算得到用户的运动指标,并对该运动指标进行分析,从而得到用户的运动状态。
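上述结合GPS数据的跑步指标计算,可以用如下草图示意其中可由距离、步数与时长直接导出的部分(配速、步距、步频;输入格式与函数名均为示例假设,头部左右平衡等姿态指标需9轴数据,此处从略):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """两个GPS坐标点之间的大圆距离(米)。"""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    a = (math.sin((p2 - p1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(math.radians(lon2 - lon1) / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def running_indexes(gps_points, steps, duration_s):
    """跑步指标计算草图:由GPS轨迹累计距离,结合步数与时长
    得到配速(分钟/公里)、步距(米/步)与步频(步/分钟)。"""
    dist = sum(haversine_m(*gps_points[i], *gps_points[i + 1])
               for i in range(len(gps_points) - 1))
    pace = (duration_s / 60.0) / (dist / 1000.0) if dist else float("inf")
    return {"distance_m": dist,
            "pace_min_per_km": pace,
            "stride_m": dist / steps if steps else 0.0,
            "cadence_spm": steps / (duration_s / 60.0)}
```

例如,约1公里的轨迹、900步、5分钟,可得配速约5分钟/公里、步频180步/分钟。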
可选的,于本申请其他实施例中,该方法还包括以下步骤:
智能移动终端当检测到用户在预设的客户端程序进行的收音方向选择操作时,向智能眼镜发送收音方向控制指令,所述收音方向控制指令中包含所述收音方向选择操作指示的目标方向;
智能眼镜接收该收音方向控制指令,若该目标方向为前方,则在助听模式下,对语音数据进行第二波束成形处理,以使得至少一个拾音装置的声束收音指向前方,若该目标方向为全方向,则在助听模式下,不对语音数据进行第二波束成形处理,以使得至少一个拾音装置的声束收音指向全方向。
基于上述步骤,用户可以通过智能移动终端中的app实现对智能眼镜在助听模式下的收音方向的控制,例如,用户可以通过该app选择智能眼镜的声束指向前方或者360°全方位收音,从而可提高智能眼镜收音方向控制的便捷性和灵活性。
可选的,于本申请其他实施例中,该方法还包括以下步骤:
智能移动终端当检测到用户在预设的客户端程序进行的音量调节操作时,向智能眼镜发送音量调节控制指令,该音量调节控制指令中包含该音量调节操作指示的目标扬声器、目标频段以及目标音量;
智能眼镜接收该音量调节控制指令,通过利用语音均衡器,将该目标扬声器输出的声音数据调整为目标频段以及目标音量的声音数据。
基于上述步骤,用户可以通过智能移动终端中的app实现对智能眼镜的扬声器、播放声音的频段和音量的调整控制,例如,用户可以通过该app选择将智能眼镜的某一个或多个或全部扬声器播放的声音数据的频段调整为所需频段,并增大或减小该声音数据的音量,从而可提高智能眼镜声音播放控制的便捷性和灵活性。
于本实施例中,通过智能眼镜上安装的无线通信模块基于用户的控制操作,控制切换智能眼镜的工作模式,一方面,该无线通信模块在通话模式下对拾音装置获取的语音数据进行第一波束成形处理,以使得拾音装置的声束收音指向下方,另一方面,该无线通信模块在助听模式下对该语音数据进行第二波束成形处理,以使得拾音装置的声束收音指向前方,从而实现了基于智能眼镜的同一硬件平台的电话通话功能和助听功能,扩大了智能眼镜的功能。此外,由于无需在智能眼镜上加装额外的助听设备,还可减轻智能眼镜的重量,减小耗电量,降低智能眼镜的制造成本。
在本申请所提供的几个实施例中,应该理解到,所揭露的智能眼镜、系统和方法,可以通过其它的方式实现。例如多个模块或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或模块的间接耦合或通信连接,可以是电性,机械或其它的形式。
需要说明的是,对于前述的各方法实施例,为了简便描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本申请并不受所描述的动作顺序的限制,因为依据本申请,某些步骤可以采用其它顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定都是本申请所必须的。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其它实施例的相关描述。
以上为对本申请所提供的智能眼镜及其控制方法和系统的描述,对于本领域的技术人员,依据本申请实施例的思想,在具体实施方式及应用范围上均会有改变之处,综上,本说明书内容不应理解为对本申请的限制。

Claims (2)

  1. 一种智能眼镜,其特征在于,包括:镜框、多条镜腿、至少一个拾音装置以及无线通信模块;
    所述镜腿连接所述镜框,所述至少一个拾音装置设置在所述多条镜腿中的至少一条镜腿上,所述无线通信模块设置在所述多条镜腿中的任意一条镜腿的腔体内并与所述至少一个拾音装置电性连接;
    所述无线通信模块,用于控制切换所述智能眼镜的工作模式,所述工作模式包括通话模式和助听模式;
    所述无线通信模块,还用于在所述通话模式下对所述至少一个拾音装置获取的语音数据进行第一波束成形处理,以使得所述至少一个拾音装置的声束收音指向下方;以及
    所述无线通信模块,还用于在所述助听模式下对所述语音数据进行第二波束成形处理,以使得所述至少一个拾音装置的声束收音指向前方。
  2. 如权利要求1所述的智能眼镜,其特征在于,所述多条镜腿包括第一镜腿和第二镜腿,所述第一镜腿的前端和所述第二镜腿的前端分别连接所述镜框的两侧,所述至少一个拾音装置安装在所述第一镜腿的前端。
    3 . 如权利要求2所述的智能眼镜,其特征在于,所述智能眼镜还包括:与所述无线通信模块电性连接的第一扬声器和第二扬声器;
    所述第一扬声器安装在所述第一镜腿上,且所述第一扬声器的输出口位于所述第一镜腿的尾端;
    所述第二扬声器安装在所述第二镜腿上,且所述第二扬声器的输出口位于所述第二镜腿的尾端。
    4 .如权利要求3所述的智能眼镜,其特征在于,所述无线通信模块包括:控制器、语音数据处理器和无线信号收发器;
    所述控制器,用于控制切换所述智能眼镜的工作模式;
    所述语音数据处理器,用于对所述语音数据进行处理;
    所述无线信号收发器,用于与智能移动终端进行数据交互。
    5 .如权利要求4所述的智能眼镜,其特征在于,所述第一扬声器和所述第二扬声器为单声道扬声器;
    所述控制器为微控制单元;
    所述语音数据处理器为数字信号处理器或语音数据处理集成电路;
    所述无线信号收发器使用蓝牙协议、无线保真协议、近场通信协议、紫蜂、数字生活网络联盟协议、无载波通信协议、射频识别协议以及蜂窝移动通信协议中的至少一种作为与所述智能移动终端进行数据交互的通信协议。
    6 .如权利要求4所述的智能眼镜,其特征在于,所述语音数据处理器,还用于在所述通话模式下,对下行通道的数据进行语音均衡处理和输出音量处理,以及对上行通道的数据进行回音消除处理、所述第一波束成形处理以及噪声抑制处理。
    7 .如权利要求6所述的智能眼镜,其特征在于,
    所述语音数据处理器,还用于在所述通话模式下,对所述无线信号收发器接收的来自智能移动终端的通话语音数据进行所述语音均衡处理和所述输出音量控制处理,并将输出音量控制处理后的通话语音数据发送给所述第一扬声器和所述第二扬声器进行输出;
    所述语音数据处理器,还用于利用所述输出音量控制处理后的通话语音数据作为参考信号,对所述语音数据进行回音消除处理,并对回音消除处理后的所述语音数据进行所述第一波束成形处理以及噪声抑制处理,并将噪声抑制处理后的语音数据通过所述无线信号收发器发送给所述智能移动终端。
    8 .如权利要求4所述的智能眼镜,其特征在于,
    所述语音数据处理器,还用于在所述助听模式下,对所述语音数据进行反馈取消处理、所述第二波束成形处理、噪声抑制处理、语音均衡处理以及用户语音检测处理,并将语音均衡处理后的语音数据发送给所述第一扬声器和所述第二扬声器进行输出,其中所述语音均衡处理后的语音数据还用作所述反馈取消处理的参考数据。
    9 .如权利要求4所述的智能眼镜,其特征在于,所述智能眼镜还包括:与所述无线通信模块电性连接的至少一个传感器;
    所述至少一个传感器安装在所述第一镜腿和/或所述第二镜腿的内侧和/或外侧。
    10 .如权利要求9所述的智能眼镜,其特征在于,所述至少一个传感器包括:触控传感器、接近传感器、加速度计、陀螺仪、磁感应传感器以及惯性测量单元中的至少一种。
    11 .如权利要求9所述的智能眼镜,其特征在于,所述至少一个传感器包括:9轴传感器、至少一个触控传感器和至少一个接近传感器;
    其中,所述至少一个触控传感器安装在所述第一镜腿和/或所述第二镜腿的外侧。
    12 .如权利要求11所述的智能眼镜,其特征在于,
    所述至少一个触控传感器,用于检测用户的第一控制操作,所述第一控制操作用于调整音量;
    所述控制器,还用于响应于所述第一控制操作,控制调整所述音量。
    13 .如权利要求12所述的智能眼镜,其特征在于,所述第一控制操作包括用于调大音量的控制操作和用于调小音量的控制操作;
    其中,所述用于调大音量的控制操作与用户手指在所述触控传感器上向耳方向扫动的动作对应,所述用于调小音量的控制操作与所述用户手指在所述触控传感器上向镜框方向扫动的动作对应。
    14 .如权利要求11所述的智能眼镜,其特征在于,所述至少一个触控传感器,用于检测用户的第二控制操作;
    所述控制器,还用于响应于所述第二控制操作,控制切换所述智能眼镜的工作模式。
    15 .如权利要求14所述的智能眼镜,其特征在于,所述第二控制操作与所述用户点击或长按所述触控传感器的动作对应。
    16 .如权利要求11所述的智能眼镜,其特征在于,所述至少一个接近传感器安装在所述第一镜腿和/或所述第二镜腿的内侧;
    所述至少一个接近传感器,用于检测用户是否佩戴或摘下所述智能眼镜以及获取所述用户未佩戴所述智能眼镜的时长,并将检测结果发送给所述控制器;
    所述控制器,用于根据所述检测结果,在通过所述接近传感器检测到用户佩戴所述智能眼镜时播放音乐数据,在通过所述接近传感器检测到所述用户摘下所述智能眼镜时停止播放所述音乐数据,以及在通过所述接近传感器检测到所述用户超过预设时长没有佩戴所述智能眼镜时执行关机操作。
    17 .如权利要求9所述的智能眼镜,其特征在于,所述控制器,还用于将所述至少一个传感器获取的数据,通过无线信号收发器发送给智能移动终端。
    18 .如权利要求1所述的智能眼镜,其特征在于,所述拾音装置为麦克风阵列,所述麦克风阵列包括至少两个麦克风。
    19 .如权利要求18所述的智能眼镜,其特征在于,所述麦克风阵列包括第一麦克风、第二麦克风和第三麦克风;
    其中,所述第三麦克风与所述第一麦克风之间的距离,与所述第三麦克风与所述第二麦克风之间的距离相等。
    20 .如权利要求19所述的智能眼镜,其特征在于,所述智能眼镜配置有一个所述麦克风阵列,所述第一麦克风、所述第二麦克风和所述第三麦克风均安装在同一条镜腿上,且所述第一麦克风和所述第二麦克风比所述第三麦克风更靠近所述镜框;
    所述控制器,用于在所述通话模式下,控制所述第一麦克风和所述第二麦克风获取所述语音数据;
    所述控制器,还用于在所述助听模式下,控制所述第一麦克风、所述第二麦克风和所述第三麦克风获取所述语音数据。
    21 .如权利要求2所述的智能眼镜,其特征在于,所述智能眼镜还包括电池,所述电池安装在所述第一镜腿上且与所述无线通信模块电性连接。
    22 .如权利要求2所述的智能眼镜,其特征在于,所述智能眼镜还包括至少一个助听器,所述至少一个助听器安装在所述第一镜腿和/或所述第二镜腿上并与所述无线通信模块电性连接;
    所述无线通信模块,还用于在所述助听模式下,控制所述至少一个助听器输出所述语音数据。
    23 .如权利要求2所述的智能眼镜,其特征在于,所述智能眼镜还包括至少一个控制按键,所述至少一个控制按键安装在所述第一镜腿和/或所述第二镜腿的外侧并与所述无线通信模块电性连接;
    所述至少一个控制按键,用于触发所述无线通信模块控制切换所述智能眼镜的工作模式或运行状态;
    其中,所述运行状态包括闲置状态和工作状态,所述工作状态包括所述通话模式和所述助听模式。
    24 .如权利要求1所述的智能眼镜,其特征在于,
    所述无线通信模块,还用于接收智能移动终端发送的收音方向控制指令;
    所述无线通信模块,还用于当所述收音方向控制指令指示的方向为前方时,在所述助听模式下,对所述语音数据进行所述第二波束成形处理,以使得所述至少一个拾音装置的声束收音指向前方;
    所述无线通信模块,还用于当所述收音方向控制指令指示的方向为全方向时,在所述助听模式下,不对所述语音数据进行所述第二波束成形处理,以使得所述至少一个拾音装置的声束收音指向全方向。
    25 .如权利要求5所述的智能眼镜,其特征在于,所述语音数据处理器包括语音均衡器;
    所述无线信号收发器,用于接收智能移动终端发送的音量调节控制指令并发送给所述语音数据处理器;
    所述语音数据处理器通过利用所述语音均衡器,将待输出的声音数据,调整为所述音量调节控制指令指向的频段及音量的声音数据,并将调整后的声音数据发送给所述音量调节控制指令指向的扬声器。
    26 .一种智能眼镜控制系统,其特征在于,包括:智能移动终端以及如权利要求1至25中的任一项所述的智能眼镜;
    所述智能移动终端,用于与所述智能眼镜进行数据交互。
    27 .如权利要求26所述的智能眼镜控制系统,其特征在于,还包括:云端智能设备;
    所述云端智能设备,用于存储所述智能移动终端发送的数据,以及基于预设的处理逻辑对所述数据进行处理。
    28 .如权利要求26所述的智能眼镜控制系统,其特征在于,所述智能移动终端,还用于与所述智能眼镜基于蓝牙协议进行配对,并在配对成功后,将播放的音乐数据发送给所述智能眼镜,以进行播放。
    29 .如权利要求26所述的智能眼镜控制系统,其特征在于,所述智能移动终端,还用于通过本地的全球定位系统(GPS)模块获取GPS数据,并将所述GPS数据发送给所述智能眼镜,以用于所述智能眼镜的定位。
    30 .如权利要求29所述的智能眼镜控制系统,其特征在于,所述智能移动终端还用于实时接收并存储所述智能眼镜发送的运动数据,根据所述运动数据和所述GPS数据,进行运动指标计算,并根据计算结果生成用于通知或提醒用户运动状态的实时语音数据,并将所述实时语音数据发送给所述智能眼镜进行输出;
    其中,所述运动数据包括所述智能眼镜通过9轴传感器获取的数据,所述运动指标为跑步指标,所述跑步指标包括:配速、距离、步数、头部左右平衡、步距和步频;
    所述智能移动终端,还用于根据所述运动数据和所述GPS数据,执行指标计算、姿势监测和运动提醒操作,并将指标计算结果通过所述智能移动终端的显示器进行实时显示。
    31 .如权利要求26所述的智能眼镜控制系统,其特征在于,所述智能移动终端,还用于响应于所述智能眼镜发送的语音指令,执行拨打、接听或挂断电话的操作,以及在通话过程中,将通话语音数据发送给所述智能眼镜。
    32 .如权利要求26所述的智能眼镜控制系统,其特征在于,
    所述智能移动终端,还用于根据用户的收音方向选择操作,向所述智能眼镜发送收音方向控制指令;
    所述智能移动终端,还用于根据用户的音量调节操作,向所述智能眼镜发送音量调节控制指令。
    33 .一种智能眼镜控制方法,其特征在于,所述智能眼镜包括:无线通信模块以及与所述无线通信模块电性连接的拾音装置,所述方法包括:
    通过所述无线通信模块响应于用户的控制操作,控制切换所述智能眼镜的工作模式为通话模式或助听模式;
    通过所述拾音装置获取语音数据;
    若所述智能眼镜的当前工作模式为所述通话模式,则通过所述无线通信模块对所述语音数据进行第一波束成形处理,以使得所述拾音装置的声束收音指向下方;
    若所述智能眼镜的当前工作模式为所述助听模式,则通过所述无线通信模块对所述语音数据进行第二波束成形处理,以使得所述拾音装置的声束收音指向前方。
    34 .如权利要求33所述的方法,其特征在于,所述智能眼镜还包括与所述无线通信模块电性连接的扬声器,所述方法还包括:
    在所述助听模式下,通过所述无线通信模块对所述语音数据进行反馈取消处理、噪声抑制处理、语音均衡处理以及用户语音检测处理,将语音均衡处理后的语音数据发送给所述扬声器进行输出,并将所述语音均衡处理后的语音数据用作所述反馈取消处理的参考数据。
    35 .如权利要求33所述的方法,其特征在于,所述智能眼镜还包括与所述无线通信模块电性连接的扬声器,所述方法还包括:
    在所述通话模式下,通过所述无线通信模块对来自智能移动终端的通话语音数据进行语音均衡处理和输出音量控制处理,并将输出音量控制处理后的通话语音数据发送给所述扬声器进行输出;
    通过所述无线通信模块利用所述输出音量控制处理后的通话语音数据作为参考信号,对所述语音数据进行回音消除处理;以及
    通过所述无线通信模块对所述语音数据进行噪声抑制处理,将噪声抑制处理后的语音数据发送给所述智能移动终端。
    36. The method according to claim 35, wherein, before the acquiring of voice data via the sound pickup device, the method further comprises:
    pairing, by the wireless communication module via the Bluetooth protocol, with the smart mobile terminal, to establish a data transmission channel between the smart glasses and the smart mobile terminal.
    37. The method according to claim 33, wherein the smart glasses further comprise a touch sensor electrically connected to the wireless communication module, and the method further comprises:
    detecting, via the touch sensor, a first control operation by the user;
    when the first control operation is detected via the touch sensor, adjusting, by the wireless communication module in response to the first control operation, the output volume of the smart glasses;
    wherein the first control operation comprises a control operation for increasing the volume and a control operation for decreasing the volume;
    wherein the control operation for increasing the volume corresponds to the user's finger swiping on the touch sensor toward the ear, and the control operation for decreasing the volume corresponds to the user's finger swiping on the touch sensor toward the frame of the smart glasses.
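The gesture mapping in claim 37 reduces to a small lookup: swipe toward the ear raises the volume, swipe toward the frame lowers it. The step size and 0–100 range below are assumed for the example, not taken from the patent.

```python
# Illustrative sketch: mapping the temple touch sensor's swipe direction
# to a clamped volume change. Gesture names and step size are hypothetical.
def adjust_volume(volume: int, swipe: str) -> int:
    """Return the new output volume after a swipe gesture, clamped to 0..100."""
    step = {"toward_ear": +10, "toward_frame": -10}[swipe]
    return max(0, min(100, volume + step))
```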
    38. The method according to claim 33, wherein the smart glasses further comprise a touch sensor electrically connected to the wireless communication module, and the switching, by the wireless communication module in response to a control operation by the user, of the working mode of the smart glasses to a call mode or a hearing aid mode comprises:
    detecting, via the touch sensor, a second control operation by the user, wherein the second control operation corresponds to the user long-pressing the touch sensor;
    when the second control operation is detected via the touch sensor, switching, by the wireless communication module in response to the second control operation, the working mode to the call mode or the hearing aid mode; or,
    the smart glasses further comprise a control button electrically connected to the wireless communication module, and the switching, by the wireless communication module in response to a control operation by the user, of the working mode of the smart glasses to a call mode or a hearing aid mode comprises:
    when a click on the control button by the user is detected, switching, by the wireless communication module in response to the click, the working mode to the call mode or the hearing aid mode.
    39. The method according to claim 33, wherein the smart glasses further comprise a speaker and a proximity sensor electrically connected to the wireless communication module, the proximity sensor being mounted on the inner side of a temple of the smart glasses, and the method further comprises:
    detecting, via the proximity sensor, whether the user has put on or taken off the smart glasses;
    when it is detected via the proximity sensor that the user has put on the smart glasses, controlling the speaker, by the wireless communication module, to play audio data;
    when it is detected via the proximity sensor that the user has taken off the smart glasses, controlling the speaker, by the wireless communication module, to stop playing the audio data, and, when it is detected via the proximity sensor that the user has left the smart glasses off for longer than a preset duration, performing, by the wireless communication module, a power-off operation;
    wherein the audio data comprises any of the following:
    music data the smart glasses had not finished playing the last time the user took them off;
    default music data stored in the built-in memory of the smart glasses; and
    music data or call voice data received, via the wireless communication module, from a smart mobile terminal when the user puts on the smart glasses.
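The wear-detection behavior in claim 39 (resume playback on wear, stop on removal, power off after a preset removal duration) can be sketched as a small controller driven by proximity-sensor events. This is an illustrative model only; the class, event names and the 300-second timeout are assumptions, not the patent's values.

```python
# Illustrative sketch: playback and power state driven by the temple's
# proximity sensor. Timestamps are plain seconds for simplicity.
POWER_OFF_TIMEOUT_S = 300.0  # assumed "preset duration" before power-off

class WearController:
    def __init__(self):
        self.playing = False
        self.powered_on = True
        self.removed_at = None   # time of the most recent removal, if any

    def on_proximity(self, worn: bool, now: float) -> None:
        """Handle a wear/removal event from the proximity sensor."""
        if worn:
            self.removed_at = None
            self.playing = True          # resume audio output
        else:
            self.playing = False         # stop audio output
            if self.removed_at is None:
                self.removed_at = now

    def tick(self, now: float) -> None:
        """Periodic check: power off once the removal timeout has elapsed."""
        if self.removed_at is not None and now - self.removed_at >= POWER_OFF_TIMEOUT_S:
            self.powered_on = False
```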
    40. The method according to claim 35, wherein, before the first beamforming processing is performed on the voice data by the wireless communication module, the method further comprises:
    performing, by the wireless communication module, speech recognition on the voice data, and sending the recognized voice instruction to the smart mobile terminal;
    executing, by the smart mobile terminal in response to the voice instruction, the operation designated by the voice instruction, wherein the designated operation comprises any of: making a call, answering a call, and hanging up a call;
    when the operation designated by the voice instruction is making or answering a call, sending, by the smart mobile terminal, the received call voice data to the smart glasses, so that the call voice data is output via the speaker of the smart glasses.
    41. The method according to claim 33, wherein the smart glasses further comprise a motion sensor electrically connected to the wireless communication module, and the method further comprises:
    acquiring motion data via the motion sensor, and sending the motion data in real time, via the wireless communication module, to a smart mobile terminal;
    receiving and storing, by the smart mobile terminal in real time, the motion data, calculating exercise metrics from the motion data and Global Positioning System (GPS) data, generating from the calculation results real-time voice data notifying or reminding the user of their exercise status, and sending the real-time voice data to the smart glasses, so that the real-time voice data is output via the speaker of the smart glasses;
    wherein the GPS data is obtained via the GPS module of the smart mobile terminal;
    the motion sensor is a 9-axis sensor, the exercise metrics are running metrics, and the running metrics comprise: pace, distance, step count, head left-right balance, stride length and cadence;
    the method further comprises:
    performing, by the smart mobile terminal based on the motion data and the GPS data, metric calculation, posture monitoring and exercise reminder operations, and displaying the calculated metrics in real time on the display of the smart mobile terminal.
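Several of the running metrics listed in claim 41 follow directly from GPS distance, elapsed time and the step count reported by the 9-axis sensor. The sketch below uses the standard textbook definitions of pace, stride length and cadence; these formulas are an assumption for illustration, not taken from the patent.

```python
# Illustrative sketch: deriving running metrics from distance, duration
# and step count. Head left-right balance is omitted because it needs the
# raw 9-axis orientation stream, which this toy example does not model.
def running_metrics(distance_m: float, duration_s: float, steps: int) -> dict:
    """Compute basic running metrics from aggregated sensor data."""
    km = distance_m / 1000.0
    return {
        "distance_km": km,
        "steps": steps,
        "pace_s_per_km": duration_s / km,           # seconds per kilometre
        "stride_m": distance_m / steps,             # average stride length
        "cadence_spm": steps * 60.0 / duration_s,   # steps per minute
    }
```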
    42. The method according to claim 33, wherein the operating states of the smart glasses comprise an idle state and a working state, the working state comprising the call mode and the hearing aid mode, and the method further comprises:
    in the idle state, when an incoming-call event is detected, controlling, by the wireless communication module, the smart glasses to enter the call mode, and, in the call mode, when a hang-up event is detected, controlling the smart glasses to return to the idle state;
    in the hearing aid mode, when the incoming-call event is detected, controlling, by the wireless communication module, the smart glasses to enter the call mode, and, in the call mode, when the hang-up event is detected, controlling the smart glasses to return to the hearing aid mode; and
    in the idle state, when a button event is detected, controlling, by the wireless communication module, the smart glasses to enter the hearing aid mode, and, in the hearing aid mode, when the button event is detected again, controlling the smart glasses to return to the idle state.
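The transitions in claim 42 form a small state machine: a call interrupts either the idle state or the hearing aid mode and, on hang-up, the glasses return to whichever state they came from, while the button toggles between idle and hearing aid. The following is a minimal sketch of that state machine; the state and event names are chosen for the example.

```python
# Illustrative sketch: operating-state machine for the smart glasses.
# States: "idle", "call", "hearing_aid". Events: "incoming_call",
# "hang_up", "button".
class ModeMachine:
    def __init__(self):
        self.state = "idle"
        self._pre_call = "idle"  # state to restore when the call ends

    def handle(self, event: str) -> str:
        if event == "incoming_call" and self.state in ("idle", "hearing_aid"):
            self._pre_call = self.state      # remember where we came from
            self.state = "call"
        elif event == "hang_up" and self.state == "call":
            self.state = self._pre_call      # return to the pre-call state
        elif event == "button" and self.state in ("idle", "hearing_aid"):
            self.state = "idle" if self.state == "hearing_aid" else "hearing_aid"
        return self.state
```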
    43. The method according to claim 33, wherein the sound pickup device is a microphone array composed of a first microphone, a second microphone and a third microphone, the first microphone and the second microphone being closer to the frame of the smart glasses than the third microphone, and the distance between the first microphone and the third microphone being equal to the distance between the second microphone and the third microphone, and the acquiring of voice data via the sound pickup device comprises:
    after the working mode of the smart glasses has been switched to the call mode, controlling, by the wireless communication module, the first microphone and the second microphone to acquire the user's voice data; and
    after the working mode of the smart glasses has been switched to the hearing aid mode, controlling, by the wireless communication module, the first microphone, the second microphone and the third microphone to acquire the voice data of the user's conversation partner.
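Claim 43's microphone selection is a simple mode-to-subset mapping: the two frame-side microphones capture the wearer's own voice in call mode, and the full three-element array captures a conversation partner in hearing aid mode. A minimal sketch, with microphone identifiers chosen for the example:

```python
# Illustrative sketch: choosing the active microphones per working mode.
# "mic1"/"mic2" are the frame-side pair, "mic3" the third, equidistant
# element of the array described in claim 43.
def active_mics(mode: str) -> tuple:
    if mode == "call":
        return ("mic1", "mic2")             # frame-side pair, toward the mouth
    if mode == "hearing_aid":
        return ("mic1", "mic2", "mic3")     # full array for forward pickup
    raise ValueError(f"unknown mode: {mode}")
```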
PCT/CN2022/106802 2021-07-22 2022-07-20 Smart glasses and control method and system thereof WO2023001195A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/418,377 US20240163603A1 (en) 2021-07-22 2024-01-22 Smart glasses, system and control method thereof

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110833422.7A 2021-07-22 2021-07-22 Smart glasses and control method and system thereof
CN202110833422.7 2021-07-22

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/418,377 Continuation US20240163603A1 (en) 2021-07-22 2024-01-22 Smart glasses, system and control method thereof

Publications (1)

Publication Number Publication Date
WO2023001195A1 true WO2023001195A1 (zh) 2023-01-26

Family

ID=84978949

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/106802 WO2023001195A1 (zh) 2021-07-22 2022-07-20 Smart glasses and control method and system thereof

Country Status (3)

Country Link
US (1) US20240163603A1 (zh)
CN (1) CN115695620A (zh)
WO (1) WO2023001195A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117198312B (zh) * 2023-11-02 2024-01-30 深圳市魔样科技有限公司 Voice interaction processing method for smart glasses
CN117369144B (zh) * 2023-12-07 2024-04-09 歌尔股份有限公司 Temple assembly and head-mounted display device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103220612A (zh) * 2013-04-25 2013-07-24 中国人民解放军总医院 Combined glasses-type single-channel or dual-channel piezoelectric-ceramic bone-conduction hearing aid
CN103646587A (zh) * 2013-12-05 2014-03-19 北京京东方光电科技有限公司 Smart glasses and control method thereof
US20150036856A1 (en) * 2013-07-31 2015-02-05 Starkey Laboratories, Inc. Integration of hearing aids with smart glasses to improve intelligibility in noise
CN206301081U (zh) * 2016-12-30 2017-07-04 贵州小爱机器人科技有限公司 Smart glasses with dual microphones and smart interaction system
CN111429928A (zh) * 2019-01-10 2020-07-17 陈筱涵 Hearing aid system with sound-pickup scene switching function


Also Published As

Publication number Publication date
CN115695620A (zh) 2023-02-03
US20240163603A1 (en) 2024-05-16

Similar Documents

Publication Publication Date Title
US11043980B2 (en) Method for controlling earphone switching, earphone, and earphone system
US11102697B2 (en) Method for controlling earphone switching and earphone
EP3591987B1 (en) Method for controlling earphone switching, earphone, and earphone system
WO2023001195A1 (zh) Smart glasses and control method and system thereof
EP3562130B2 (en) Control method at wearable apparatus and related apparatuses
CN110166890B (zh) Audio playback and capture method, device, and storage medium
CN110764730A (zh) Method and apparatus for playing audio data
CN108540900B (zh) Volume adjustment method and related product
WO2019154182A1 (zh) Volume setting method for an application, and mobile terminal
CN109062535B (zh) Sound emission control method and apparatus, electronic device, and computer-readable medium
CN109429132A (zh) Earphone system
WO2020019820A1 (zh) Microphone hole-blockage detection method and related product
WO2022033176A1 (zh) Audio playback control method and apparatus, electronic device, and storage medium
CN108810198B (zh) Sound emission control method and apparatus, electronic device, and computer-readable medium
WO2020025034A1 (zh) Master-slave switching method for a wearable device and related product
CN109067965B (zh) Translation method, translation apparatus, wearable apparatus, and storage medium
US20230379615A1 (en) Portable audio device
CN110099337B (zh) Bone conduction audio output mode adjustment method, wearable device, and storage medium
WO2021238844A1 (zh) Audio output method and electronic device
JP2022522208A (ja) Mobile terminal and sound output control method
WO2024021736A1 (zh) Bluetooth multimedia packet transmission method, apparatus, device, and system
EP3246791B1 (en) Information processing apparatus, informating processing system, and information processing method
WO2020025033A1 (zh) Volume-based master-slave switching method and related product
CN112543247B (zh) Smart wristband and control method thereof
CN108958631A (zh) Screen sound emission control method and apparatus, and electronic device

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 22845360

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE