WO2023083105A1 - Electronic device, interaction module, and control method and control apparatus therefor - Google Patents

Electronic device, interaction module, and control method and control apparatus therefor

Info

Publication number
WO2023083105A1
Authority
WO
WIPO (PCT)
Prior art keywords
ultrasonic
signal
user
interactive module
processing unit
Prior art date
Application number
PCT/CN2022/129793
Other languages
French (fr)
Chinese (zh)
Inventor
Pang Shengli (庞胜利)
Original Assignee
Goertek Microelectronics Co., Ltd. (歌尔微电子股份有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Goertek Microelectronics Co., Ltd. (歌尔微电子股份有限公司)
Publication of WO2023083105A1 publication Critical patent/WO2023083105A1/en

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G PHYSICS
    • G08 SIGNALLING
    • G08C TRANSMISSION SYSTEMS FOR MEASURED VALUES, CONTROL OR SIMILAR SIGNALS
    • G08C 23/00 Non-electrical signal transmission systems, e.g. optical systems
    • G08C 23/02 Non-electrical signal transmission systems, e.g. optical systems using infrasonic, sonic or ultrasonic waves
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223 Execution procedure of a spoken command

Definitions

  • the present application relates to the field of interaction technology, and in particular to an electronic device, an interaction module, a control method and a control device thereof.
  • Although voice interaction is relatively convenient to use, its efficiency is relatively low in scenarios that require complex or frequent control.
  • The interactive module includes an ultrasonic sound generating device, a sound pickup device and a signal processing unit, wherein the control method of the interactive module includes: controlling the ultrasonic sound generating device to emit an ultrasonic signal, and controlling the sound pickup device to receive a sound wave signal;
  • controlling the signal processing unit to determine the user's voice command information and/or gesture command information according to the sound wave signal; and
  • controlling the signal processing unit to determine a user instruction according to the voice instruction information and/or the gesture instruction information.
  • The step of the control signal processing unit determining the user's voice command information and/or gesture command information according to the sound wave signal includes:
  • controlling the signal processing unit to determine the sound signal and/or the ultrasonic signal in the sound wave signal; and
  • controlling the signal processing unit to determine the user's voice instruction information according to the sound signal, and/or the user's gesture instruction information according to the ultrasonic signal.
  • The step of controlling the ultrasonic sound generating device to emit ultrasonic signals is specifically: controlling the ultrasonic sound generating device to emit, at intervals, ultrasonic signals having a first characteristic parameter.
  • Determining the user's gesture instruction information according to the ultrasonic signal is specifically: determining, among the received ultrasonic signals, the ultrasonic signal having the first characteristic parameter, and determining the user's gesture instruction information according to the ultrasonic signal having the first characteristic parameter.
  • Before the step of controlling the ultrasonic sound generating device to emit ultrasonic signals and controlling the sound pickup device to receive the sound wave signal, the control method of the interaction module further includes: controlling the sound pickup device to receive the sound wave signal in the current environment; and
  • controlling the signal processing unit to determine the ultrasonic signal in the current environment according to the sound wave signal in the current environment, and to configure the first characteristic parameter according to the ultrasonic signal in the current environment.
  • The step of the control signal processing unit determining the user instruction according to the voice instruction information and/or the gesture instruction information is specifically:
  • when the signal processing unit determines the voice command information but not the gesture command information, taking the voice command corresponding to the voice command information as the user command;
  • when the signal processing unit determines the gesture command information but not the voice command information, taking the gesture command corresponding to the gesture command information as the user command; and
  • when the signal processing unit determines both the voice instruction information and the gesture instruction information, taking, according to the instruction priorities of the voice instruction corresponding to the voice instruction information and the gesture instruction corresponding to the gesture instruction information, the one with the higher instruction priority as the user instruction.
  • The present application also proposes a control device for an interactive module, the interactive module including an ultrasonic sound generating device, a sound pickup device and a signal processing unit, wherein the control device includes a memory, a processor, and
  • an intelligent interactive control program stored in the memory and executable on the processor; when the processor executes the intelligent interactive control program, the above-mentioned control method of the interactive module is realized.
  • The present application also proposes an interactive module, which includes:
  • an ultrasonic sound generating device for emitting ultrasonic signals;
  • a sound pickup device for receiving sound wave signals; and
  • a signal processing unit configured to determine the user's voice instruction information and/or gesture instruction information according to the sound wave signal, and to determine the user instruction according to the voice instruction information and/or the gesture instruction information.
  • The interaction module further includes:
  • a casing, in which the ultrasonic sound generating device, the sound pickup device and the signal processing unit are accommodated.
  • The present application also proposes an electronic device, the electronic device comprising an interactive module and a control device of the interactive module, the control device being electrically connected with the interaction module.
  • The technical solution of the present application controls the ultrasonic sound generating device to emit ultrasonic signals and controls the sound pickup device to receive the sound wave signal; controls the signal processing unit to determine the user's voice command information and/or gesture command information according to the sound wave signal; and controls the signal processing unit to determine a user instruction according to the voice instruction information and/or the gesture instruction information.
  • The technical solution of the present application combines voice interaction with gesture interaction, so that users can flexibly choose between the two interaction methods according to actual needs to control electronic devices. Compared with pure voice interaction, the interaction process can be more efficient and concise, with higher adjustment accuracy, greatly improving the convenience and efficiency of interaction.
  • The technical solution of the present application uses ultrasonic waves to realize posture detection, which avoids the dependence of image-based posture detection on light sources and improves the stability of posture detection; it can also use the same sound pickup device to simultaneously acquire the user's voice and posture, so no additional camera components need to be installed, which is conducive to the miniaturization and lightweight design of electronic equipment.
  • FIG. 1 is a schematic flow chart of the steps of an embodiment of the control method of the interactive module of the present application.
  • FIG. 2 is a schematic structural diagram of the hardware operating environment of an embodiment of the control device of the interactive module of the present application
  • FIG. 3 is a schematic structural diagram of an embodiment of an interactive module of the present application.
  • FIG. 4 is a schematic structural diagram of another embodiment of the interactive module of the present application.
  • FIG. 5 is a schematic diagram of modules of an embodiment of an electronic device according to the present application.
  • the present application proposes a method for controlling an interactive module.
  • Most electronic devices that use voice interaction adopt question-and-answer interaction: after receiving the user's voice command, the electronic device executes the corresponding function and gives the user voice feedback to complete the interaction.
  • In complex or frequent control scenarios, the user is required to issue complex or repeated voice commands, which makes interactive control inefficient and inconvenient.
  • For example, when adjusting brightness or volume, the user can only repeat the voice command and then adjust again according to the effect of the current adjustment; the overall adjustment process is lengthy and complicated. Moreover, question-and-answer interaction cannot be applied to scenarios where it is inconvenient for the user to speak, such as during meetings or when people are resting nearby.
  • the subject of execution of the control method of the interaction module of the present application may be a control device of the interaction module.
  • An ultrasonic sound generating device, a sound pickup device and a signal processing unit can be integrated in the interactive module.
  • The control method of the interactive module includes:
  • Step S100: controlling the ultrasonic sound generating device to emit ultrasonic signals, and controlling the sound pickup device to receive the sound wave signal;
  • The control device of the interactive module can control the interactive module to work when it is triggered to enter the interactive mode, specifically: controlling the ultrasonic sound generating device in the interactive module to emit sound wave signals with a frequency in the range of 20 kHz to 40 kHz, while controlling the sound pickup device to receive sound wave signals with a frequency within a first preset frequency range.
  • The sound wave signal in the 20 kHz to 40 kHz range is the ultrasonic signal.
  • The received sound wave signal may include both the ultrasonic signal and the sound signal sent by the user (for simplicity, "user sound signal" below means "the sound signal sent by the user").
  • The lower limit of the first preset frequency range can be determined by the lowest frequency of the user's voice signal, and the upper limit can be determined by the highest frequency of the ultrasonic signal. In this way, the ultrasonic signal and the user's voice signal can be received at the same time.
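The band selection described above can be sketched in code. This is a hypothetical illustration: the concrete cutoffs (85 Hz as the lowest voice frequency, 40 kHz as the top of the ultrasonic band) are assumptions for the example, not values specified in the application.

```python
# Hypothetical sketch: derive the pickup device's first preset frequency
# range from the lowest expected user-voice frequency (lower limit) and the
# highest ultrasonic frequency (upper limit), as the paragraph above states.

VOICE_MIN_HZ = 85           # assumed lowest fundamental of human speech
ULTRASOUND_MAX_HZ = 40_000  # upper end of the 20-40 kHz ultrasonic band

def first_preset_range(voice_min_hz: float = VOICE_MIN_HZ,
                       ultrasound_max_hz: float = ULTRASOUND_MAX_HZ) -> tuple:
    """Return (lower, upper) bounds of the band the pickup must cover."""
    if voice_min_hz >= ultrasound_max_hz:
        raise ValueError("voice band must lie below the ultrasonic band")
    return (voice_min_hz, ultrasound_max_hz)

lo, hi = first_preset_range()

def in_pickup_band(freq_hz: float) -> bool:
    """A component is kept by the pickup stage only if it falls in range."""
    return lo <= freq_hz <= hi
```

With this range, both a 300 Hz voice component and a 25 kHz ultrasonic echo pass the same pickup stage, which is the point of the single-device design.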
  • Step S200: the control signal processing unit determines the user's voice command information and/or gesture command information according to the sound wave signal;
  • the sound pickup device can convert the sound wave signal in the form of sound wave into the form of electrical signal, and output it to the signal processing unit for signal separation processing, so as to separate the user voice signal component and/or ultrasonic signal component from the sound wave signal.
  • A preset voice command library may be pre-integrated in the signal processing unit, with multiple pieces of preset voice command information pre-stored in it, so that when the user voice signal component is separated, it is matched against the multiple pieces of preset voice command information in the library, and the preset voice command information with the highest matching degree can be taken as the voice command information corresponding to the sound wave signal, thereby determining the user's voice command information.
  • The signal processing unit can also be pre-integrated with an ultrasonic signal analysis algorithm, so that when the ultrasonic signal component is separated, the algorithm is run to analyze the ultrasonic signal component and obtain the corresponding posture image information, and the gesture command represented by the gesture image information can be taken as the gesture command information corresponding to the sound wave signal, thereby determining the user's gesture command information.
  • the ultrasonic signal analysis algorithm may be a TOF algorithm
  • The posture command information may be a combination of one or more kinds of posture information, such as gesture information, body posture information, and mouth-shape information.
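Since the application names TOF (time-of-flight) as one possible ultrasonic analysis algorithm, a minimal sketch of the idea may help: a reflected pulse that returns after delay t places the reflecting hand at distance c*t/2, and the trend of that distance over successive pulses yields a crude motion label. The speed of sound and the threshold are assumed values; a real posture pipeline would be far richer.

```python
# Minimal TOF sketch (illustrative, not the application's algorithm).

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C (assumed)

def tof_distance_m(echo_delay_s: float) -> float:
    """Round-trip echo delay -> one-way distance to the reflector."""
    return SPEED_OF_SOUND * echo_delay_s / 2.0

def classify_motion(delays_s, threshold_m=0.02):
    """Label the gesture from the trend of echo distances."""
    d = [tof_distance_m(t) for t in delays_s]
    delta = d[-1] - d[0]
    if delta < -threshold_m:
        return "approaching"   # e.g. a hand moving toward the device
    if delta > threshold_m:
        return "receding"      # e.g. a hand moving away from it
    return "still"
```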
  • The determination results of this step fall into three situations: first, only the voice command information is determined; second, only the gesture command information is determined; third, both the voice command information and the gesture command information are determined.
  • Step S300: the control signal processing unit determines the user instruction according to the voice instruction information and/or gesture instruction information.
  • The signal processing unit can further determine the function that the user wants to trigger according to the determined voice command information or gesture command information, that is, respectively determine the voice command corresponding to the voice command information and the gesture command corresponding to the gesture command information; the signal processing unit can then output a user command for triggering that function to the main control unit or the corresponding functional component in the electronic device, so that the function the user wants to trigger is realized either through the main control unit or directly by the corresponding functional component, thereby achieving human-computer interaction.
  • When both are determined, the signal processing unit can use the function determined from either one of the voice command information and the gesture command information to confirm, correct, or supplement the function determined from the other, in one or more combinations of such operations, so as to improve the accuracy of the finally output user instruction.
  • the adjustment can be completed through simple voice or simple gesture.
  • For example, the user can continuously increase the brightness or volume, or speed up the video progress, with a raising gesture, and continuously decrease the brightness or volume, or roll back the video progress, with a lowering gesture; upon reaching a satisfactory brightness, volume or progress, the user can exit the adjustment directly through a gesture or voice command that indicates stopping, and then continue listening to music or watching the video without waiting for voice feedback.
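The continuous-adjustment flow above can be condensed into a small event loop. The event names ("raise", "lower", "stop") and the step size are illustrative assumptions, not an API defined by the application.

```python
# Illustrative continuous-adjustment loop: each recognized gesture event
# nudges the volume until a "stop" gesture/voice command exits.

def adjust_volume(events, start=50, step=5, lo=0, hi=100):
    """Fold a stream of gesture/voice events into a final volume level."""
    volume = start
    for ev in events:
        if ev == "raise":
            volume = min(hi, volume + step)   # clamp at the upper bound
        elif ev == "lower":
            volume = max(lo, volume - step)   # clamp at the lower bound
        elif ev == "stop":
            break                             # user is satisfied; exit
    return volume
```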
  • The technical solution of the present application combines voice interaction with gesture interaction, so that users can flexibly choose between the two interaction methods according to actual needs to control electronic devices. Compared with pure voice interaction, the interaction process can be more efficient and concise, with higher adjustment accuracy, greatly improving the convenience and efficiency of interaction.
  • The technical solution of the present application uses ultrasonic waves to realize posture detection, which avoids the dependence of image-based posture detection on light sources and improves the stability of posture detection; it can also use the same sound pickup device to simultaneously acquire the user's voice and posture, so no additional camera components need to be installed, which is conducive to the miniaturization and lightweight design of electronic equipment.
  • In an embodiment, the step S200 of controlling the signal processing unit to determine the user's voice command information and/or gesture command information according to the sound wave signal includes:
  • controlling the signal processing unit to determine the sound signal and/or the ultrasonic signal in the sound wave signal;
  • The electrical-signal frequency intervals corresponding to the user voice signal component and the ultrasonic signal component differ from each other, so a hardware filtering circuit, or a software filtering program and algorithm, can be integrated in the signal processing unit.
  • After the sound wave signal in the form of an electrical signal is received, it is filtered to extract the components in the frequency range corresponding to the sound signal and the components in the frequency range corresponding to the ultrasonic signal, so as to obtain the user voice signal component and the ultrasonic signal component.
  • The user voice signal component is the sound signal referred to in this step, and the ultrasonic signal component is the ultrasonic signal referred to in this step.
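The frequency-based separation described above can be illustrated with the Goertzel algorithm, a standard single-bin DFT often used to measure energy at one frequency cheaply. The sample rate, tone frequencies and amplitudes below are assumptions for the demonstration; the application does not specify the filtering implementation at this level of detail.

```python
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Relative power of `samples` at `target_hz` (Goertzel algorithm)."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)   # nearest DFT bin
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# Synthetic "sound wave signal": a 1 kHz voice-band tone plus a 25 kHz
# ultrasonic tone, sampled at 96 kHz (fast enough for the 20-40 kHz band).
RATE = 96_000
N = 960
mixed = [math.sin(2 * math.pi * 1_000 * i / RATE)
         + 0.5 * math.sin(2 * math.pi * 25_000 * i / RATE)
         for i in range(N)]

voice_power = goertzel_power(mixed, RATE, 1_000)    # voice-band component
ultra_power = goertzel_power(mixed, RATE, 25_000)   # ultrasonic component
silent_power = goertzel_power(mixed, RATE, 10_000)  # empty band, near zero
```

The two components show up in disjoint bands, which is exactly what lets one pickup device feed both the voice and the gesture pipelines.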
  • Step S210: the control signal processing unit determines the user's voice command information according to the sound signal, and/or determines the user's gesture command information according to the ultrasonic signal.
  • The signal processing unit can match the determined sound signal against the locally pre-integrated preset voice command library, taking the preset voice command with the highest matching degree as the voice command information; it can also use the locally integrated TOF algorithm to analyze the ultrasonic signal and determine the gesture instruction information, thereby realizing local recognition of voice and gesture instruction information.
  • When local recognition cannot determine the voice command information or gesture command information, the signal processing unit can also send the voice signal or ultrasonic signal to a server through the wireless communication module in the electronic device, and receive the determination result output by the server through the wireless communication module, thereby realizing online recognition of voice and gesture instruction information.
  • Such a setting makes the technical solution of the present application applicable both when the electronic device is offline and when it is online, which greatly improves convenience and breadth of application.
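The local-first, online-fallback scheme above can be sketched as a small dispatch function. `local_match` and `server_recognize` are hypothetical stand-ins for the library matching and the wireless-module round trip; neither name comes from the application.

```python
# Sketch of local-first recognition with server fallback (assumed API).

def recognize(signal, local_match, server_recognize):
    """Try local recognition; fall back to the server when it fails."""
    result = local_match(signal)
    if result is not None:
        return result, "local"
    # Local library could not determine the command: go online.
    result = server_recognize(signal)   # requires network connectivity
    return result, "online"
```

A caller would supply the two recognizers, e.g. `recognize(wave, library.match, cloud.recognize)`; offline devices simply never reach the fallback when the local library succeeds.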
  • In an embodiment, step S100 of controlling the ultrasonic sound generating device to emit ultrasonic signals is specifically: controlling the ultrasonic sound generating device to emit, at intervals, ultrasonic signals having the first characteristic parameter.
  • For example, if the ultrasonic signal of the first preset frequency is recorded as "0" and the ultrasonic signal of the second preset frequency is recorded as "1", then by alternately emitting ultrasonic signals of the first preset frequency and the second preset frequency at the same preset time interval, an ultrasonic signal whose first characteristic parameter is "010101010" can be realized.
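The "010101010" pattern is essentially a two-frequency (FSK-style) schedule: each bit selects one of two preset ultrasonic frequencies emitted for a fixed interval. The concrete frequencies and slot duration below are assumed values for illustration only.

```python
# Illustrative encoding of the first characteristic parameter as a
# (frequency, duration) emission schedule. All constants are assumptions.

F0_HZ = 22_000   # assumed first preset frequency, encodes "0"
F1_HZ = 26_000   # assumed second preset frequency, encodes "1"
SLOT_S = 0.01    # assumed duration of each emission interval

def encode_parameter(bits: str):
    """Map a bit pattern to the emission schedule the sounder would play."""
    table = {"0": F0_HZ, "1": F1_HZ}
    return [(table[b], SLOT_S) for b in bits]

def decode_parameter(schedule):
    """Inverse mapping, as a receiver-side consistency check."""
    table = {F0_HZ: "0", F1_HZ: "1"}
    return "".join(table[f] for f, _ in schedule)

schedule = encode_parameter("010101010")
```

A receiver that only accepts echoes matching this schedule can reject ambient ultrasonic sources, which is the interference-rejection idea the next paragraphs develop.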
  • Those skilled in the art can also encode and encrypt the parameter group formed by the multiple first characteristic parameters transmitted in sequence, by controlling the first characteristic parameters of the ultrasonic signals sent at each interval, so as to make the ultrasonic waves emitted by the technical solution of the present application more distinctive and reduce the interference of environmental ultrasonic signals with posture detection.
  • In step S210, determining the user's gesture instruction information according to the ultrasonic signal is specifically: after determining the ultrasonic signal, further separating out the ultrasonic signal having the first characteristic parameter, and determining the posture instruction information according to it. This effectively eliminates the interference of environmental ultrasonic signals with posture detection and is beneficial to improving the accuracy of posture detection.
  • Before step S100, a further step may be included: controlling the sound pickup device to receive the environmental sound wave signal, and configuring the first characteristic parameter according to the environmental sound wave signal. This step enables the electronic device to determine the environmental ultrasonic signal in the environmental sound wave signal before emitting ultrasonic waves and to configure the first characteristic parameter accordingly, so that the first characteristic parameter of the ultrasonic signal to be emitted can avoid interference from the current environmental ultrasonic signal to the greatest extent.
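One way to realize "configure the first characteristic parameter according to the environment" is to choose signalling frequencies far from any ultrasonic tone detected in the ambient scan. The candidate grid, detected tones and margins below are illustrative assumptions, not values from the application.

```python
# Hedged sketch: pick signalling frequencies with maximum clearance from
# detected ambient ultrasonic tones (greedy selection, assumed parameters).

CANDIDATES_HZ = list(range(20_000, 40_001, 1_000))  # assumed candidate grid

def pick_signalling_freqs(ambient_hz, n=2, min_sep_hz=2_000):
    """Choose n candidate frequencies far from detected ambient tones."""
    def clearance(f):
        return min((abs(f - a) for a in ambient_hz), default=float("inf"))
    chosen = []
    # Greedily take the clearest candidates, keeping them min_sep_hz apart.
    for f in sorted(CANDIDATES_HZ, key=clearance, reverse=True):
        if all(abs(f - c) >= min_sep_hz for c in chosen):
            chosen.append(f)
        if len(chosen) == n:
            break
    return sorted(chosen)

# Suppose the ambient scan found interferers at 25 kHz and 38 kHz:
freqs = pick_signalling_freqs([25_000, 38_000])
```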
  • Step S300, in which the control signal processing unit determines the user instruction according to the voice instruction information and/or gesture instruction information, is specifically:
  • when the signal processing unit determines the voice command information but not the gesture command information, taking the voice command corresponding to the voice command information as the user command;
  • when the signal processing unit determines the gesture command information but not the voice command information, taking the gesture command corresponding to the gesture command information as the user command; and
  • when the signal processing unit determines both the voice command information and the gesture command information, taking, according to the command priorities of the corresponding voice command and gesture command, the one with the higher priority as the user command.
  • In the technical solution of the present application, when the signal processing unit determines only the voice command information, that is, when the user issues only a voice command, the voice command corresponding to the voice command information is used as the user command; when the signal processing unit determines only the gesture command information, that is, when the user issues only a gesture command, the gesture command corresponding to the gesture command information is used as the user command, so as to meet the user's interaction needs in normal situations.
  • When the signal processing unit determines both the voice command information and the gesture command information, that is, when the user issues both a voice command and a gesture command, the command with the higher instruction priority is taken as the user command.
  • In an embodiment, the priority of the voice instruction is higher than that of the gesture instruction.
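The three-case rule, including the embodiment in which voice outranks gesture, can be condensed into a small arbitration function. The numeric priority values are merely an illustrative encoding of "voice higher than gesture".

```python
# Arbitration sketch for step S300 (priorities are illustrative values).

PRIORITY = {"voice": 2, "gesture": 1}  # voice outranks gesture here

def resolve_user_instruction(voice_cmd=None, gesture_cmd=None):
    """Return the command that becomes the user instruction, or None."""
    if voice_cmd is not None and gesture_cmd is None:
        return voice_cmd                     # voice only
    if voice_cmd is None and gesture_cmd is not None:
        return gesture_cmd                   # gesture only
    if voice_cmd is not None and gesture_cmd is not None:
        # Both recognized: the higher-priority modality wins.
        return voice_cmd if PRIORITY["voice"] >= PRIORITY["gesture"] else gesture_cmd
    return None                              # nothing recognized
```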
  • the present application also proposes a control device for an interactive module, where the interactive module includes an ultrasonic sound generating device, a sound pickup device and a signal processing unit.
  • control device of the interactive module includes:
  • The control program of the interactive module is stored in the memory 101 and can run on the processor 102; when the processor 102 executes the control program of the interactive module, the above method for controlling the interactive module is realized.
  • The memory 101 can be a high-speed RAM or a non-volatile memory such as disk storage, and can optionally also be a storage device independent of the aforementioned control device; the processor 102 can be a CPU.
  • The memory 101 and the processor 102 are connected by a communication bus 103, which may be a UART bus or an I2C bus. It can be understood that other related programs can also be provided in the control device to drive other functional units and modules in the interaction module.
  • the application also proposes an interactive module.
  • the interaction module includes:
  • an ultrasonic sound generating device 210 for emitting ultrasonic signals;
  • a sound pickup device 220 for receiving the sound wave signal; and,
  • the signal processing unit 230 is configured to determine the user's voice instruction information and/or gesture instruction information according to the sound wave signal, and determine the user instruction according to the voice instruction information and/or gesture instruction information.
  • The ultrasonic sound generating device 210 can be an ultrasonic sound generating chip 211; it can be provided with a sounding diaphragm and an electric drive structure, and the electric field applied to the two sides of the sounding diaphragm can be controlled through the electric drive structure so as to drive the sounding diaphragm to vibrate and emit ultrasonic signals.
  • The sound pickup device 220 can be a sound pickup MEMS chip 221; it can be provided with a sound receiving diaphragm that senses the sound wave signal in the air, and a corresponding electrical signal is output based on the change of the electric field on the two sides of the sound receiving diaphragm, so as to receive the sound wave signal.
  • The signal processing unit 230 can consist of an ASIC chip 232 and a speech processing chip 231, wherein the ASIC chip 232 determines, from the sound wave signal in the form of an electrical signal, the sound signal and the ultrasonic signal and outputs them to the speech processing chip 231, so that the speech processing chip 231 can determine the user instruction according to the sound signal and the ultrasonic signal.
  • the interactive module also includes:
  • the casing 240 , the ultrasonic sound generating device 210 , the sound pickup device 220 and the signal processing unit 230 are accommodated in the casing 240 .
  • The casing 240 can be enclosed by the shell 241 and the substrate 242.
  • the housing 240 may be provided with an accommodating cavity, and the ultrasonic sound generating device 210 , the sound pickup device 220 and the signal processing unit 230 may all be accommodated in the accommodating cavity of the housing 240 .
  • The sound pickup MEMS chip 221 and the ultrasonic sound generating chip 211 can both be connected to the substrate 242 by support members, and the substrate 242 can be provided with a sound pickup hole 243 and a sound outlet hole 244 corresponding to the sound pickup MEMS chip 221 and the ultrasonic sound generating chip 211, respectively, so that a sound pickup space is formed between the sound pickup hole 243 and the sound pickup MEMS chip 221 and a sound outlet space is formed between the ultrasonic sound generating chip 211 and the sound outlet hole 244, which helps enhance the sound sensing capability of the sound pickup MEMS chip 221 and the sound generating capability of the ultrasonic sound generating chip 211. In the embodiment shown in the figure, chips can be stacked on the side of the voice processing chip 231 away from the substrate 242; with such an arrangement, the volume of the interactive module can be reduced while enhancing the sound output and sound pickup capabilities of the interactive module.
  • the voice processing chip 231 can also be embedded in the substrate 242 to save space, reduce interference between devices, and improve the anti-interference ability of the entire device.
  • the present application also proposes an electronic device.
  • The electronic device includes an interactive module and a control device 100 of the interactive module, and therefore adopts all the technical solutions of the above embodiments and at least has all the beneficial effects they bring, which will not be repeated here.
  • the electronic device can be a wearable device; the control device 100 of the interactive module can be electrically connected with the interactive module, and the control device 100 of the interactive module can be housed in the shell 241 of the electronic device.

Abstract

An electronic device, an interaction module, and a control method and control apparatus therefor. The control method for an interaction module comprises: controlling an ultrasound generation apparatus to transmit an ultrasonic signal, and controlling a pickup apparatus to receive an acoustic wave signal (S100); controlling a signal processing unit to determine, according to the acoustic wave signal, voice instruction information and/or posture instruction information of a user (S200); and controlling the signal processing unit to determine a user instruction according to the voice instruction information and/or the posture instruction information (S300). The method improves the human-computer interaction efficiency.

Description

电子设备、交互模块及其控制方法和控制装置Electronic equipment, interactive module and its control method and control device
本申请要求于2021年11月15日提交中国专利局、申请号为202111351728.5、发明名称为“电子设备、交互模块及其控制方法和控制装置”的中国专利申请的优先权,其全部内容通过引用结合在申请中。This application claims the priority of the Chinese patent application with the application number 202111351728.5 and the title of the invention "electronic equipment, interactive module and its control method and control device" filed with the China Patent Office on November 15, 2021, the entire contents of which are incorporated by reference incorporated in the application.
技术领域technical field
本申请涉及交互技术领域,特别涉及一种电子设备、交互模块及其控制方法和控制装置。The present application relates to the field of interaction technology, and in particular to an electronic device, an interaction module, a control method and a control device thereof.
背景技术Background technique
随着AI技术的发展,语音控制和交互的应用也越来越多,技术上也趋于更成熟。With the development of AI technology, there are more and more applications of voice control and interaction, and the technology is becoming more mature.
技术问题technical problem
虽然语音交互使用较为方便,但在一些相对复杂控制或频繁控制的场景下,语音交互的效率却相对较低。Although voice interaction is more convenient to use, in some relatively complex control or frequent control scenarios, the efficiency of voice interaction is relatively low.
技术解决方案technical solution
本申请提出的交互模块的控制方法,所述交互模块包括超声发声装置、拾音装置和信号处理单元,其中,所述交互模块的控制方法包括:The control method of the interactive module proposed in the present application, the interactive module includes an ultrasonic sound generating device, a sound pickup device and a signal processing unit, wherein the control method of the interactive module includes:
控制超声发声装置发射超声波信号,并控制拾音装置接收声波信号;Control the ultrasonic sound generating device to emit ultrasonic signals, and control the pickup device to receive sound wave signals;
控制信号处理单元根据所述声波信号,确定用户的语音指令信息和/或姿势指令信息;The control signal processing unit determines the user's voice command information and/or gesture command information according to the sound wave signal;
控制信号处理单元根据所述语音指令信息和/或所述姿势指令信息,确定用户指令。The control signal processing unit determines a user instruction according to the voice instruction information and/or the gesture instruction information.
在一实施例中,所述控制信号处理单元根据所述声波信号,确定用户的语音指令信息和/或姿势指令信息的步骤,包括:In one embodiment, the step of determining the user's voice command information and/or gesture command information according to the sound wave signal by the control signal processing unit includes:
控制信号处理单元确定所述声波信号中的声音信号和/或超声波信号;controlling the signal processing unit to determine the sound signal and/or the ultrasonic signal in the sound wave signal;
控制信号处理单元根据所述声音信号确定用户的语音指令信息,和/或,根据所述超声波信号确定用户姿势指令信息。controlling the signal processing unit to determine the user's voice instruction information according to the sound signal, and/or to determine the user's gesture instruction information according to the ultrasonic signal.
在一实施例中,所述控制超声发声装置发射超声波信号的步骤,具体为:In one embodiment, the step of controlling the ultrasonic sound generating device to emit ultrasonic signals is specifically:
控制超声发声装置间隔发射具有第一特征参数的超声波信号。The ultrasonic sound generating device is controlled to emit ultrasonic signals having the first characteristic parameter at intervals.
在一实施例中,所述根据所述超声波信号确定用户姿势指令信息,具体为:In an embodiment, the determining the user gesture instruction information according to the ultrasonic signal is specifically:
确定所述超声波信号中具有第一特征参数的超声波信号,根据具有第一特征参数的超声波信号确定用户的姿势指令信息。Determining an ultrasonic signal with a first characteristic parameter among the ultrasonic signals, and determining user gesture instruction information according to the ultrasonic signal with the first characteristic parameter.
在一实施例中,所述控制超声发声装置发射超声波信号,并控制拾音装置接收声波信号的步骤之前,所述交互模块的控制方法还包括:In one embodiment, before the step of controlling the ultrasonic sound generating device to emit ultrasonic signals and controlling the sound pickup device to receive the sound wave signal, the control method of the interaction module further includes:
控制拾音装置接收当前环境中的声波信号;Control the pickup device to receive the sound wave signal in the current environment;
控制信号处理单元根据当前环境中的声波信号,确定当前环境中的超声波信号,并根据当前环境中的超声波信号配置所述第一特征参数。controlling the signal processing unit to determine the ultrasonic signal in the current environment according to the sound wave signal in the current environment, and to configure the first characteristic parameter according to the ultrasonic signal in the current environment.
在一实施例中,所述控制信号处理单元根据所述语音指令信息和/或所述姿势指令信息,确定用户指令的步骤,具体为:In an embodiment, the step of determining the user instruction by the control signal processing unit according to the voice instruction information and/or the gesture instruction information is specifically:
当信号处理单元确定出语音指令信息,未确定出姿势指令信息时,将所述语音指令信息对应的语音指令作为用户指令;When the signal processing unit determines the voice command information but not the gesture command information, the voice command corresponding to the voice command information is used as the user command;
当信号处理单元未确定出语音指令信息,确定出姿势指令信息时,将所述姿势指令信息对应的姿势指令作为用户指令;When the signal processing unit does not determine the voice command information but determines the gesture command information, the gesture command corresponding to the gesture command information is used as the user command;
当信号处理单元确定出语音指令信息和姿势指令信息时,根据所述语音指令信息对应的语音指令和所述姿势指令信息对应的姿势指令二者的指令优先级,将指令优先级高的一者作为用户指令。when the signal processing unit determines both the voice instruction information and the gesture instruction information, taking, according to the instruction priorities of the voice instruction corresponding to the voice instruction information and the gesture instruction corresponding to the gesture instruction information, the one with the higher instruction priority as the user instruction.
本申请还提出一种交互模块的控制装置,所述交互模块包括超声发声装置、拾音装置和信号处理单元,所述交互模块的控制装置包括:The present application further proposes a control apparatus for an interaction module, where the interaction module includes an ultrasonic sound generating device, a sound pickup device, and a signal processing unit, and the control apparatus of the interaction module includes:
存储器;memory;
处理器;以及processor; and
存储在存储器上并可在处理器上运行的交互模块的控制程序,所述处理器执行所述交互模块的控制程序时实现如上述的交互模块的控制方法。a control program of the interaction module, stored in the memory and runnable on the processor, where the processor, when executing the control program of the interaction module, implements the control method of the interaction module described above.
本申请还提出一种交互模块,所述交互模块包括:The present application also proposes an interactive module, which includes:
超声发声装置,用于发出超声波信号;an ultrasonic sound generating device, configured to emit an ultrasonic signal;
拾音装置,用于接收声波信号;以及,a sound pickup device, configured to receive a sound wave signal; and,
信号处理单元,用于根据声波信号,确定用户的语音指令信息和/或姿势指令信息,并根据所述语音指令信息和/或所述姿势指令信息,确定用户指令。a signal processing unit, configured to determine the user's voice instruction information and/or gesture instruction information according to the sound wave signal, and to determine a user instruction according to the voice instruction information and/or the gesture instruction information.
在一实施例中,所述交互模块还包括:In one embodiment, the interaction module also includes:
壳体,所述超声发声装置、所述拾音装置和所述信号处理单元容置于所述壳体中。a casing, where the ultrasonic sound generating device, the sound pickup device, and the signal processing unit are accommodated in the casing.
本申请还提出一种电子设备,所述电子设备包括:The present application also proposes an electronic device, the electronic device comprising:
如上述的交互模块;以及,interactive modules as described above; and,
如上述的交互模块的控制装置,所述交互模块的控制装置与所述交互模块电连接。the control apparatus of the interaction module as described above, where the control apparatus of the interaction module is electrically connected to the interaction module.
有益效果Beneficial Effects
本申请技术方案通过控制超声发声装置发射超声波信号,并控制拾音装置接收声波信号;并控制信号处理单元根据所述声波信号,确定用户的语音指令信息和/或姿势指令信息;以及,控制信号处理单元根据所述语音指令信息和/或所述姿势指令信息,确定用户指令。本申请技术方案通过将语音交互与姿势交互相结合,以使用户可根据实际需要灵活选择两种交互方法来实现对电子设备的控制,相较于单纯的语音交互而言,交互过程可更为简洁,调节精度更高,极大的提高了交互的便利性和效率。此外,本申请技术方案通过采用超声波来实现姿势检测,不仅可避免图像类姿势检测对于光源的依赖,有利于提高姿势检测稳定性,且还可利用同一拾音装置来同时获取用户语音和姿势,无需安装额外的摄像组件,有利于电子设备的小型化和轻型化设计。In the technical solution of the present application, the ultrasonic sound generating device is controlled to emit an ultrasonic signal while the sound pickup device is controlled to receive a sound wave signal; the signal processing unit is controlled to determine the user's voice instruction information and/or gesture instruction information according to the sound wave signal; and the signal processing unit is controlled to determine a user instruction according to the voice instruction information and/or the gesture instruction information. By combining voice interaction with gesture interaction, the technical solution of the present application allows the user to flexibly choose between the two interaction methods as actually needed to control the electronic device. Compared with pure voice interaction, the interaction process can be more concise and the adjustment more precise, greatly improving the convenience and efficiency of interaction. In addition, by using ultrasound for gesture detection, the technical solution of the present application not only avoids the dependence of image-based gesture detection on a light source, which helps improve the stability of gesture detection, but also allows the same sound pickup device to acquire the user's voice and gestures at the same time, so no additional camera component needs to be installed, which is conducive to the miniaturized and lightweight design of electronic devices.
附图说明Description of Drawings
为了更清楚地说明本申请实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图示出的结构获得其他的附图。In order to explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the accompanying drawings in the following description are merely some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings according to the structures shown in these drawings without creative effort.
图1为本申请交互模块的控制方法一实施例的步骤流程示意图;Fig. 1 is a schematic flow chart of the steps of an embodiment of the control method of the interactive module of the present application;
图2为本申请交互模块的控制装置一实施例硬件运行环境的结构示意图;FIG. 2 is a schematic structural diagram of the hardware operating environment of an embodiment of the control device of the interactive module of the present application;
图3为本申请交互模块一实施例的结构示意图;FIG. 3 is a schematic structural diagram of an embodiment of an interactive module of the present application;
图4为本申请交互模块另一实施例的结构示意图;FIG. 4 is a schematic structural diagram of another embodiment of the interactive module of the present application;
图5为本申请电子设备一实施例的模块示意图。FIG. 5 is a schematic module diagram of an embodiment of an electronic device of the present application.
本发明的实施方式Embodiments of the Present Invention
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请的一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。The following will clearly and completely describe the technical solutions in the embodiments of the present application with reference to the accompanying drawings in the embodiments of the present application. Obviously, the described embodiments are only part of the embodiments of the present application, not all of them. Based on the embodiments in this application, all other embodiments obtained by persons of ordinary skill in the art without creative efforts fall within the protection scope of this application.
另外,在本申请中如涉及“第一”、“第二”等的描述仅用于描述目的,而不能理解为指示或暗示其相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括至少一个该特征。另外,各个实施例之间的技术方案可以相互结合,但是必须是以本领域普通技术人员能够实现为基础,当技术方案的结合出现相互矛盾或无法实现时应当认为这种技术方案的结合不存在,也不在本申请要求的保护范围之内。In addition, descriptions such as "first" and "second" in this application are used for description purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the various embodiments can be combined with each other, provided that the combination can be realized by those of ordinary skill in the art; when a combination of technical solutions is contradictory or cannot be realized, such a combination shall be considered not to exist and shall not fall within the protection scope claimed in the present application.
本申请提出一种交互模块的控制方法。The present application proposes a method for controlling an interactive module.
目前采用语音交互的电子设备,例如小爱同学或者手机,均为问答式交互,即电子设备在接收用户的语音指令后,执行相应的功能并以语音的方式反馈用户该功能已执行完毕。如此,在需要复杂控制或频繁控制的场景下,需要用户发出复杂的语音指令或者频繁的发出语音指令,以使得交互控制的效率较低且十分不方便。例如:当用户需要对音乐音量或者所播放视频的亮度、进度、声音等视频参数进行调节时,用户只能通过反复输出用于调节的语音指令,且只能根据当前调节完成后的效果来再次调节,整体调节过程十分冗长且复杂,且问答式交互还无法应用于开会或者旁边有人休息等用户不方便说话的场景。At present, electronic devices that use voice interaction, such as the XiaoAI smart assistant or mobile phones, all use question-and-answer interaction: after receiving the user's voice instruction, the electronic device executes the corresponding function and feeds back to the user by voice that the function has been executed. As a result, in scenarios requiring complex or frequent control, the user has to issue complex voice instructions or issue voice instructions frequently, which makes interactive control inefficient and very inconvenient. For example, when the user needs to adjust the music volume or video parameters such as the brightness, progress, and sound of a video being played, the user can only repeatedly issue voice instructions for adjustment, and can only adjust again based on the effect after the current adjustment is completed, so the overall adjustment process is lengthy and complicated. Moreover, question-and-answer interaction cannot be applied to scenarios where it is inconvenient for the user to speak, such as during a meeting or when someone is resting nearby.
本申请交互模块的控制方法的执行主体可为交互模块的控制装置。交互模块中可集成有超声发声装置、拾音装置和信号处理单元。The subject of execution of the control method of the interaction module of the present application may be a control device of the interaction module. An ultrasonic sound generating device, a sound pickup device and a signal processing unit can be integrated in the interactive module.
参照图1,在本申请一实施例中,交互模块的控制方法包括:Referring to Fig. 1, in an embodiment of the present application, the control method of the interactive module includes:
步骤S100、控制超声发声装置发射超声波信号,并控制拾音装置接收声波信号;Step S100, controlling the ultrasonic sound generating device to emit ultrasonic signals, and controlling the sound pickup device to receive the sound wave signals;
交互模块的控制装置可在被触发进入交互模式时,控制交互模块工作。具体为:控制交互模块中的超声波发声装置发射频率处于20KHz~40KHz范围内的声波信号,并同时控制拾音装置接收频率处于第一预设频率范围内的声波信号。可以理解的是,处于20KHz~40KHz范围内的声波信号即为超声波信号,而超声波信号和用户所发出的声音信号(为简化表述以下用“用户声音信号”表示“用户所发出的声音信号”)同属声波信号的一种,因而在本实施例中,第一预设频率范围的下限可由用户声音信号的最低频率来确定,第一预设频率范围的上限可由超声波信号的最高频率来确定。如此,以同时接收超声波和用户声音信号。When triggered to enter the interaction mode, the control device of the interaction module can control the interaction module to work, specifically: controlling the ultrasonic sound generating device in the interaction module to emit a sound wave signal with a frequency in the range of 20 kHz to 40 kHz, and at the same time controlling the sound pickup device to receive sound wave signals with frequencies within a first preset frequency range. It can be understood that a sound wave signal in the 20 kHz to 40 kHz range is an ultrasonic signal, and both the ultrasonic signal and the sound signal made by the user (hereinafter "user sound signal" for short) are kinds of sound wave signals. Therefore, in this embodiment, the lower limit of the first preset frequency range can be determined by the lowest frequency of the user sound signal, and the upper limit of the first preset frequency range can be determined by the highest frequency of the ultrasonic signal, so that the ultrasound and the user sound signal can be received at the same time.
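The frequency bookkeeping above can be pictured in a few lines of Python. This is purely an illustrative sketch: the application fixes only the 20 kHz to 40 kHz emission band, so the 85 Hz lower voice bound used here is an assumed placeholder, not a value from the application.

```python
# Illustrative sketch: derive the pickup device's "first preset frequency
# range" so that one microphone captures both user speech and the emitted
# ultrasound. VOICE_MIN_HZ is an assumption; 40 kHz is the top of the
# stated 20-40 kHz emission band.

VOICE_MIN_HZ = 85           # assumed lowest frequency of user speech
ULTRASOUND_MAX_HZ = 40_000  # upper bound of the emitted ultrasonic band

def first_preset_range(voice_min_hz=VOICE_MIN_HZ, ultra_max_hz=ULTRASOUND_MAX_HZ):
    """Return (low, high) bounds of the pickup band in Hz."""
    if voice_min_hz >= ultra_max_hz:
        raise ValueError("voice band must lie below the ultrasonic band")
    return (voice_min_hz, ultra_max_hz)
```

With these placeholder bounds, `first_preset_range()` returns `(85, 40000)`: a single receiving band covering both speech and the emitted ultrasound.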
步骤S200、控制信号处理单元根据声波信号,确定用户的语音指令信息和/或姿势指令信息;Step S200, controlling the signal processing unit to determine the user's voice instruction information and/or gesture instruction information according to the sound wave signal;
拾音装置可在将声波形式的声波信号转为电信号的形式后,输出至信号处理单元进行信号分离处理,以从声波信号中分离出其中的用户声音信号分量和/或超声波信号分量。信号处理单元中可预集成有预设语音指令库,预设语音指令库中可预存有多条预设语音指令信息,以在分离出用户声音信号分量时,将分离出的用户声音信号分量与语音指令库中的多条预设语音指令信息进行匹配,并可将匹配度最高的预设语音指令信息作为与声波信号对应的语音指令信息,从而实现确定用户的语音指令信息。After converting the sound wave signal from its acoustic form into an electrical signal, the sound pickup device can output it to the signal processing unit for signal separation processing, so as to separate the user sound signal component and/or the ultrasonic signal component from the sound wave signal. A preset voice instruction library may be pre-integrated in the signal processing unit, in which multiple pieces of preset voice instruction information are pre-stored, so that when the user sound signal component is separated out, it can be matched against the multiple pieces of preset voice instruction information in the library, and the preset voice instruction information with the highest matching degree can be taken as the voice instruction information corresponding to the sound wave signal, thereby determining the user's voice instruction information.
An ultrasonic signal analysis algorithm can also be pre-integrated in the signal processing unit, so that when the ultrasonic signal component is separated out, the algorithm is run to analyze it and obtain the gesture image information corresponding to the ultrasonic signal component and the gesture instruction represented by that gesture image information; the gesture instruction can then be taken as the gesture instruction information corresponding to the sound wave signal, thereby determining the user's gesture instruction information. The ultrasonic signal analysis algorithm may be a TOF (time-of-flight) algorithm, and the gesture instruction information may be one or a combination of hand gesture information, body posture information, mouth-shape information, and the like. It can be understood that, depending on the user's actual interaction, the determination result of this step falls into three cases: first, only the voice instruction information is determined; second, only the gesture instruction information is determined; third, both the voice instruction information and the gesture instruction information are determined.
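The library-matching step described above can be sketched as follows. This is a hypothetical illustration: the command phrases, the use of plain string similarity on recognized text, and the 0.6 threshold are all assumptions, not details from the application, which leaves the matching method unspecified.

```python
import difflib

# Assumed command library; a real device would store acoustic templates
# or phrases appropriate to its product.
COMMAND_LIBRARY = ["volume up", "volume down", "pause video", "resume video"]

def match_voice_command(recognized_text, library=COMMAND_LIBRARY, threshold=0.6):
    """Return the library entry with the highest matching degree,
    or None if nothing matches well enough."""
    best, best_score = None, 0.0
    for cmd in library:
        score = difflib.SequenceMatcher(None, recognized_text.lower(), cmd).ratio()
        if score > best_score:
            best, best_score = cmd, score
    return best if best_score >= threshold else None
```

For example, a slightly misrecognized "volume upp" still maps to "volume up", while unrelated input falls below the threshold and yields `None`.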
步骤S300、控制信号处理单元根据语音指令信息和/或姿势指令信息,确定用户指令。Step S300, controlling the signal processing unit to determine the user instruction according to the voice instruction information and/or the gesture instruction information.
当只确定出语音指令信息或姿势指令信息时,信号处理单元可根据确定出的语音指令信息或姿势指令信息,进一步确定用户所想要触发的功能,即分别确定语音指令信息对应的语音指令以及姿势指令信息对应的姿势指令,此时信号处理单元可根据确定结果,输出一用于触发该功能的用户指令至电子设备中的主控单元或者相应的功能组件,以通过主控单元或者直接触发相应的功能组件来实现用户所想要触发的功能,从而实现人机交互。当同时确定出语音指令信息和姿势指令信息时,信号处理单元可利用语音指令信息和姿势指令信息二者中任意一者所确定的用户所想要触发的功能,对另一者所确定的用户所想要触发的功能,进行确认、修正或者补充等操作中的一种或多种组合,以提高最终输出的用户指令的精确度。When only the voice instruction information or the gesture instruction information is determined, the signal processing unit can further determine, according to the determined information, the function the user wants to trigger, that is, determine the voice instruction corresponding to the voice instruction information or the gesture instruction corresponding to the gesture instruction information. The signal processing unit can then output, according to the determination result, a user instruction for triggering that function to the main control unit of the electronic device or to the corresponding functional component, so that the desired function is realized through the main control unit or by directly triggering the corresponding functional component, thereby realizing human-computer interaction. When both the voice instruction information and the gesture instruction information are determined, the signal processing unit can use the function determined from either of the two to confirm, correct, and/or supplement the function determined from the other, so as to improve the accuracy of the finally output user instruction.
如此,当用户处于需要简单控制的场景时,可通过简单的语音或者简单的姿势来完成调节。当用户处于需要对音乐音量或者所播放视频的视频参数进行调节等需要复杂控制或频繁控制的场景时,用户可通过上扬或者上抬等姿势来持续调高亮度或者音量,或者加快视频进度,以及可通过下扬或者下放等姿势来持续降低亮度或者音量,或者回退视频进度,并可在达到满意亮度、音量或者进度时,直接通过表征停止调节的姿势或者语音来退出调节,以继续听音乐或者观看视频,无需等待语音反馈。当用户处于不方便说话的场景时,用户可通过姿势来触发电子设备的相应功能。本申请技术方案通过将语音交互与姿势交互相结合,以使用户可根据实际需要灵活选择两种交互方法来实现对电子设备的控制,相较于单纯的语音交互而言,交互过程可更为简洁,调节精度更高,极大的提高了交互的便利性和效率。In this way, when the user is in a scenario requiring only simple control, the adjustment can be completed with a simple voice command or a simple gesture. When the user is in a scenario requiring complex or frequent control, such as adjusting the music volume or the video parameters of a video being played, the user can continuously increase the brightness or volume or speed up the video progress with an upward gesture, continuously decrease the brightness or volume or rewind the video with a downward gesture, and, upon reaching a satisfactory brightness, volume, or progress, directly exit the adjustment with a gesture or voice command indicating that adjustment should stop, so as to continue listening to music or watching the video without waiting for voice feedback. When the user is in a scenario where speaking is inconvenient, the user can trigger the corresponding function of the electronic device through gestures. By combining voice interaction with gesture interaction, the technical solution of the present application allows the user to flexibly choose between the two interaction methods as actually needed to control the electronic device. Compared with pure voice interaction, the interaction process can be more concise and the adjustment more precise, greatly improving the convenience and efficiency of interaction.
In addition, by using ultrasound for gesture detection, the technical solution of the present application not only avoids the dependence of image-based gesture detection on a light source, which helps improve the stability of gesture detection, but also allows the same sound pickup device to acquire the user's voice and gestures at the same time, so no additional camera component needs to be installed, which is conducive to the miniaturized and lightweight design of electronic devices.
参照图1,在本申请一实施例中,控制信号处理单元根据声波信号,确定用户的语音指令信息和/或姿势指令信息的步骤S200,包括:Referring to FIG. 1, in an embodiment of the present application, the step S200 of controlling the signal processing unit to determine the user's voice command information and/or gesture command information according to the sound wave signal includes:
步骤S200、控制信号处理单元确定声波信号中的声音信号和/或超声波信号;Step S200, controlling the signal processing unit to determine the sound signal and/or the ultrasonic signal in the sound wave signal;
由于声波信号转换为电信号后,其中用户声音信号分量和超声波信号分量各自所对应的电信号频率区间互不相同,因而信号处理单元中可集成有用于滤波的硬件电路或软件程序和算法,以在接收到电信号的声波信号时对其进行滤波处理,以将声波信号中处于声音信号对应频率区间的分量和处于超声波信号对应频率区间的分量分别滤出,以得到用户声音信号分量和超声波信号分量。需要说明的是,用户声音信号分量即为本步骤所记载的声音信号,超声波信号分量即为本步骤所记载的超声波信号。Since, after the sound wave signal is converted into an electrical signal, the user sound signal component and the ultrasonic signal component correspond to different electrical-signal frequency ranges, a hardware circuit or a software program and algorithm for filtering can be integrated in the signal processing unit, so that when the electrical form of the sound wave signal is received, it is filtered to separately extract the component in the frequency range corresponding to the sound signal and the component in the frequency range corresponding to the ultrasonic signal, thereby obtaining the user sound signal component and the ultrasonic signal component. It should be noted that the user sound signal component is the sound signal described in this step, and the ultrasonic signal component is the ultrasonic signal described in this step.
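As a toy illustration of this band splitting (not the application's actual filter design, which is unspecified), a moving-average low-pass filter can stand in for the voice-band filter, with the residual standing in for the ultrasonic-band component:

```python
# Sketch: split a digitized signal into a low-band (voice) estimate and a
# high-band (ultrasonic) residual. The window length is an arbitrary
# assumption for illustration.

def moving_average(samples, window=8):
    """Causal moving average: a crude low-pass filter."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        out.append(sum(samples[lo:i + 1]) / (i + 1 - lo))
    return out

def split_bands(samples, window=8):
    low = moving_average(samples, window)          # voice-band estimate
    high = [s - l for s, l in zip(samples, low)]   # ultrasonic residual
    return low, high
```

A constant (purely low-frequency) input passes entirely into the low band, leaving a zero residual, which is the behavior the separation step relies on.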
步骤S210、控制信号处理单元根据声音信号确定用户的语音指令信息,和/或,根据超声波信号确定用户姿势指令信息。Step S210, controlling the signal processing unit to determine the user's voice instruction information according to the sound signal, and/or to determine the user's gesture instruction information according to the ultrasonic signal.
本实施例中,信号处理单元可将确定的声音信号与本地预集成的预设语音指令库进行匹配,以将预设语音指令库中匹配度最高的预设语音指令作为语音指令信息;信号处理单元还可利用本地集成的TOF算法对超声波信号进行分析计算来确定姿势指令信息,从而以实现语音指令信息和姿势指令信息的本地识别。此外,信号处理单元还可在本地识别无法确定语音指令信息和姿势指令信息时,将语音信号或超声波信号通过电子设备中的无线通信模组发送至服务器,并可经无线通信模块接收服务器所输出的确定结果,以实现语音指令信息和姿势指令信息的在线识别。如此设置,使得本申请技术方案可适用于电子设备离线和在线两种网络工况,极大的提高了用户使用的便利性以及应用的广泛性。In this embodiment, the signal processing unit can match the determined sound signal against a locally pre-integrated preset voice instruction library, taking the preset voice instruction with the highest matching degree as the voice instruction information; the signal processing unit can also analyze the ultrasonic signal with a locally integrated TOF algorithm to determine the gesture instruction information, thereby realizing local recognition of both. In addition, when local recognition cannot determine the voice instruction information or the gesture instruction information, the signal processing unit can send the voice signal or the ultrasonic signal to a server through the wireless communication module of the electronic device, and receive the determination result output by the server through the wireless communication module, thereby realizing online recognition of the voice instruction information and the gesture instruction information. This arrangement makes the technical solution of the present application applicable to both offline and online network conditions of the electronic device, greatly improving convenience of use and breadth of application.
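The local-first, online-fallback flow described above can be condensed into a small sketch. The recognizer callables here are stand-in parameters: the application does not specify their interfaces, and no real network code is shown.

```python
# Hypothetical control flow: try local recognition first, and fall back
# to a server-side recognizer when the local one cannot decide.

def recognize(signal, local_recognizer, remote_recognizer):
    """Return (result, source): the local result if available, else remote."""
    result = local_recognizer(signal)
    if result is not None:
        return result, "local"
    return remote_recognizer(signal), "online"
```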
进一步地,步骤S100中,控制超声发声装置发射超声波信号,具体为:Further, in step S100, control the ultrasonic sound generating device to emit ultrasonic signals, specifically:
控制超声发声装置间隔发射具有第一特征参数的超声波信号。The ultrasonic sound generating device is controlled to emit ultrasonic signals having the first characteristic parameter at intervals.
本领域技术人员可通过改变超声波的频率或者强度的方式对发射的超声波进行编码加密,以使发射出去的超声波信号具有第一特征参数。例如,将第一预设频率的超声波信号记“0”,第二预设频率的超声波信号记“1”,通过间隔相同的预设时间交替发出第一预设频率和第二预设频率的超声波信号,即可实现发出第一特征参数为“010101010”的超声波信号。此外,本领域技术人员还可通过控制每一次间隔发出的超声波信号所具有的第一特征参数,来对依次发射的多个第一特征参数所形成参数组再次进行编码加密,以提高本申请技术方案所发射的超声波的特征性,从而以减小环境超声波信号对于姿势检测的干扰。Those skilled in the art can encode the emitted ultrasonic waves by changing their frequency or intensity, so that the emitted ultrasonic signal has the first characteristic parameter. For example, an ultrasonic signal at a first preset frequency may be denoted "0" and an ultrasonic signal at a second preset frequency denoted "1"; by alternately emitting the two preset frequencies at identical preset time intervals, an ultrasonic signal whose first characteristic parameter is "010101010" can be emitted. In addition, by controlling the first characteristic parameter of the ultrasonic signal emitted in each interval, the parameter group formed by multiple first characteristic parameters emitted in sequence can itself be encoded again, making the ultrasonic waves emitted by the technical solution of the present application more distinctive and thereby reducing the interference of ambient ultrasonic signals with gesture detection.
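The "0"/"1" frequency coding in the example above can be sketched as a schedule of timed bursts. The 21 kHz and 23 kHz values and the 50 ms interval are illustrative assumptions; the application only requires two distinct preset frequencies emitted at equal intervals.

```python
F0_HZ, F1_HZ = 21_000, 23_000   # assumed first / second preset frequencies

def encode_pattern(bits, interval_ms=50):
    """Map a bit string such as '010101010' to (start_ms, freq_hz) bursts."""
    return [(i * interval_ms, F1_HZ if b == "1" else F0_HZ)
            for i, b in enumerate(bits)]

def decode_pattern(schedule):
    """Recover the bit string from a burst schedule."""
    return "".join("1" if freq == F1_HZ else "0" for _, freq in schedule)
```

The coding round-trips: `decode_pattern(encode_pattern("010101010"))` returns `"010101010"`, the first characteristic parameter from the example.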
进一步地,步骤S210中,根据超声波信号确定用户的姿势指令信息,具体为:Further, in step S210, the gesture instruction information of the user is determined according to the ultrasonic signal, specifically:
确定超声波信号中具有第一特征参数的超声波信号,根据具有第一特征参数的超声波信号确定用户的姿势指令信息。Determining the ultrasonic signal with the first characteristic parameter among the ultrasonic signals, and determining the gesture instruction information of the user according to the ultrasonic signal with the first characteristic parameter.
由于电子设备所处环境通常还具有很多其他类型的电子设备,电子设备所处的空气中往往会存在其他电子设备所发出的超声波信号(即环境超声波信号),十分影响姿势检测的精度。针对此问题,本申请技术方案通过在确定超声波信号后,进一步分离出超声波信号中具有第一特征参数的超声波信号,并根据具有第一特征参数的超声波信号来确定姿势指令信息,因而可有效排除环境超声波信号对于姿势检测的干扰,有利于提高姿势检测的精度。Since the environment where the electronic device is located usually has many other types of electronic devices, there are often ultrasonic signals emitted by other electronic devices (ie, ambient ultrasonic signals) in the air where the electronic device is located, which greatly affects the accuracy of posture detection. In response to this problem, the technical solution of the present application further separates the ultrasonic signal with the first characteristic parameter from the ultrasonic signal after determining the ultrasonic signal, and determines the posture instruction information according to the ultrasonic signal with the first characteristic parameter, thus effectively eliminating The interference of the environmental ultrasonic signal to the posture detection is beneficial to improve the accuracy of the posture detection.
此外,由于环境超声波信号较为复杂,统一预设的第一特征参数无法有效降低不同种环境超声波信号的干扰,因而在步骤S100之前,还可包括一步骤:控制拾音装置接收环境声波信号,并根据环境声波信号配置第一特征参数;该步骤可使电子设备在发射超声波之前,确定环境声波信号中的环境超声波信号,并根据环境超声波信号来配置第一特征参数,以使即将发射的超声波信号所具有的第一特征参数可最大程度上避免当前环境超声波信号的干扰。In addition, since ambient ultrasonic signals are relatively complex, a uniformly preset first characteristic parameter cannot effectively reduce the interference of different kinds of ambient ultrasonic signals. Therefore, before step S100, a further step may be included: controlling the sound pickup device to receive the ambient sound wave signal, and configuring the first characteristic parameter according to the ambient sound wave signal. This step enables the electronic device, before emitting ultrasound, to determine the ambient ultrasonic signal within the ambient sound wave signal and configure the first characteristic parameter accordingly, so that the first characteristic parameter of the ultrasonic signal about to be emitted avoids interference from the current ambient ultrasonic signal to the greatest extent.
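One way to picture this configuration step: scan the 20 kHz to 40 kHz band and pick emission frequencies that keep a guard distance from frequencies already present in the environment. The scan step, guard width, and number of frequencies below are assumptions for illustration; the application does not prescribe a selection algorithm.

```python
def choose_frequencies(ambient_hz, band=(20_000, 40_000), step=500,
                       guard=1_000, count=2):
    """Pick `count` emission frequencies in `band` that stay at least
    `guard` Hz away from every ambient ultrasonic frequency."""
    chosen = []
    f = band[0]
    while f <= band[1] and len(chosen) < count:
        if all(abs(f - a) >= guard for a in ambient_hz):
            chosen.append(f)
        f += step
    return chosen
```

For instance, with ambient tones at 20 kHz and 20.5 kHz, the first two admissible frequencies are 21.5 kHz and 22 kHz; with no ambient ultrasound, the scan simply starts at the bottom of the band.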
参照图1,在本申请一实施例中,控制信号处理单元根据语音指令信息和/或姿势指令信息,确定用户指令的步骤S300,具体为:Referring to FIG. 1, in an embodiment of the present application, step S300 of controlling the signal processing unit to determine the user instruction according to the voice instruction information and/or the gesture instruction information is specifically:
当信号处理单元确定出语音指令信息,未确定出姿势指令信息时,将语音指令信息对应的语音指令作为用户指令;When the signal processing unit determines the voice command information but not the gesture command information, the voice command corresponding to the voice command information is used as the user command;
当信号处理单元未确定出语音指令信息,确定出姿势指令信息时,将姿势指令信息对应的姿势指令作为用户指令;When the signal processing unit does not determine the voice command information but determines the gesture command information, the gesture command corresponding to the gesture command information is used as the user command;
当信号处理单元确定出语音指令信息和姿势指令信息时,根据语音指令信息对应的语音指令和姿势指令信息对应的姿势指令二者的指令优先级,将指令优先级高的一者作为用户指令。When the signal processing unit determines the voice command information and the gesture command information, according to the command priorities of the voice command corresponding to the voice command information and the gesture command corresponding to the gesture command information, the one with higher command priority is taken as the user command.
通常情况下,用户不会同时发出语音指令和姿势指令,只会根据自身需要依次发出语音指令或者姿势指令。本申请技术方案通过在信号处理单元只确定出语音指令信息时,即用户只发出语音指令时,将语音指令信息对应的语音指令作为用户指令;在信号处理单元只确定出姿势指令信息时,即用户只发出姿势指令时,将姿势指令信息对应的姿势指令作为用户指令,以满足用户在通常情况下的交互需求。Under normal circumstances, the user will not issue voice commands and gesture commands at the same time, but only issue voice commands or gesture commands in sequence according to their own needs. The technical solution of the present application is to use the voice command corresponding to the voice command information as the user command when the signal processing unit only determines the voice command information, that is, when the user only issues a voice command; when the signal processing unit only determines the gesture command information, that is, When the user only issues a gesture command, the gesture command corresponding to the gesture command information is used as the user command, so as to meet the user's interaction needs in normal situations.
但在某些特殊情况下,例如,用户同时发出语音指令和姿势指令时,本申请技术方案通过使信号处理单元在同时确定出语音指令信息和姿势指令信息时,根据语音指令和姿势指令二者的指令优先级,将指令优先级高的一者作为用户指令。经实验表明,用户在发出指令时,语音指令的误差率远低于姿势指令,且语音指令更能表达用户实际想要执行的功能,因而在本实施例中,语音指令的优先级大于姿势指令,以避免电子设备出现同时执行两个相反功能而导致硬件损坏的情况。However, in certain special cases, for example when the user issues a voice instruction and a gesture instruction at the same time, the technical solution of the present application has the signal processing unit, upon determining both the voice instruction information and the gesture instruction information, take the one with the higher instruction priority as the user instruction according to the instruction priorities of the voice instruction and the gesture instruction. Experiments show that when users issue instructions, the error rate of voice instructions is far lower than that of gesture instructions, and voice instructions better express the function the user actually wants to perform. Therefore, in this embodiment, the priority of voice instructions is higher than that of gesture instructions, so as to prevent the electronic device from performing two opposite functions at the same time and causing hardware damage.
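The arbitration rule of this embodiment can be condensed into a small sketch: a lone voice or gesture instruction passes through, and when both are present the higher-priority one wins, with voice ranked above gesture as stated above. The numeric priority values are arbitrary placeholders.

```python
PRIORITY = {"voice": 2, "gesture": 1}   # voice outranks gesture in this embodiment

def determine_user_instruction(voice_cmd=None, gesture_cmd=None):
    """Return the instruction to execute, or None if neither was determined."""
    if voice_cmd is None and gesture_cmd is None:
        return None
    if gesture_cmd is None:
        return voice_cmd
    if voice_cmd is None:
        return gesture_cmd
    # Both present: the higher instruction priority wins.
    return voice_cmd if PRIORITY["voice"] >= PRIORITY["gesture"] else gesture_cmd
```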
The present application further proposes a control apparatus for an interactive module, the interactive module comprising an ultrasonic sound generating device, a sound pickup device and a signal processing unit.
Referring to FIG. 2, in an embodiment of the present application, the control apparatus of the interactive module comprises:
a memory 101;
a processor 102; and
a control program of the interactive module stored in the memory 101 and executable on the processor, wherein when the processor 102 executes the control program of the interactive module, the above control method of the interactive module is implemented.
In this embodiment, the memory 101 may be a high-speed RAM memory or a non-volatile memory such as a disk memory; optionally, the memory 101 may also be a storage device independent of the aforementioned control apparatus. The processor 102 may be a CPU. The memory 101 and the processor 102 are connected by a communication bus 103, which may be a UART bus or an I2C bus. It can be understood that other related programs may also be provided in the control apparatus to drive the other functional units and modules in the interactive module.
The present application further proposes an interactive module.
Referring to FIG. 3 to FIG. 5, in an embodiment, the interactive module comprises:
an ultrasonic sound generating device 210 for emitting ultrasonic signals;
a sound pickup device 220 for receiving sound wave signals; and
a signal processing unit 230 for determining the user's voice command information and/or gesture command information according to the sound wave signals, and determining a user command according to the voice command information and/or the gesture command information.
In this embodiment, the ultrasonic sound generating device 210 may be an ultrasonic sound generating chip 211. The ultrasonic sound generating device 210 may be provided with a sound-emitting diaphragm and an electric drive structure, and the electric field applied to the two sides of the sound-emitting diaphragm may be controlled by the electric drive structure to drive the diaphragm to vibrate and emit ultrasonic signals. The sound pickup device 220 may be a sound pickup MEMS chip 221; it may be provided with a sound-receiving diaphragm, through which the sound wave signals in the air are sensed, and corresponding electrical signals are output according to the change of the electric field on the two sides of the diaphragm, thereby receiving the sound wave signals. The signal processing unit 230 may be composed of an ASIC chip 232 and a voice processing chip 231, wherein the ASIC chip 232 is configured to determine, from the electrical-signal form of the sound wave signal, the sound signal and the ultrasonic signal contained in it, and output them to the voice processing chip 231, so that the voice processing chip 231 can determine the user command according to the sound signal and the ultrasonic signal.
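Separating the audible sound signal from the ultrasonic signal, as the ASIC chip 232 is described as doing, can be illustrated with a simple frequency-domain split. The sketch below is an assumption for illustration only — the 20 kHz cutoff and the FFT-based method are not taken from the patent, which does not specify the separation algorithm:

```python
import numpy as np

def split_sound_and_ultrasound(samples, sample_rate, cutoff_hz=20_000.0):
    """Split a microphone capture into audible and ultrasonic components.

    samples: 1-D array containing the digitized sound wave signal.
    sample_rate: sampling rate in Hz; it must exceed 2 * cutoff_hz for
    any ultrasonic content to be representable at all.
    Returns (audible, ultrasonic) time-domain signals of equal length.
    """
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    # Zero out each band in turn, then transform back to the time domain.
    audible_spec = np.where(freqs < cutoff_hz, spectrum, 0.0)
    ultra_spec = np.where(freqs >= cutoff_hz, spectrum, 0.0)
    audible = np.fft.irfft(audible_spec, n=len(samples))
    ultrasonic = np.fft.irfft(ultra_spec, n=len(samples))
    return audible, ultrasonic
```

The audible component would then feed voice recognition, while the ultrasonic component — carrying reflections of the emitted probe signal — would feed gesture detection.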
Further, the interactive module further comprises:
a casing 240, in which the ultrasonic sound generating device 210, the sound pickup device 220 and the signal processing unit 230 are accommodated.
In this embodiment, the casing 240 may be formed by an outer shell 241 and a substrate 242 enclosing each other. The casing 240 may be provided with an accommodating cavity, in which the ultrasonic sound generating device 210, the sound pickup device 220 and the signal processing unit 230 may all be accommodated. In the embodiment shown in FIG. 3, the sound pickup MEMS chip 221 and the ultrasonic sound generating chip 211 may each be connected to the substrate 242 through a support, and the substrate 242 may be provided with a sound pickup hole 243 and a sound outlet hole 244 corresponding to the sound pickup MEMS chip 221 and the ultrasonic sound generating chip 211, respectively. In this way, a sound pickup space is formed between the sound pickup hole 243 and the sound pickup MEMS chip 221, and a sound outlet space is formed between the ultrasonic sound generating chip 211 and the sound outlet hole 244, which helps enhance the sound-sensing capability of the sound pickup MEMS chip 221 and the sound-generating capability of the ultrasonic sound generating chip 211, respectively. In the embodiment shown in FIG. 2, the voice processing chip 231 is arranged on the side of the substrate 242 facing the accommodating cavity, and the ASIC chip 232 is arranged on the side of the voice processing chip 231 away from the substrate 242. Such an arrangement can reduce the volume of the interactive module while enhancing its sound output and sound pickup capabilities. In the embodiment of FIG. 4, the voice processing chip 231 may also be embedded in the substrate 242 to save space and reduce interference between devices, which helps improve the anti-interference capability of the device as a whole.
The present application further proposes an electronic device. Referring to FIG. 5, the electronic device comprises an interactive module and a control apparatus 100 for the interactive module. For the specific structures of the interactive module and its control apparatus 100, reference is made to the above embodiments; since all the technical solutions of all the above embodiments are adopted, the electronic device has at least all the beneficial effects brought by those technical solutions, which will not be repeated here.
The electronic device may be a wearable device; the control apparatus 100 of the interactive module may be electrically connected with the interactive module, and the control apparatus 100 and the interactive module may be accommodated in the shell 241 of the electronic device.
The above are only optional embodiments of the present application and do not thereby limit its patent scope. Any equivalent structural transformation made using the contents of the description and drawings of the present application under its concept, or any direct or indirect application in other related technical fields, falls within the patent protection scope of the present application.

Claims (10)

  1. A control method of an interactive module, the interactive module comprising an ultrasonic sound generating device, a sound pickup device and a signal processing unit, wherein the control method of the interactive module comprises:
    controlling the ultrasonic sound generating device to emit ultrasonic signals, and controlling the sound pickup device to receive sound wave signals;
    controlling the signal processing unit to determine the user's voice command information and/or gesture command information according to the sound wave signals; and
    controlling the signal processing unit to determine a user command according to the voice command information and/or the gesture command information.
  2. The control method of the interactive module according to claim 1, wherein the step of controlling the signal processing unit to determine the user's voice command information and/or gesture command information according to the sound wave signals comprises:
    controlling the signal processing unit to determine the sound signal and/or the ultrasonic signal in the sound wave signals;
    controlling the signal processing unit to determine the user's voice command information according to the sound signal, and/or to determine the user's gesture command information according to the ultrasonic signal.
  3. The control method of the interactive module according to claim 2, wherein the step of controlling the ultrasonic sound generating device to emit ultrasonic signals specifically comprises:
    controlling the ultrasonic sound generating device to emit, at intervals, ultrasonic signals having a first characteristic parameter.
  4. The control method of the interactive module according to claim 3, wherein determining the user's gesture command information according to the ultrasonic signal specifically comprises:
    determining, among the ultrasonic signals, the ultrasonic signal having the first characteristic parameter, and determining the user's gesture command information according to the ultrasonic signal having the first characteristic parameter.
  5. The control method of the interactive module according to claim 3, wherein, before the step of controlling the ultrasonic sound generating device to emit ultrasonic signals and controlling the sound pickup device to receive sound wave signals, the control method of the interactive module further comprises:
    controlling the sound pickup device to receive sound wave signals in the current environment;
    controlling the signal processing unit to determine the ultrasonic signals in the current environment according to the sound wave signals in the current environment, and to configure the first characteristic parameter according to the ultrasonic signals in the current environment.
  6. The control method of the interactive module according to any one of claims 1-5, wherein the step of controlling the signal processing unit to determine a user command according to the voice command information and/or the gesture command information specifically comprises:
    when the signal processing unit determines voice command information but not gesture command information, taking the voice command corresponding to the voice command information as the user command;
    when the signal processing unit determines gesture command information but not voice command information, taking the gesture command corresponding to the gesture command information as the user command;
    when the signal processing unit determines both voice command information and gesture command information, taking, as the user command, the command with the higher priority according to the command priorities of the voice command corresponding to the voice command information and the gesture command corresponding to the gesture command information.
  7. A control apparatus for an interactive module, the interactive module comprising an ultrasonic sound generating device, a sound pickup device and a signal processing unit, wherein the control apparatus of the interactive module comprises:
    a memory;
    a processor; and
    a control program of the interactive module stored in the memory and executable on the processor, wherein when the processor executes the control program of the interactive module, the control method of the interactive module according to any one of claims 1-6 is implemented.
  8. An interactive module, wherein the interactive module comprises:
    an ultrasonic sound generating device for emitting ultrasonic signals;
    a sound pickup device for receiving sound wave signals; and
    a signal processing unit for determining the user's voice command information and/or gesture command information according to the sound wave signals, and determining a user command according to the voice command information and/or the gesture command information.
  9. The interactive module according to claim 8, wherein the interactive module further comprises:
    a casing, in which the ultrasonic sound generating device, the sound pickup device and the signal processing unit are accommodated.
  10. An electronic device, wherein the electronic device comprises:
    the interactive module according to any one of claims 8-9; and
    the control apparatus of the interactive module according to claim 7, the control apparatus of the interactive module being electrically connected with the interactive module.
PCT/CN2022/129793 2021-11-15 2022-11-04 Electronic device, interaction module, and control method and control apparatus therefor WO2023083105A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111351728.5 2021-11-15
CN202111351728.5A CN114121002A (en) 2021-11-15 2021-11-15 Electronic equipment, interactive module, control method and control device of interactive module

Publications (1)

Publication Number Publication Date
WO2023083105A1 true WO2023083105A1 (en) 2023-05-19

Family

ID=80396605

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/129793 WO2023083105A1 (en) 2021-11-15 2022-11-04 Electronic device, interaction module, and control method and control apparatus therefor

Country Status (2)

Country Link
CN (1) CN114121002A (en)
WO (1) WO2023083105A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114121002A (en) * 2021-11-15 2022-03-01 歌尔微电子股份有限公司 Electronic equipment, interactive module, control method and control device of interactive module

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106843457A (en) * 2016-12-09 2017-06-13 瑞声声学科技(深圳)有限公司 Gesture recognition system and the gesture identification method using the system
US20170300124A1 (en) * 2017-03-06 2017-10-19 Microsoft Technology Licensing, Llc Ultrasonic based gesture recognition
CN107728482A (en) * 2016-08-11 2018-02-23 阿里巴巴集团控股有限公司 Control system, control process method and device
CN109644306A (en) * 2016-08-29 2019-04-16 宗德工业国际有限公司 The system of audio frequency apparatus and audio frequency apparatus
CN114121002A (en) * 2021-11-15 2022-03-01 歌尔微电子股份有限公司 Electronic equipment, interactive module, control method and control device of interactive module


Also Published As

Publication number Publication date
CN114121002A (en) 2022-03-01

Similar Documents

Publication Publication Date Title
CN105814909B (en) System and method for feeding back detection
WO2020249091A1 (en) Voice interaction method, apparatus, and system
JP2019159306A (en) Far-field voice control device and far-field voice control system
CN105745942A (en) Systems and methods for providing a wideband frequency response
CN112806067B (en) Voice switching method, electronic equipment and system
WO2023083105A1 (en) Electronic device, interaction module, and control method and control apparatus therefor
CN112118527A (en) Multimedia information processing method, device and storage medium
WO2021139535A1 (en) Method, apparatus and system for playing audio, and device and storage medium
CN105760154A (en) Method and device for controlling audio frequency
WO2022007944A1 (en) Device control method, and related apparatus
CN110517711A (en) Playback method, device, storage medium and the electronic equipment of audio
CN110572759B (en) Electronic device
WO2023001195A1 (en) Smart glasses and control method therefor, and system
US11257511B1 (en) Voice equalization based on face position and system therefor
WO2024037183A1 (en) Audio output method, electronic device and computer-readable storage medium
CN114520002A (en) Method for processing voice and electronic equipment
WO2020042491A9 (en) Headphone far-field interaction method, headphone far-field interaction accessory, and wireless headphones
WO2020102994A1 (en) 3d sound effect realization method and apparatus, and storage medium and electronic device
CN111160318B (en) Electronic equipment control method and device
US10966016B2 (en) Electronic device including a plurality of speakers
CN110581911B (en) Electronic device and voice control method
CN208956331U (en) A kind of photoelectricity MEMS condenser microphone and electronic equipment
CN208210221U (en) Realize the performance device around audio
KR20220016552A (en) The method for processing data and the electronic device supporting the same
WO2020042490A1 (en) Earphone far-field interaction method, earphone far-field interaction accessory, and wireless earphone

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22891900

Country of ref document: EP

Kind code of ref document: A1