WO2020087895A1 - Voice interaction processing method and device (语音交互处理方法及装置) - Google Patents

Voice interaction processing method and device

Info

Publication number
WO2020087895A1
WO2020087895A1 (application PCT/CN2019/084692)
Authority
WO
WIPO (PCT)
Prior art keywords
user
microprocessor
collector
voice interaction
image
Prior art date
Application number
PCT/CN2019/084692
Other languages
English (en)
French (fr)
Inventor
文白林
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority: US 16/840,753 (granted as US11620995B2)
Published as WO2020087895A1

Classifications

    • G10L 15/22 — Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G06F 21/32 — User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G06F 3/167 — Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G06F 9/4418 — Suspend and resume; hibernate and awake
    • G06T 7/70 — Determining position or orientation of objects or cameras
    • G06V 40/172 — Human faces: classification, e.g. identification
    • G06V 40/176 — Facial expression recognition: dynamic expression
    • H04M 1/72454 — User interfaces adapting device functionality according to context-related or environment-related conditions
    • G10L 2015/223 — Execution procedure of a spoken command
    • G10L 2015/226 — Speech recognition procedures using non-speech characteristics
    • H04M 1/67 — Preventing unauthorised calls from a telephone set by electronic means
    • H04M 2250/12 — Telephonic subscriber devices including a sensor for measuring a physical value, e.g. temperature or motion
    • H04M 2250/52 — Telephonic subscriber devices including functional features of a camera
    • H04M 2250/74 — Telephonic subscriber devices with voice recognition means
    • Y02D 30/70 — Reducing energy consumption in wireless communication networks

Definitions

  • the embodiments of the present application relate to the field of computer technology, and in particular, to a voice interaction processing method and device.
  • Voice interaction, a new generation of user interaction mode following keyboard, mouse, and touch-screen interaction, is increasingly favored by users for its convenience and speed, and is widely used in electronic devices.
  • With voice interaction software such as a voice assistant on a smart mobile terminal, the user can perform voice interaction with the terminal through the assistant.
  • For example, users of the voice assistant Siri need to wake it up with the specific wake-up word "Hey Siri"; similarly, when using the Huawei Mate 10 voice assistant, they need to wake it up with the specific wake-up word "Hello, little E".
  • Because the user must speak the wake-up word every time before using the voice interaction software, the interaction is unfriendly and cannot achieve a natural interaction effect; conversely, keeping the voice interaction software running for a long time increases the device's power consumption. Either way, the user experience suffers.
  • the present application provides a voice interaction processing method and device, which realizes a friendly and natural voice interaction effect, and at the same time reduces the power consumption of electronic equipment.
  • a voice interaction processing device is provided, which includes a sound collector, an image collector, a microprocessor, and an application processor. The sound collector is used to collect the sound data of the first user and transmit it to the microprocessor; the microprocessor is used to turn on the image collector when it determines, according to the sound data of the first user, that the first user is the target user; the image collector is used to collect user image data and transmit it to the microprocessor; the microprocessor is further used to send a wake-up instruction for waking up the voice interaction software to the application processor when it determines, according to the user image data, that the target user is in a voice interaction state; and the application processor is used to receive the wake-up instruction and wake up the voice interaction software to provide the voice interaction function for the target user.
  • the device may be a terminal device, such as an artificial intelligence robot, mobile phone, smart speaker, self-service cash machine, and so on.
  • the user does not need to wake up the voice interaction software through a wake-up word. Instead, the low-power microprocessor receives and processes the sound data transmitted by the sound collector and the user image data transmitted by the image collector; when it determines that the target user is in the voice interaction state, it sends a wake-up instruction to the application processor, which wakes the voice interaction software to provide the voice interaction function for the target user. This achieves a friendly and natural voice interaction effect, and the voice interaction software does not need to remain in a working state for a long time, reducing the power consumption of the device.
  • the microprocessor is specifically configured to acquire a user image feature based on the user image data, and determine that the target user is in a voice interaction state according to the user image feature.
  • the user image data here can be understood as the original data of one or more pictures, or the original data of the video, and the user image feature is the feature data extracted from the original data.
  • the microprocessor is specifically configured to obtain a user voiceprint feature according to the voice data of the first user, and to determine the first user as the target user according to the user voiceprint feature.
  • the microprocessor uses voiceprint recognition to determine that the first user is indeed the target user.
  • the microprocessor when it is determined that the target user is in a voice interaction state according to the user image feature, is specifically configured to: determine the user image feature and the target user's Target image feature matching, and it is determined that the target user is in a voice interaction state according to the living body detection method.
  • the microprocessor determines the user image feature and the target user's Target image feature matching, and it is determined that the target user is in a voice interaction state according to the living body detection method.
  • the device further includes: a posture sensor for detecting a posture parameter of the device and transmitting the posture parameter to the microprocessor;
  • the image collector includes: a front image collector and a rear image collector;
  • the microprocessor is further used to determine, according to the posture parameter, that the device is in a front-facing posture, and to send a first start instruction to the front image collector to turn on the front image collector; or,
  • the microprocessor is further used to determine, according to the posture parameter, that the device is in a reverse-facing posture, and to send a second start instruction to the rear image collector to turn on the rear image collector.
  • in this way, the appropriate image collector can be accurately turned on, further reducing the power consumption of the device.
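As one concrete (and purely illustrative) reading of the posture-based selection above, the microprocessor could map an accelerometer reading to the collector to start. The axis convention and threshold below are assumptions for this sketch, not values from the application:

```python
def select_image_collector(accel_z: float, threshold: float = 5.0) -> str:
    """Choose which image collector to start from the posture parameter.

    Assumes the accelerometer z-axis points out of the screen, so a large
    positive reading means the device lies screen-up (front-facing posture)
    and a large negative reading means screen-down (reverse posture).
    Both the axis convention and the 5.0 m/s^2 threshold are illustrative.
    """
    if accel_z >= threshold:
        return "front"   # first start instruction -> front image collector
    elif accel_z <= -threshold:
        return "rear"    # second start instruction -> rear image collector
    return "undetermined"  # device is on edge; start neither collector

print(select_image_collector(9.8))   # front
print(select_image_collector(-9.8))  # rear
```

Keeping only the needed collector powered is what lets the posture parameter reduce device power consumption.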
  • the device further includes: a distance sensor, configured to detect the distance between the first user and the device and transmit the distance to the microprocessor; the microprocessor is further used to send a third start instruction to the sound collector to turn on the sound collector when it determines that the distance is less than or equal to a preset distance.
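A minimal sketch of the distance gating described above; the 0.5 m preset distance is an assumed illustrative value, not one specified by the application:

```python
def gate_sound_collector(distance_m: float, preset_distance_m: float = 0.5) -> bool:
    """Return True when the microprocessor should send the third start
    instruction to the sound collector, i.e. when the distance reported
    by the distance sensor is within the preset distance."""
    return distance_m <= preset_distance_m

print(gate_sound_collector(0.3))  # True  -> turn on the sound collector
print(gate_sound_collector(0.9))  # False -> leave the sound collector off
```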
  • a voice interaction processing method is provided, applied to a device including a sound collector, an image collector, a microprocessor, and an application processor. The sound collector collects the sound data of the first user and transmits it to the microprocessor; when the microprocessor determines, based on the sound data of the first user, that the first user is the target user, it turns on the image collector; the image collector collects user image data and transmits it to the microprocessor; when the microprocessor determines, based on the user image data, that the target user is in a voice interaction state, it sends a wake-up instruction for waking up the voice interaction software to the application processor; and the application processor receives the wake-up instruction and wakes up the voice interaction software to provide the voice interaction function for the target user.
  • the microprocessor determining that the target user is in a voice interaction state based on the user image data includes: acquiring user image characteristics based on the user image data, and determining the target user based on the user image characteristics In the state of voice interaction.
  • the user image data here can be understood as the original data of one or more pictures, or the original data of the video, and the user image feature is the feature data extracted from the original data.
  • the determining, by the microprocessor, that the first user is the target user based on the sound data of the first user includes: obtaining a user voiceprint feature according to the sound data of the first user, and determining the first user as the target user according to the user voiceprint feature.
  • the microprocessor uses voiceprint recognition to determine that the first user is indeed the target user.
  • the microprocessor determining that the target user is in a voice interaction state based on the user image data specifically includes: determining, based on the user image data, that the target user is in a voice interaction state using a living body detection method.
  • the device further includes a posture sensor
  • the image collector includes a front image collector and a rear image collector
  • the method further includes: the posture sensor detects the posture parameter of the device and transmits it to the microprocessor; when the microprocessor determines from the posture parameter that the device is in the front-facing posture, it sends a first start instruction to the front image collector to turn on the front image collector; or, when the microprocessor determines from the posture parameter that the device is in the reverse-facing posture, it sends a second start instruction to the rear image collector to turn on the rear image collector.
  • the device further includes a distance sensor
  • the method further includes: the distance sensor detects the distance between the first user and the device and transmits the distance to the microprocessor; when the microprocessor determines that the distance is less than or equal to the preset distance, it sends a third start instruction to the sound collector to turn on the sound collector.
  • FIG. 1 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of a voice interaction processing method provided by an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of another voice interaction processing method provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a three-dimensional space provided by an embodiment of the present application.
  • "At least one" refers to one or more, and "multiple" refers to two or more.
  • "And/or" describes the relationship between related objects and indicates that three relationships may exist; for example, A and/or B can mean: A exists alone, A and B exist at the same time, or B exists alone, where A and B can be singular or plural.
  • the character "/" generally indicates that the related objects are in an "or" relationship.
  • "At least one of the following" or a similar expression refers to any combination of these items, including any combination of a single item or a plurality of items.
  • "At least one (item) of a, b, or c" may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may be single or multiple.
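The seven listed combinations are exactly the non-empty subsets of {a, b, c}, which can be enumerated mechanically; a quick illustration (not part of the application itself):

```python
from itertools import combinations

items = ["a", "b", "c"]
# "At least one of a, b, or c": every non-empty subset of the items.
subsets = [
    "-".join(combo)
    for r in range(1, len(items) + 1)
    for combo in combinations(items, r)
]
print(subsets)  # ['a', 'b', 'c', 'a-b', 'a-c', 'b-c', 'a-b-c']
```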
  • the words "first” and “second” do not limit the number and the execution order.
  • the voice interaction processing method provided in this application can be applied to human-computer interaction scenarios: users can achieve friendly and natural interaction with voice interaction devices on which voice interaction software is installed, without waking the software through specific wake-up words, thereby improving the user experience.
  • the voice interaction device here may refer to a device used for voice interaction with a user, and the device may be a mobile phone, a tablet computer, a video camera, a computer, a wearable device, a vehicle-mounted device, or a portable device.
  • the above-mentioned devices or the above-mentioned devices with built-in chip systems are collectively referred to as electronic devices in this application.
  • FIG. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • the mobile phone, or a chip system built into the mobile phone, includes: a memory 101, a processor 102, a sensor component 103, a multimedia component 104, an audio component 105, an input/output interface 106, a power supply component 107, and so on.
  • the memory 101 can be used to store data, software programs, and modules. It mainly includes a program storage area and a data storage area: the program storage area can store software programs, including instructions formed in code, such as an operating system and the applications required by at least one function (for example, a sound playback function or an image playback function); the data storage area can store data created according to the use of the mobile phone, such as audio data, image data, and a phone book.
  • the memory may be a floppy disk; a hard disk such as an internal hard disk or a removable hard disk; a magnetic disk; an optical disk; a magneto-optical disk such as a CD-ROM or DVD-ROM; a storage device such as RAM, ROM, PROM, EPROM, EEPROM, or flash memory; or any other form of storage medium known in the art.
  • the processor 102 is the control center of the mobile phone. It connects the various parts of the entire device using various interfaces and lines, and performs the mobile phone's functions and processes data by running or executing the software programs and/or software modules stored in the memory 101 and calling the data stored in the memory 101, thereby monitoring the mobile phone as a whole.
  • the processor 102 may integrate an application processor (AP) and a microprocessor, where the AP mainly handles the operating system, the user interface, and application programs.
  • the microprocessor may be used to receive and process the data collected by components such as the sensor component 103 and the multimedia component 104, and to control the opening and closing of these components. It can be understood that the microprocessor may alternatively not be integrated into the processor 102.
  • the processor 102 may further include other hardware circuits or accelerators, such as application specific integrated circuits, field programmable gate arrays or other programmable logic devices, transistor logic devices, hardware components, or any combination thereof. It can implement or execute various exemplary logical blocks, modules, and circuits described in conjunction with the disclosure of the present application.
  • the processor 102 may also be a combination of computing functions, for example, including one or more microprocessor combinations, a combination of a digital signal processor and a microprocessor, and so on.
  • the sensor assembly 103 includes one or more sensors, which are used to provide various aspects of status assessment for the mobile phone.
  • the sensor assembly 103 may include a distance sensor and a posture sensor.
  • the distance sensor is used to detect the distance between an external object and the mobile phone
  • the posture sensor is used to detect the posture of the mobile phone, such as acceleration / deceleration or orientation.
  • the distance sensor in the embodiment of the present application may be an optical sensor
  • the attitude sensor may be an acceleration sensor or a gyro sensor.
  • the sensor assembly 103 may also include a magnetic sensor, a pressure sensor, or a temperature sensor, and the sensor assembly 103 may also detect the on / off state of the mobile phone, the relative positioning of the components, or the temperature change of the mobile phone.
  • the sensor component 103 may send the detected various state parameters to a microprocessor with lower power consumption for processing.
  • the multimedia component 104 provides a screen of an output interface between the mobile phone and the user.
  • the screen may be a touch panel, and when the screen is a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor may not only sense the boundary of the touch or sliding action, but also detect the duration and pressure related to the touch or sliding operation.
  • the multimedia component 104 further includes an image collector, which may comprise a front image collector and/or a rear image collector.
  • the front image collector in the embodiment of the present application may be a front camera, and the rear image collector may be a rear camera. Whether front or rear, the number of cameras is not limited in this embodiment.
  • the image acquisition method may be capturing a single picture, capturing multiple pictures, or recording video.
  • the front camera and / or the rear camera can sense an external multimedia signal, which is used to form an image frame.
  • Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
  • the multimedia component 104 can send the collected image data to a microprocessor with low power consumption for processing, and the microprocessor can control the front image collector and / or the rear image collector Turn on and off.
  • the audio component 105 may provide an audio interface between the user and the mobile phone.
  • the audio component 105 may include a sound collector, and the sound collector in the embodiment of the present application may be a microphone.
  • the audio component 105 may further include an audio circuit and a speaker, or the sound collector further includes an audio circuit and a speaker.
  • the audio circuit can convert received audio data into an electrical signal and transmit it to the speaker, which converts it into a sound signal for output. Conversely, the microphone converts a collected sound signal into an electrical signal, which the audio circuit receives and converts into audio data; the audio data is then output to the input/output interface 106 to be sent to, for example, another mobile phone, or output to the processor 102 for further processing.
  • the audio component 105 may send the collected audio data to a microprocessor with lower power consumption for processing.
  • the input / output interface 106 provides an interface between the processor 102 and a peripheral interface module.
  • the peripheral interface module may include a keyboard, a mouse, or a USB (Universal Serial Bus) device.
  • there may be a single input/output interface 106 or multiple input/output interfaces.
  • the power supply component 107 is used to provide power for various components of the mobile phone.
  • the power supply component 107 may include a power management system, one or more power supplies, and other components related to the generation, management, and distribution of power by the mobile phone.
  • the mobile phone may further include a wireless fidelity (WiFi) module, a Bluetooth module, etc., and the embodiments of the present application will not be repeated here.
  • the structure shown in FIG. 1 does not constitute a limitation on the mobile phone, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
  • FIG. 2 is a schematic flowchart of a voice interaction processing method provided by an embodiment of the present application.
  • the method can be applied to a device including a sound collector, an image collector, a microprocessor, and an application processor.
  • the device may be the electronic device shown in FIG. 1.
  • S201 The sound collector collects the sound data of the first user and transmits it to the microprocessor.
  • the sound collector may refer to a device used to collect sound data in the electronic device.
  • the sound collector may include a microphone, or include a microphone and an audio circuit.
  • the first user may refer to any user that can be collected by the sound collector.
  • the first user may refer to a user holding the electronic device or a user closer to the electronic device.
  • the sound data of the first user may refer to the sound signal of the first user collected by the sound collector, or may refer to audio data obtained by converting the sound signal.
  • the sound collector may be a low-power sound collector and may remain in an on state.
  • the sound collector may collect the sound data of the first user and transmit the collected sound data of the first user to the microprocessor.
  • one or more users' voiceprint features are pre-stored in the electronic device.
  • the microprocessor obtains the voiceprint characteristics of the user according to the voice data of the first user, and determines the first user as the target user according to the voiceprint characteristics of the user.
  • the microprocessor may refer to a processor with low power consumption.
  • the microprocessor may be a sensor hub or a microcontroller.
  • the user voiceprint feature may refer to a voice feature used to uniquely identify a user, for example, the user voiceprint feature may include one or more of sound intensity, formant frequency value and its trend, waveform, etc.
  • the image collector may refer to a device for collecting user images, for example, the image collector may be a camera of the electronic device; optionally, the image collector may include a front image collector (for example, a front camera ) And / or a rear image collector (for example, a rear camera).
  • the target user here may refer to a preset user, for example, the target user may be the owner of the electronic device, or other users who frequently use the electronic device, etc. This embodiment of the present application does not specifically limit this.
  • when the sound data of the first user is a sound signal, the microprocessor may convert the sound signal into audio data upon receipt and extract the user voiceprint feature from the converted audio data; or, when the sound data is already converted audio data, the microprocessor may directly extract the user voiceprint feature from the audio data upon receipt.
  • the microprocessor can obtain and store the voiceprint feature of the target user in advance. When the microprocessor extracts the user voiceprint feature, it can match the stored voiceprint feature of the target user against the user voiceprint feature; if the two match successfully, the microprocessor determines the first user to be the target user.
  • the microprocessor determines that the first user is not the target user.
  • the microprocessor may send an opening instruction to the image collector, and the image collector is started when the image collector receives the opening instruction.
  • different users may have different permission levels. Some users have a higher permission level and can enter voice interaction without subsequent image verification; therefore, for a matched user, the device needs to confirm whether that user's permission level requires subsequent image verification, and turns on the image collector only if it does.
  • the method and process for the microprocessor to extract the voiceprint feature of the user from the audio data can refer to the related art, which is not specifically limited in the embodiments of the present application.
  • the successful matching of the voiceprint feature of the target user with the voiceprint feature of the user may mean that the two are completely consistent or the matching error is within a certain fault tolerance range.
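One possible way to implement "matching error within a certain fault tolerance range" is to compare the voiceprint feature vectors with cosine similarity against a threshold. The feature vectors and the 0.85 threshold below are placeholders for illustration, not values from the application:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity of two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def is_target_user(user_vp, stored_vp, threshold=0.85):
    """Match the extracted user voiceprint feature against the pre-stored
    target-user voiceprint; "within the fault tolerance range" becomes a
    similarity threshold (0.85 here is an illustrative value)."""
    return cosine_similarity(user_vp, stored_vp) >= threshold

stored = [0.9, 0.1, 0.4]      # pre-stored target-user voiceprint feature
sample = [0.88, 0.12, 0.41]   # feature extracted from the incoming sound data
print(is_target_user(sample, stored))  # True -> first user is the target user
```

Real voiceprint features (sound intensity, formant trends, waveform statistics) would be higher-dimensional, but the thresholded-match structure is the same.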
  • S203 The image collector collects user image data and transmits it to the microprocessor.
  • the image collector can capture user images and collect user image data in real time, periodically or non-periodically, and transmit the collected user image data to the microprocessor.
  • the microprocessor obtains user image features based on the user image data, and determines that the target user is in a voice interaction state according to the user image features.
  • the user image feature refers to an image feature used to uniquely identify a user.
  • the user image feature may include one or more of eye features, face features, and lip features.
  • the voice interaction software may refer to software for providing voice interaction functions.
  • the voice interaction software may be software such as a voice assistant.
  • when the microprocessor receives the user image data, it can extract user image features from the data; meanwhile, the microprocessor can obtain and store the target user's image feature in advance. After extracting the user image feature, the microprocessor may determine, according to a face recognition method, whether the user image feature matches the target user's image feature: for example, it matches the stored target user's image feature against the user image feature. If the two match successfully, the microprocessor determines that the user corresponding to the user image feature is the target user; if the matching fails, the microprocessor determines that the user is not the target user.
  • the microprocessor may further determine that the target user is in a voice interaction state according to the living body detection method. For example, the microprocessor may determine the user image feature within a period of time The lip feature determines whether the target user is speaking. When it is determined that the target user is speaking, it can be determined that the target user is in a voice interaction state. After that, the microprocessor can send a wake-up instruction for waking up the voice interaction software to the application processor.
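The lip-based liveness check described above can be sketched as follows, assuming the lip features have been reduced to a per-frame mouth-opening measurement (the function name and the variation threshold are illustrative assumptions):

```python
def is_speaking(lip_openings, min_variation=0.02):
    # lip_openings: normalized mouth-opening measurements over a window
    # of consecutive frames. A talking user shows noticeable frame-to-frame
    # variation; a photograph or a silent face does not.
    if len(lip_openings) < 2:
        return False  # not enough frames to observe motion
    spread = max(lip_openings) - min(lip_openings)
    return spread >= min_variation
```

A production system would use a trained liveness model rather than a single spread statistic, but the gating role is the same: wake the application processor only when the target user appears to be talking.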
  • S205: The application processor receives the wake-up instruction and wakes the voice interaction software to provide the voice interaction function for the target user.
  • The voice interaction software can run on the application processor; when it has not been used for a long time, it can enter a sleep state or a low-power state, that is, a state in which its power consumption is lower than in the normal working state.
  • When the application processor receives the wake-up instruction sent by the microprocessor, it can wake the voice interaction software so that the software provides the voice interaction function for the target user.
  • In this way, the user does not need to wake the voice interaction software with a wake-up word. Instead, the lower-power microprocessor receives and processes the sound data transmitted by the sound collector and the user image data transmitted by the image collector, and only when it determines that the target user is in a voice interaction state does it send the application processor a wake-up instruction for waking the voice interaction software, which then provides the voice interaction function for the target user. This achieves a friendly, natural voice interaction effect, and because the voice interaction software does not need to stay in a working state for long periods, the power consumption of the electronic device is reduced.
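The staged gating described above can be sketched as a minimal decision function; the callables `speaker_is_target` and `target_is_interacting` are hypothetical stand-ins for the microprocessor's voiceprint check and its face-plus-liveness check:

```python
def wake_pipeline(speaker_is_target, target_is_interacting):
    # Staged gating: the image check only runs after the voiceprint
    # check passes, and the wake-up instruction is only sent to the
    # application processor after both checks pass.
    if not speaker_is_target():
        return "idle"        # image collector is never started
    if not target_is_interacting():
        return "camera_on"   # camera ran, but no wake-up instruction sent
    return "wake_ap"         # application processor wakes the voice software
```

Because each stage is cheaper than the next, most non-interactions are rejected before the expensive components are ever powered.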
  • Further, the electronic device also includes a posture sensor, and the image collector includes a front image collector and a rear image collector.
  • Correspondingly, when the microprocessor starts the image collector in S202, the front image collector or the rear image collector can be started by the method shown in FIG. 3, which includes S2021-S2023.
  • Starting the front or rear image collector by the method described in S2021 to S2023 below can further reduce the power consumption of the electronic device.
  • S2021: The posture sensor detects the posture parameters of the electronic device and transmits them to the microprocessor.
  • A posture sensor is a sensor that can detect the posture of the electronic device; for example, it may include an acceleration sensor or a gyroscope sensor.
  • The posture parameter may include the electronic device's parameters in a preset three-dimensional space with an x-axis, a y-axis, and a z-axis. In the space shown in FIG. 4, the x-axis and y-axis are perpendicular to each other and form a horizontal plane, and the z-axis is perpendicular to that plane.
  • For example, when the electronic device lies flat on the horizontal plane with its front face up, the (x, y, z) readings are (0, 0, 9.81); when it lies flat with its back face up, the readings are (0, 0, -9.81).
  • Specifically, the posture sensor may be set to a working state, detect the posture parameters of the electronic device in real time, periodically, or aperiodically, and transmit the detected parameters to the microprocessor.
  • For example, the posture sensor can periodically detect the electronic device's parameters in the three-dimensional space shown in FIG. 4 and transmit the corresponding x-, y-, and z-axis values to the microprocessor.
  • Note that the posture parameter is described here only using the three-dimensional space shown in FIG. 4 as an example; in practice it may also be expressed in other ways, which is not specifically limited in the embodiments of the present application.
  • S2022: When the microprocessor determines from the posture parameter that the electronic device is in the front-facing posture, it sends a first start instruction to the front image collector to start it.
  • When the microprocessor receives the posture parameter, it can determine the placement state of the electronic device. Suppose the front-facing posture corresponds to a z-axis value greater than 0 and less than or equal to 9.81: if the z-axis value in the received posture parameter falls within (0, 9.81], the microprocessor determines that the device is in the front-facing posture and sends the first start instruction to the front image collector, so that the collector starts and collects user image data upon receiving it.
  • S2023: When the microprocessor determines from the posture parameter that the electronic device is in the reverse posture, it sends a second start instruction to the rear image collector to start it.
  • Suppose the reverse posture corresponds to a z-axis value greater than or equal to -9.81 and less than 0: if the z-axis value in the received posture parameter falls within [-9.81, 0), the microprocessor determines that the device is in the reverse posture and sends the second start instruction to the rear image collector, so that the collector starts and collects user image data upon receiving it.
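The z-axis ranges above map directly to a camera-selection rule; a minimal sketch, assuming the posture parameter is the gravity component on the z-axis in m/s² (the function name is illustrative):

```python
def select_camera(z_accel):
    # z_accel: gravity component on the z-axis, in m/s^2.
    if 0 < z_accel <= 9.81:
        return "front"   # front face up: start the front image collector
    if -9.81 <= z_accel < 0:
        return "rear"    # back face up: start the rear image collector
    return None          # on edge or in free fall: start neither
```

Starting only the camera that can actually see the user avoids powering both collectors at once.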
  • the electronic device further includes a distance sensor.
  • Correspondingly, before the sound collector collects the first user's sound data in S201, the method further includes the following steps to start the sound collector.
  • S2011: The distance sensor detects the distance between the first user and the electronic device and transmits the distance to the microprocessor.
  • the distance sensor may be used to detect the distance between the external object and the electronic device.
  • the distance sensor may be a proximity light sensor.
  • Specifically, the distance sensor may be set to a working state, detect in real time, periodically, or aperiodically the distance between an external object (for example, the first user) and the electronic device, and transmit the detected distance to the microprocessor.
  • S2012: When the microprocessor determines that the distance is less than or equal to a preset distance, it sends a third start instruction to the sound collector to start it.
  • the preset distance can be set in advance, and the specific value of the preset distance can be set by a person skilled in the art according to actual needs, which is not specifically limited in this embodiment of the present application.
  • Specifically, when the microprocessor receives the distance, it can determine whether the distance is less than or equal to the preset distance, for example 20 centimeters (cm). When it is, the microprocessor can send the third start instruction to the sound collector, so that the collector starts and collects the first user's sound data upon receiving the instruction.
  • By detecting the distance between the first user and the electronic device and starting the sound collector only when that distance is less than or equal to the preset distance, the method saves power: the distance sensor's power consumption is usually lower than the sound collector's, so compared with keeping the sound collector in a working state continuously, the power consumption of the electronic device is further reduced.
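The proximity gate above amounts to a one-line predicate; a minimal sketch, assuming distances are reported in centimeters (the names and the 20 cm default are illustrative):

```python
def should_start_microphone(distance_cm, preset_cm=20):
    # The microphone is powered up only when a user is within the preset
    # distance, so the cheaper proximity sensor gates the costlier mic.
    return distance_cm <= preset_cm
```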
  • An embodiment of the present application further provides a voice interaction processing device.
  • the structure of the device may be shown in FIG. 1.
  • the device may be an electronic device or a chip system built in the electronic device.
  • The sound collector is used to collect the first user's sound data and transmit it to the microprocessor.
  • The microprocessor is used to obtain the user's voiceprint features from the first user's sound data and, when it determines from those features that the first user is the target user, to start the image collector.
  • The image collector is used to collect user image data and transmit it to the microprocessor.
  • The microprocessor is further used to obtain user image features from the user image data and, when it determines from those features that the target user is in a voice interaction state, to send the application processor a wake-up instruction for waking the voice interaction software.
  • The application processor is used to receive the wake-up instruction and wake the voice interaction software to provide the voice interaction function for the target user.
  • Optionally, when determining from the user image features that the target user is in a voice interaction state, the microprocessor is specifically used to determine, based on the user image data and using a liveness detection method, that the target user is in a voice interaction state.
  • In another embodiment, a posture sensor is used to detect the posture parameters of the device and transmit them to the microprocessor, and the image collector includes a front image collector and a rear image collector.
  • The microprocessor is further used to send a first start instruction to the front image collector to start it when it determines from the posture parameter that the device is in the front-facing posture; or to send a second start instruction to the rear image collector to start it when it determines from the posture parameter that the device is in the reverse posture.
  • In another embodiment, a distance sensor is used to detect the distance between the first user and the device and transmit it to the microprocessor; the microprocessor is further used to send a third start instruction to the sound collector to start it when it determines that the distance is less than or equal to the preset distance.
  • In the embodiments of the present application, the user does not need to wake the voice interaction software with a wake-up word. Instead, the low-power microprocessor receives and processes the sound data transmitted by the sound collector and the user image data transmitted by the image collector, and only when it determines that the target user is in a voice interaction state does it send the application processor a wake-up instruction for waking the voice interaction software, which then provides the voice interaction function for the target user. This achieves a friendly, natural voice interaction effect, and because the voice interaction software does not need to stay in a working state for long periods, the power consumption of the device is reduced.


Abstract

A voice interaction processing method and apparatus, used to achieve a friendly, natural voice interaction effect while reducing power consumption. In the method, the microprocessor starts the image collector only when it determines from the sound data collected by the sound collector that the first user is the target user (S202); the image collector then collects user image data and transmits it to the microprocessor (S203); the microprocessor sends a wake-up instruction to the application processor only when it determines from the user image data that the target user is in a voice interaction state (S204). The method avoids, to a certain extent, mistakenly starting the image collector and the application processor, reducing power consumption.

Description

Voice interaction processing method and apparatus
This application claims priority to Chinese Patent Application No. 201811271551.6, entitled "Voice interaction processing method and apparatus", filed with the China National Intellectual Property Administration on October 29, 2018, which is incorporated herein by reference in its entirety.
Technical Field
The embodiments of this application relate to the field of computer technology, and in particular to a voice interaction processing method and apparatus.
Background
As the new generation of user interaction after keyboard, mouse, and touchscreen interaction, voice interaction is convenient and fast, has gradually become popular with users, and is widely used in all kinds of electronic devices. For example, with a voice assistant installed on a smart mobile terminal, the user can interact with the terminal by voice.
At present, when using voice interaction software such as a voice assistant, the user usually first has to wake the software with a specific wake-up word and only then input voice commands to interact. For example, to use the voice assistant Siri, the user first says the wake-up word "Hey Siri"; to use the voice assistant on the Huawei Mate 10, the user first says "你好，小E" ("Hello, Little E"). Having to say a wake-up word every time makes the voice interaction unfriendly and prevents a natural interaction effect, while keeping the voice interaction software running continuously increases the device's power consumption; either way, the user experience suffers.
Summary
This application provides a voice interaction processing method and apparatus that achieve a friendly, natural voice interaction effect while reducing the power consumption of the electronic device.
The solution provided by this application is introduced below in several aspects; it should be understood that the implementations and beneficial effects of the aspects may refer to one another.
According to a first aspect, a voice interaction processing apparatus is provided, including a sound collector, an image collector, a microprocessor, and an application processor. The sound collector is used to collect a first user's sound data and transmit it to the microprocessor; the microprocessor is used to start the image collector when it determines from the first user's sound data that the first user is the target user; the image collector is used to collect user image data and transmit it to the microprocessor; the microprocessor is further used to send the application processor a wake-up instruction for waking the voice interaction software when it determines from the user image data that the target user is in a voice interaction state; the application processor is used to receive the wake-up instruction and wake the voice interaction software to provide the voice interaction function for the target user.
The apparatus may be a terminal device, for example an artificial-intelligence robot, a mobile phone, a smart speaker, a self-service teller machine, and so on.
In the above technical solution, the user does not need to wake the voice interaction software with a wake-up word. Instead, the lower-power microprocessor receives and processes the sound data transmitted by the sound collector and the user image data transmitted by the image collector, and sends the application processor a wake-up instruction for waking the voice interaction software when it determines that the target user is in a voice interaction state, so that the software provides the voice interaction function for the target user. This achieves a friendly, natural voice interaction effect, and because the voice interaction software does not need to stay in a working state for long periods, the apparatus's power consumption is reduced.
In a possible implementation of the first aspect, the microprocessor is specifically used to obtain user image features from the user image data and determine from those features that the target user is in a voice interaction state. Here the user image data can be understood as the raw data of one or more pictures, or of a video; the user image features are feature data extracted from the raw data.
In a possible implementation of the first aspect, the microprocessor is specifically used to obtain the user's voiceprint features from the first user's sound data and determine from those features that the first user is the target user. In other words, the microprocessor uses voiceprint recognition to confirm that the first user is indeed the target user.
In a possible implementation of the first aspect, when determining from the user image features that the target user is in a voice interaction state, the microprocessor is specifically used to: determine, by a face recognition method, that the user image features match the target user's target image features, and determine, by a liveness detection method, that the target user is in a voice interaction state. This possible implementation provides a simple and effective way to determine that the target user is in a voice interaction state.
In a possible implementation of the first aspect, the apparatus further includes a posture sensor used to detect the apparatus's posture parameters and transmit them to the microprocessor, and the image collector includes a front image collector and a rear image collector. The microprocessor is further used to send a first start instruction to the front image collector to start it when it determines from the posture parameter that the apparatus is in the front-facing posture; or to send a second start instruction to the rear image collector to start it when it determines from the posture parameter that the apparatus is in the reverse posture. This possible implementation starts the image collector precisely and further reduces the apparatus's power consumption.
In a possible implementation of the first aspect, the apparatus further includes a distance sensor used to detect the distance between the first user and the apparatus and transmit it to the microprocessor; the microprocessor is further used to send a third start instruction to the sound collector to start it when it determines that the distance is less than or equal to a preset distance. In this possible implementation, the sound collector is started only when the user comes close enough that the user can be determined to really intend to talk to the apparatus, which further reduces the apparatus's power consumption.
According to a second aspect, a voice interaction processing method is provided, applied to an apparatus including a sound collector, an image collector, a microprocessor, and an application processor. The sound collector collects a first user's sound data and transmits it to the microprocessor; the microprocessor starts the image collector when it determines from the first user's sound data that the first user is the target user; the image collector collects user image data and transmits it to the microprocessor; the microprocessor sends the application processor a wake-up instruction for waking the voice interaction software when it determines from the user image data that the target user is in a voice interaction state; the application processor receives the wake-up instruction and wakes the voice interaction software to provide the voice interaction function for the target user.
In a possible implementation of the second aspect, the microprocessor's determining from the user image data that the target user is in a voice interaction state includes: obtaining user image features from the user image data, and determining from those features that the target user is in a voice interaction state. Here the user image data can be understood as the raw data of one or more pictures, or of a video; the user image features are feature data extracted from the raw data.
In a possible implementation of the second aspect, the microprocessor's determining from the first user's sound data that the first user is the target user includes: obtaining the user's voiceprint features from the first user's sound data, and determining from those features that the first user is the target user. In other words, the microprocessor uses voiceprint recognition to confirm that the first user is indeed the target user.
In a possible implementation of the second aspect, the microprocessor's determining from the user image data that the target user is in a voice interaction state specifically includes: determining, based on the user image data and using a liveness detection method, that the target user is in a voice interaction state.
In a possible implementation of the second aspect, the apparatus further includes a posture sensor, and the image collector includes a front image collector and a rear image collector. The method further includes: the posture sensor detects the apparatus's posture parameters and transmits them to the microprocessor; the microprocessor sends a first start instruction to the front image collector to start it when it determines from the posture parameter that the apparatus is in the front-facing posture; or the microprocessor sends a second start instruction to the rear image collector to start it when it determines from the posture parameter that the apparatus is in the reverse posture.
In a possible implementation of the second aspect, the apparatus further includes a distance sensor, and the method further includes: the distance sensor detects the distance between the first user and the apparatus and transmits it to the microprocessor; the microprocessor sends a third start instruction to the sound collector to start it when it determines that the distance is less than or equal to a preset distance.
It can be understood that, for the beneficial effects achievable by the voice interaction processing method provided above, reference may be made to the beneficial effects of the corresponding apparatus provided above; details are not repeated here.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of an electronic device according to an embodiment of this application;
FIG. 2 is a schematic flowchart of a voice interaction processing method according to an embodiment of this application;
FIG. 3 is a schematic flowchart of another voice interaction processing method according to an embodiment of this application;
FIG. 4 is a schematic diagram of a three-dimensional space according to an embodiment of this application.
Detailed Description
In this application, "at least one" means one or more, and "multiple" means two or more. "And/or" describes an association between associated objects and indicates three possible relationships; for example, "A and/or B" may mean A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. "At least one of the following" or similar expressions refer to any combination of the listed items, including any combination of single or plural items. For example, "at least one of a, b, or c" may mean a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may each be single or multiple. In addition, in the embodiments of this application, words such as "first" and "second" do not limit quantity or execution order.
It should be noted that in this application, words such as "exemplary" or "for example" are used to indicate an example, illustration, or explanation. Any embodiment or design described as "exemplary" or "for example" should not be construed as preferable to or more advantageous than other embodiments or designs; rather, such words are intended to present the relevant concepts in a concrete manner.
The voice interaction processing method provided by this application can be applied to human-machine interaction scenarios: the user can interact with a device on which voice interaction software is installed in a friendly, natural way without waking the software with a specific wake-up word, improving the user experience. A voice interaction device here is a device used for voice interaction with a user, such as a mobile phone, tablet computer, camera, computer, wearable device, in-vehicle device, or portable device. For convenience, this application collectively calls the devices mentioned above, or such devices with a built-in chip system, electronic devices.
FIG. 1 is a schematic structural diagram of an electronic device according to an embodiment of this application. FIG. 1 uses a mobile phone as an example; the phone, or a chip system built into the phone, includes a memory 101, a processor 102, a sensor component 103, a multimedia component 104, an audio component 105, an input/output interface 106, a power component 107, and so on.
The components of the phone, or of the chip system built into the phone, are described below with reference to FIG. 1.
The memory 101 can store data, software programs, and modules, and mainly includes a program storage area and a data storage area. The program storage area can store software programs, including instructions in the form of code, including but not limited to the operating system and the applications required for at least one function, such as sound playback and image playback; the data storage area can store data created according to the use of the phone, such as audio data, image data, and the phone book. In some feasible embodiments there may be one memory or multiple memories; a memory may be a floppy disk; a hard disk such as a built-in or removable hard disk; a magnetic disk; an optical disc; a magneto-optical disc such as CD-ROM or DVD-ROM; a non-volatile storage device such as RAM, ROM, PROM, EPROM, EEPROM, or flash memory; or any other form of storage medium known in the art.
The processor 102 is the phone's control center. It connects the parts of the whole device through various interfaces and lines, and performs the phone's functions and processes data by running or executing the software programs and/or modules stored in the memory 101 and calling the data stored in the memory 101, thereby monitoring the phone as a whole. In the embodiments of this application, the processor 102 may integrate an application processor (AP) and a microprocessor: the AP mainly handles the operating system, user interface, and applications, while the microprocessor can receive and process the data collected by components such as the sensor component 103 and the multimedia component 104 and control the starting and stopping of those components. It can be understood that the microprocessor may also not be integrated into the processor 102.
In addition, the processor 102 may further include other hardware circuits or accelerators, such as application-specific integrated circuits, field-programmable gate arrays or other programmable logic devices, transistor logic devices, hardware components, or any combination thereof, which can implement or execute the various exemplary logical blocks, modules, and circuits described in this disclosure. The processor 102 may also be a combination that implements computing functions, for example one or more microprocessors, or a digital signal processor combined with a microprocessor.
The sensor component 103 includes one or more sensors that provide the phone with state assessments of various aspects. The sensor component 103 may include a distance sensor, which detects the distance between an external object and the phone, and a posture sensor, which detects the phone's placement posture, such as acceleration/deceleration or orientation. For example, in the embodiments of this application the distance sensor may be a light sensor, and the posture sensor may be an acceleration sensor or a gyroscope sensor. The sensor component 103 may also include a magnetic sensor, a pressure sensor, or a temperature sensor, through which the phone's open/closed state, the relative positioning of components, or the phone's temperature changes can also be detected. In the embodiments of this application, the sensor component 103 can send the detected state parameters to the lower-power microprocessor for processing.
The multimedia component 104 provides a screen serving as an output interface between the phone and the user. The screen may be a touch panel, and when it is, it can be implemented as a touchscreen that receives input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the panel; a touch sensor can sense not only the boundary of a touch or swipe but also its duration and pressure. The multimedia component 104 also includes an image collector: a front image collector and/or a rear image collector, for example a front camera and a rear camera in the embodiments of this application. The number of cameras, front or rear, is not limited in this embodiment. Image collection may capture one or more pictures or record video.
When the phone is in an operating mode such as shooting mode or video mode, the front camera and/or rear camera can sense external multimedia signals, which are used to form image frames. Each front and rear camera may be a fixed optical lens system or have focal length and optical zoom capability. In the embodiments of this application, the multimedia component 104 can send the collected image data to the lower-power microprocessor for processing, and the microprocessor can control the starting and stopping of the front and/or rear image collector.
The audio component 105 provides an audio interface between the user and the phone; for example, it may include a sound collector, which in the embodiments of this application may be a microphone. The audio component 105, or the sound collector itself, may also include an audio circuit and a loudspeaker. Specifically, the audio circuit converts received audio data into an electrical signal and transmits it to the loudspeaker, which converts it into a sound signal for output; conversely, the microphone converts collected sound signals into electrical signals, which the audio circuit receives and converts into audio data. The audio data is then output to the input/output interface 106 to be sent, for example, to another phone, or output to the processor 102 for further processing. In the embodiments of this application, the audio component 105 can send the collected audio data to the lower-power microprocessor for processing.
The input/output interface 106 provides an interface between the processor 102 and peripheral interface modules such as a keyboard, a mouse, or USB (universal serial bus) devices. In a possible implementation, there may be one input/output interface or multiple input/output interfaces. The power component 107 supplies power to the phone's components and may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the phone.
Although not shown, the phone may further include a wireless fidelity (WiFi) module, a Bluetooth module, and so on, which are not described further here. Those skilled in the art can understand that the phone structure shown in FIG. 1 does not limit the phone; it may include more or fewer components than shown, combine certain components, or arrange components differently.
FIG. 2 is a schematic flowchart of a voice interaction processing method according to an embodiment of this application. The method can be applied to an apparatus including a sound collector, an image collector, a microprocessor, and an application processor, for example the electronic device shown in FIG. 1. Referring to FIG. 2, the method includes the following steps.
S201: The sound collector collects the first user's sound data and transmits it to the microprocessor.
The sound collector is the device in the electronic device used to collect sound data; for example, it may include a microphone, or a microphone plus an audio circuit. The first user is any user whose sound the collector can pick up, for example the user holding the electronic device or a user near it. The first user's sound data may be the sound signal collected by the sound collector, or the audio data obtained by converting that signal.
Specifically, the sound collector may be a low-power sound collector and may be kept on; when it detects the first user's voice, it collects the first user's sound data and transmits the collected data to the microprocessor.
S202: When the microprocessor determines from the first user's sound data that the first user is the target user, it starts the image collector.
Specifically, the electronic device stores the voiceprint features of one or more users in advance. The microprocessor obtains the user's voiceprint features from the first user's sound data and determines from those features that the first user is the target user.
The microprocessor is a processor with relatively low power consumption, for example a sensor hub or a microcontroller. The user voiceprint features are sound features that uniquely identify a user, for example one or more of sound intensity, formant frequencies and their trends, and waveform.
The image collector is the device used to collect user images, for example the electronic device's camera; optionally, it may include a front image collector (for example, a front camera) and/or a rear image collector (for example, a rear camera). The target user is a preset user, for example the owner of the electronic device or another user who often uses it; this is not specifically limited in the embodiments of this application.
Specifically, when the first user's sound data is a sound signal, the microprocessor can convert the signal into audio data upon receiving it and extract the user voiceprint features from the converted audio data; when the first user's sound data is already converted audio data, the microprocessor can extract the user voiceprint features from it directly. Meanwhile, the microprocessor may obtain and store the target user's voiceprint features in advance; after extracting the user voiceprint features, it matches the stored target user's voiceprint features against them. If the match succeeds, the microprocessor determines that the first user is the target user; if the match fails, it determines that the first user is not the target user. When the microprocessor determines that the first user is the target user, it can send a start instruction to the image collector, which starts upon receiving the instruction.
In some other embodiments, different users may have different permission levels: a user with a sufficiently high permission level can enter voice interaction without subsequent image verification. In that case, after a user is matched, the microprocessor also confirms whether the user's permissions require subsequent image verification, and starts the image collector only if verification is needed.
It should be noted that, for the method and process by which the microprocessor extracts the user voiceprint features from the audio data, reference may be made to the related art; this is not specifically limited in the embodiments of this application. In addition, a successful match between the target user's voiceprint features and the user voiceprint features may mean that the two are identical, or that the matching error is within a certain tolerance range.
S203: The image collector collects user image data and transmits it to the microprocessor.
After the image collector is started, it can capture user images, collect user image data in real time, periodically, or aperiodically, and transmit the collected user image data to the microprocessor.
S204: When the microprocessor determines from the user image data that the target user is in a voice interaction state, it sends the application processor a wake-up instruction for waking the voice interaction software.
Specifically, the microprocessor obtains user image features from the user image data and determines from those features that the target user is in a voice interaction state.
The user image features are image features that uniquely identify a user, for example one or more of eye features, face features, and lip features. The voice interaction software is software that provides a voice interaction function, for example a voice assistant.
Specifically, when the microprocessor receives the user image data, it can extract user image features from it; meanwhile, the microprocessor may obtain and store the target user's image features in advance. After extracting the user image features, the microprocessor can determine by a face recognition method whether the user image features match the target user's image features: it matches the stored target user's image features against the extracted features, and determines that the corresponding user is the target user if the match succeeds, or is not the target user if it fails. When the microprocessor determines that the user corresponding to the image features is the target user, it can further determine by a liveness detection method that the target user is in a voice interaction state; for example, it can use the lip features observed over a period of time to determine whether the target user is speaking, and when the target user is speaking it can determine that the target user is in a voice interaction state. The microprocessor then sends the application processor a wake-up instruction for waking the voice interaction software.
S205: The application processor receives the wake-up instruction and wakes the voice interaction software to provide the voice interaction function for the target user.
The voice interaction software can run on the application processor; when it has not been used for a long time, it can enter a sleep state or a low-power state, that is, a state in which its power consumption is lower than in normal operation. When the application processor receives the wake-up instruction from the microprocessor, it can wake the voice interaction software so that the software provides the voice interaction function for the target user.
In the embodiments of this application, the user does not need to wake the voice interaction software with a wake-up word. Instead, the lower-power microprocessor receives and processes the sound data transmitted by the sound collector and the user image data transmitted by the image collector, and sends the application processor a wake-up instruction for waking the voice interaction software when it determines that the target user is in a voice interaction state, so that the software provides the voice interaction function for the target user. This achieves a friendly, natural voice interaction effect, and because the voice interaction software does not need to stay in a working state for long periods, the electronic device's power consumption is reduced.
Further, the electronic device also includes a posture sensor, and the image collector includes a front image collector and a rear image collector. Correspondingly, in S202, when the microprocessor starts the image collector, the front or rear image collector can be started specifically by the method shown in FIG. 3, which includes S2021-S2023. Starting the front or rear image collector by the method of S2021 to S2023 below can further reduce the electronic device's power consumption.
S2021: The posture sensor detects the electronic device's posture parameters and transmits them to the microprocessor.
A posture sensor is a sensor that can detect the electronic device's posture, for example an acceleration sensor or a gyroscope sensor. The posture parameter may include the electronic device's parameters in a preset three-dimensional space with an x-axis, a y-axis, and a z-axis; in the space shown in FIG. 4, the x-axis and y-axis are perpendicular to each other and form a horizontal plane, and the z-axis is perpendicular to that plane. For example, when the electronic device lies flat on the horizontal plane with its front face up, the (x, y, z) readings are (0, 0, 9.81); when it lies flat with its back face up, the readings are (0, 0, -9.81).
Specifically, the posture sensor may be set to a working state, detect the electronic device's posture parameters in real time, periodically, or aperiodically, and transmit them to the microprocessor. For example, it can periodically detect the device's parameters in the three-dimensional space shown in FIG. 4 and transmit the corresponding x-, y-, and z-axis values to the microprocessor.
It should be noted that the posture parameter is described above only using the three-dimensional space shown in FIG. 4 as an example; in practice it may also be expressed in other ways, which is not specifically limited in the embodiments of this application.
S2022: When the microprocessor determines from the posture parameter that the electronic device is in the front-facing posture, it sends a first start instruction to the front image collector to start it.
When the microprocessor receives the posture parameter, it can determine the device's placement state. Suppose the front-facing posture corresponds to a z-axis value greater than 0 and less than or equal to 9.81: if the z-axis value in the received posture parameter falls within (0, 9.81], the microprocessor determines that the device is in the front-facing posture and sends the first start instruction to the front image collector, so that the collector starts and collects user image data upon receiving it.
S2023: When the microprocessor determines from the posture parameter that the electronic device is in the reverse posture, it sends a second start instruction to the rear image collector to start it.
Suppose the reverse posture corresponds to a z-axis value greater than or equal to -9.81 and less than 0: if the z-axis value in the received posture parameter falls within [-9.81, 0), the microprocessor determines that the device is in the reverse posture and sends the second start instruction to the rear image collector, so that the collector starts and collects user image data upon receiving it.
It should be noted that the front-facing posture, the reverse posture, and their corresponding value ranges above are only examples; in practice, other postures and different value ranges may also be set, which is not described further here.
Further, the electronic device also includes a distance sensor. Correspondingly, before the sound collector collects the first user's sound data in S201, the method further includes the following steps to start the sound collector.
S2011: The distance sensor detects the distance between the first user and the electronic device and transmits the distance to the microprocessor.
The distance sensor can detect the distance between an external object and the electronic device; for example, it may be a proximity light sensor. Specifically, the distance sensor may be set to a working state, detect in real time, periodically, or aperiodically the distance between an external object (for example, the first user) and the electronic device, and transmit the detected distance to the microprocessor.
S2012: When the microprocessor determines that the distance is less than or equal to a preset distance, it sends a third start instruction to the sound collector to start it.
The preset distance can be set in advance, and its specific value can be set by those skilled in the art according to actual needs; this is not specifically limited in the embodiments of this application. Specifically, when the microprocessor receives the distance, it can determine whether the distance is within the preset distance, for example 20 centimeters (cm); when it is, the microprocessor can send the third start instruction to the sound collector, so that the collector starts and collects the first user's sound data upon receiving the instruction.
In the embodiments of this application, the distance between the first user and the electronic device is detected, and the sound collector is started to collect the first user's sound data only when the distance is less than or equal to the preset distance. Because the distance sensor's power consumption is usually lower than the sound collector's, this further reduces the electronic device's power consumption compared with keeping the sound collector working continuously.
An embodiment of this application further provides a voice interaction processing apparatus, whose structure may be as shown in FIG. 1; the apparatus may be an electronic device or a chip system built into an electronic device. In this embodiment, the sound collector is used to collect the first user's sound data and transmit it to the microprocessor; the microprocessor is used to obtain the user's voiceprint features from the first user's sound data and start the image collector when it determines from those features that the first user is the target user; the image collector is used to collect user image data and transmit it to the microprocessor; the microprocessor is further used to obtain user image features from the user image data and send the application processor a wake-up instruction for waking the voice interaction software when it determines from those features that the target user is in a voice interaction state; the application processor is used to receive the wake-up instruction and wake the voice interaction software to provide the voice interaction function for the target user.
Optionally, when determining from the user image features that the target user is in a voice interaction state, the microprocessor is specifically used to determine, based on the user image data and using a liveness detection method, that the target user is in a voice interaction state.
In another embodiment of this application, the posture sensor is used to detect the apparatus's posture parameters and transmit them to the microprocessor; the image collector includes a front image collector and a rear image collector; the microprocessor is further used to send a first start instruction to the front image collector to start it when it determines from the posture parameter that the apparatus is in the front-facing posture, or to send a second start instruction to the rear image collector to start it when it determines from the posture parameter that the apparatus is in the reverse posture.
In another embodiment of this application, the distance sensor is used to detect the distance between the first user and the apparatus and transmit it to the microprocessor; the microprocessor is further used to send a third start instruction to the sound collector to start it when it determines that the distance is less than or equal to the preset distance.
It should be noted that, for the descriptions of the sound collector, image collector, microprocessor, application processor, posture sensor, and distance sensor above, reference may be made to the related descriptions in the method embodiments above; details are not repeated here.
In the embodiments of this application, the user does not need to wake the voice interaction software with a wake-up word. Instead, the lower-power microprocessor receives and processes the sound data transmitted by the sound collector and the user image data transmitted by the image collector, and sends the application processor a wake-up instruction for waking the voice interaction software when it determines that the target user is in a voice interaction state, so that the software provides the voice interaction function for the target user. This achieves a friendly, natural voice interaction effect, and because the voice interaction software does not need to stay in a working state for long periods, the apparatus's power consumption is reduced.
Finally, it should be noted that the above are only specific embodiments of this application, and the protection scope of this application is not limited to them; any change or substitution within the technical scope disclosed by this application shall be covered by the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (8)

  1. A voice interaction processing apparatus, characterized in that the apparatus includes a sound collector, an image collector, a microprocessor, and an application processor; wherein,
    the sound collector is configured to collect a first user's sound data and transmit it to the microprocessor;
    the microprocessor is configured to start the image collector when it determines from the first user's sound data that the first user is a target user;
    the image collector is configured to collect user image data and transmit it to the microprocessor;
    the microprocessor is further configured to send a wake-up instruction to the application processor when it determines from the user image data that the target user is in a voice interaction state;
    the application processor is configured to receive the wake-up instruction and wake voice interaction software to provide a voice interaction function for the target user.
  2. The apparatus according to claim 1, characterized in that the microprocessor is specifically configured to:
    determine, based on the user image data and using a liveness detection method, that the target user is in a voice interaction state.
  3. The apparatus according to claim 1 or 2, characterized in that the apparatus further includes:
    a posture sensor, configured to detect posture parameters of the apparatus and transmit the posture parameters to the microprocessor;
    the image collector includes a front image collector and a rear image collector;
    the microprocessor is further configured to send a first start instruction to the front image collector to start the front image collector when it determines from the posture parameter that the apparatus is in a front-facing posture; or,
    the microprocessor is further configured to send a second start instruction to the rear image collector to start the rear image collector when it determines from the posture parameter that the apparatus is in a reverse posture.
  4. The apparatus according to any one of claims 1-3, characterized in that the apparatus further includes:
    a distance sensor, configured to detect a distance between the first user and the apparatus and transmit the distance to the microprocessor;
    the microprocessor is further configured to send a third start instruction to the sound collector to start the sound collector when it determines that the distance is less than or equal to a preset distance.
  5. A voice interaction processing method, characterized in that it is applied to an apparatus including a sound collector, an image collector, a microprocessor, and an application processor; wherein,
    the sound collector collects a first user's sound data and transmits it to the microprocessor;
    the microprocessor starts the image collector when it determines from the first user's sound data that the first user is a target user;
    the image collector collects user image data and transmits it to the microprocessor;
    the microprocessor sends a wake-up instruction to the application processor when it determines from the user image data that the target user is in a voice interaction state;
    the application processor receives the wake-up instruction and wakes voice interaction software to provide a voice interaction function for the target user.
  6. The method according to claim 5, characterized in that the microprocessor's determining from the user image data that the target user is in a voice interaction state specifically includes:
    determining, based on the user image data and using a liveness detection method, that the target user is in a voice interaction state.
  7. The method according to claim 5 or 6, characterized in that the apparatus further includes a posture sensor, the image collector includes a front image collector and a rear image collector, and the method further includes:
    the posture sensor detects posture parameters of the apparatus and transmits the posture parameters to the microprocessor;
    the microprocessor sends a first start instruction to the front image collector to start the front image collector when it determines from the posture parameter that the apparatus is in a front-facing posture; or,
    the microprocessor sends a second start instruction to the rear image collector to start the rear image collector when it determines from the posture parameter that the apparatus is in a reverse posture.
  8. The method according to any one of claims 5-7, characterized in that the apparatus further includes a distance sensor, and the method further includes:
    the distance sensor detects a distance between the first user and the apparatus and transmits the distance to the microprocessor;
    the microprocessor sends a third start instruction to the sound collector to start the sound collector when it determines that the distance is less than or equal to a preset distance.
PCT/CN2019/084692 2018-10-29 2019-04-26 Voice interaction processing method and apparatus WO2020087895A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/840,753 US11620995B2 (en) 2018-10-29 2020-04-06 Voice interaction processing method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811271551.6A 2018-10-29 Voice interaction processing method and apparatus
CN201811271551.6 2018-10-29

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/840,753 Continuation US11620995B2 (en) 2018-10-29 2020-04-06 Voice interaction processing method and apparatus

Publications (1)

Publication Number Publication Date
WO2020087895A1 true WO2020087895A1 (zh) 2020-05-07

Family

ID=70419701

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/084692 WO2020087895A1 (zh) 2018-10-29 2019-04-26 Voice interaction processing method and apparatus

Country Status (3)

Country Link
US (1) US11620995B2 (zh)
CN (1) CN111105792A (zh)
WO (1) WO2020087895A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114333854A (zh) * 2020-09-29 2022-04-12 华为技术有限公司 Voice wake-up method, electronic device, and chip system
CN112802468B (zh) * 2020-12-24 2023-07-11 合创汽车科技有限公司 Interaction method and apparatus for automotive smart terminal, computer device, and storage medium
CN113380243A (zh) * 2021-05-27 2021-09-10 广州广电运通智能科技有限公司 Method and system for assisting voice interaction, and storage medium
CN114356275B (zh) * 2021-12-06 2023-12-29 上海小度技术有限公司 Interaction control method and apparatus, smart voice device, and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104834222A (zh) * 2015-04-30 2015-08-12 广东美的制冷设备有限公司 Control method and apparatus for household appliance
CN205038456U (zh) * 2015-04-30 2016-02-17 广东美的制冷设备有限公司 Control apparatus for household appliance
CN106572299A (zh) * 2016-10-31 2017-04-19 北京小米移动软件有限公司 Camera starting method and apparatus
CN107678793A (zh) * 2017-09-14 2018-02-09 珠海市魅族科技有限公司 Voice assistant starting method and apparatus, terminal, and computer-readable storage medium
CN108154140A (zh) * 2018-01-22 2018-06-12 北京百度网讯科技有限公司 Lip-reading-based voice wake-up method, apparatus, device, and computer-readable medium
CN108181992A (zh) * 2018-01-22 2018-06-19 北京百度网讯科技有限公司 Gesture-based voice wake-up method, apparatus, device, and computer-readable medium
CN108182939A (zh) * 2017-12-13 2018-06-19 苏州车萝卜汽车电子科技有限公司 Voice processing method and apparatus for self-service
CN108600695A (zh) * 2018-04-19 2018-09-28 广东水利电力职业技术学院(广东省水利电力技工学校) Intelligent interactive robot control system

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9274744B2 (en) * 2010-09-10 2016-03-01 Amazon Technologies, Inc. Relative position-inclusive device interfaces
CN103745723A (zh) * 2014-01-13 2014-04-23 苏州思必驰信息科技有限公司 Audio signal recognition method and apparatus
US20160116960A1 (en) * 2014-10-24 2016-04-28 Ati Technologies Ulc Power management using external sensors and data
CN105786438A (zh) * 2014-12-25 2016-07-20 联想(北京)有限公司 Electronic system
CN104951077A (zh) 2015-06-24 2015-09-30 百度在线网络技术(北京)有限公司 Artificial-intelligence-based human-machine interaction method, apparatus, and terminal device
CN105426723A (zh) * 2015-11-20 2016-03-23 北京得意音通技术有限责任公司 Identity authentication method and system based on voiceprint recognition, face recognition, and synchronized liveness detection
CN106653014A (zh) * 2016-10-31 2017-05-10 广州华凌制冷设备有限公司 Voice control method and voice control apparatus for smart home
KR20180060328A (ko) * 2016-11-28 2018-06-07 삼성전자주식회사 Electronic device for processing multi-modal input, method for processing multi-modal input, and server for processing multi-modal input
CN106782524A (zh) * 2016-11-30 2017-05-31 深圳讯飞互动电子有限公司 Hybrid wake-up method and system
US11314898B2 (en) * 2017-02-28 2022-04-26 Samsung Electronics Co., Ltd. Operating method of electronic device for function execution based on voice command in locked state and electronic device supporting the same
US9973732B1 (en) * 2017-07-27 2018-05-15 Amazon Technologies, Inc. Device selection for video based communications
CN107918726A (zh) * 2017-10-18 2018-04-17 深圳市汉普电子技术开发有限公司 Distance sensing method, device, and storage medium
CN207704862U (zh) * 2018-01-03 2018-08-07 深圳市桑格尔科技股份有限公司 Device woken by voiceprint recognition technology


Also Published As

Publication number Publication date
CN111105792A (zh) 2020-05-05
US11620995B2 (en) 2023-04-04
US20200234707A1 (en) 2020-07-23

Similar Documents

Publication Publication Date Title
WO2020087895A1 (zh) Voice interaction processing method and apparatus
EP3179474B1 (en) User focus activated voice recognition
CN108735209B (zh) Wake-up word binding method, smart device, and storage medium
WO2021013137A1 (zh) Voice wake-up method and electronic device
CN112289313A (zh) Voice control method, electronic device, and system
CN103677267A (zh) Mobile terminal and wake-up method and apparatus therefor
CN105676984B (zh) Portable electronic device and power control method thereof
CN110730115B (zh) Voice control method and apparatus, terminal, and storage medium
CN111819533B (zh) Method for triggering an electronic device to execute a function, and electronic device
CN104950775A (zh) Circuit, method, and apparatus for waking a main MCU microcontroller unit
CN113347560B (zh) Bluetooth connection method, electronic device, and storage medium
WO2021000956A1 (zh) Method and apparatus for upgrading an intelligent model
CN108920922A (zh) Unlocking method and apparatus, mobile terminal, and computer-readable medium
US20230333628A1 (en) User attention-based user experience
US11636867B2 (en) Electronic device supporting improved speech recognition
EP4010833A1 (en) Machine learning based privacy processing
CN111696562B (zh) Voice wake-up method, device, and storage medium
CN111681655A (zh) Voice control method and apparatus, electronic device, and storage medium
WO2023280020A1 (zh) System mode switching method, electronic device, and computer-readable storage medium
CN114039398B (zh) Control method and apparatus for a new-energy camera device, and storage medium
CN111028846B (zh) Method and apparatus for wake-word-free registration
CN115731923A (zh) Command word response method, control device, and apparatus
CN111681654A (zh) Voice control method and apparatus, electronic device, and storage medium
CN110989963B (zh) Wake-up word recommendation method and apparatus, and storage medium
CN116030804A (zh) Voice wake-up method, voice wake-up apparatus, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19880735

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19880735

Country of ref document: EP

Kind code of ref document: A1