CN111210819B - Information processing method and device and electronic equipment - Google Patents


Info

Publication number
CN111210819B
CN111210819B
Authority
CN
China
Prior art keywords
audio data
instruction
information
data
response
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911423707.2A
Other languages
Chinese (zh)
Other versions
CN111210819A (en)
Inventor
杨卫东
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201911423707.2A priority Critical patent/CN111210819B/en
Publication of CN111210819A publication Critical patent/CN111210819A/en
Application granted granted Critical
Publication of CN111210819B publication Critical patent/CN111210819B/en


Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 - Data switching networks
    • H04L 12/28 - Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/2803 - Home automation networks
    • H04L 12/2816 - Controlling appliance services of a home automation network by calling their functionalities
    • H04L 12/282 - Controlling appliance services of a home automation network by calling their functionalities based on user interaction within the home
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223 - Execution procedure of a spoken command

Abstract

The application discloses an information processing method, an information processing device, and an electronic device. In the method, while a first device plays first audio data obtained in a first manner, it obtains second audio data in a second manner; the first device processes the second audio data and, if the second audio data includes target audio data that matches first reference data, executes a first instruction. In this way, even while the first device is playing audio data, it can still respond to and process other audio data, which reduces the influence of the device's own audio playback on the processing of other audio data and improves the user experience.

Description

Information processing method and device and electronic equipment
Technical Field
The present application relates to the field of information processing technologies, and in particular, to an information processing method, an information processing device, and an electronic device.
Background
With the development of electronic devices, more and more of them provide a voice interaction function. However, during voice interaction between a user and an electronic device, audio played by the device itself, or ambient audio, can interfere, so that the user and the device cannot interact by voice at all, or the quality of the voice interaction experience is reduced.
Disclosure of Invention
In view of this, the present application provides the following technical solutions:
an information processing method, comprising:
obtaining second audio data in a second mode in the process of playing the first audio data obtained in the first mode by the first device, wherein the first mode and the second mode are different;
the first device processing the second audio data;
the first device executes a first instruction if the second audio data includes target audio data that matches first reference data.
Optionally, the first device executing the first instruction includes:
the first device sends a second instruction to a second device connected to the first device, so that the second device outputs a response result, wherein the response result matches the second audio data.
Optionally, the obtaining second audio data in a second manner includes: collecting second audio data from the environment where the first device is located.
Optionally, the method further comprises:
the first device determines the second device from candidate electronic devices connected with the first device;
wherein the determining the second device from candidate electronic devices connected to the first device includes: selecting, from candidate devices connected to the first device, a device that is not executing a response task, and determining that device as the second device, wherein the response task matches the target audio data.
Optionally, the method further comprises:
the first device determines the second device from candidate electronic devices connected with the first device;
wherein the second audio data includes request data corresponding to the target audio data, and the determining the second device from candidate electronic devices connected to the first device includes:
determining a response mode based on the request data; and determining, from candidate devices connected to the first device, a device that matches the response mode, and determining that device as the second device.
Optionally, the obtaining second audio data in a second manner includes: receiving second audio data from an environment where a connected second device is located, wherein the second audio data is acquired by the connected second device;
the first device executes a first instruction comprising:
the first device outputs a response result, wherein the response result is information corresponding to the second audio data.
Optionally, the receiving second audio data collected by the connected second device from the environment where the second device is located includes:
receiving second audio data broadcast by the second device, wherein the second audio data is transmitted by the second device after the second device determines that the second audio data does not include audio data matching second reference data.
Optionally, the first instruction includes: a third instruction for adjusting parameters of the first audio data played by the first device;
the first device executes a first instruction comprising:
the first device executes the third instruction and controls the first audio data to be played according to the playing parameters corresponding to the third instruction.
An information processing apparatus comprising:
a data acquisition unit configured to obtain second audio data in a second manner while first audio data obtained in a first manner is being played, wherein the first manner and the second manner are different;
a processing unit configured to process the second audio data;
an execution unit configured to execute a first instruction if the second audio data includes target audio data matching first reference data.
An electronic device, comprising:
the output device is used for playing first audio data, wherein the first audio data are obtained in a first mode;
obtaining means for obtaining second audio data in a second manner, wherein the first manner and the second manner are different;
processing means for processing the second audio data; and if the second audio data comprises target audio data matched with the first reference data, executing the first instruction.
As can be seen from the above technical solution, the present application discloses an information processing method, an apparatus, and an electronic device. In the method, a first device obtains second audio data in a second manner while playing first audio data obtained in a first manner; the first device processes the second audio data and, if the second audio data includes target audio data matching first reference data, executes a first instruction. In this way, while the first device is playing audio data, it receives second audio data in a second manner different from the first manner and then processes it, so that other audio data can be responded to and processed even while the first device is playing audio. This reduces the influence of the device's own audio playback on the processing of other audio data and improves the user experience.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present application, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
FIG. 1 illustrates an application environment diagram of an information processing method in one embodiment of the application;
fig. 2 is a schematic flow chart of an information processing method according to an embodiment of the present application;
fig. 3 is a schematic flow chart of an information processing method based on environmental information according to an embodiment of the present application;
fig. 4 shows a signaling interaction diagram of an information processing method according to an embodiment of the present application;
FIG. 5 shows a schematic diagram of a voice interaction scenario provided by an embodiment of the present application;
fig. 6 shows a signaling interaction diagram of another information processing method according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 8 is a schematic diagram showing a structure of an information processing apparatus according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Fig. 1 shows an application environment diagram of an information processing method in one embodiment of the present application. Referring to fig. 1, the information processing method is applied to a first device 100. The first device 100 has an information transmission function and an information processing function. The information transmission function means that the first device has both input and output functions.
Specifically, the first device may be one of an intelligent sound box, an intelligent television device, an intelligent terminal, a tablet computer, and a notebook computer. The first audio data is data played by the first device. The second audio data may include audio data generated by a user of the first device, audio data of the environment where the first device is located, or audio data sent by other devices.
In one possible implementation, the input function of the first device 100 is embodied in the ability to obtain audio data in a first manner. Specifically, the first manner may be wired data acquisition, that is, audio data obtained over a wired connection between the first device 100 and the device that generates the audio data. For example, the first device may be a television connected by a wired video link to a video playing conversion device, such as a set-top box, which sends audio or video data it generates directly or indirectly to the television. The first manner may also be information reading, that is, the first device 100 reads audio and video data stored locally; information reading may also be implemented over a wireless connection, so that the first device obtains audio and video data stored in cloud storage through wireless data transmission.
In another possible implementation, the input function of the first device 100 is implemented in that the first device is capable of receiving information sent by other devices, which may be audio information, video information, or instruction information.
The output function of the first device 100 may refer to the output of the first audio data. For example, if the first audio data is generated purely from audio information, the output function of the first device may be a playback function for the first audio data; if the first audio data is generated from video data, the output function may be synchronized playback of the picture and the audio of the video data. Correspondingly, the output function of the first device 100 may also cover the output of information generated after processing the second audio data, such as response information or instruction information.
After the first device 100 obtains the first audio data in the first manner, it plays the first audio data. While playing, the first device 100 may also obtain second audio data in a second manner. The second audio data may be collected by the first device 100 itself or received from another device. The manner of obtaining the second audio data is described in detail in the following embodiments of the present application.
After the first device 100 obtains the second audio data, it processes the second audio data. The second audio data may include data directed at the first device, and may also include data relating to the environment where the first device is located. The first device processes the second audio data to determine whether it includes target audio data matching the first reference data; if so, the first device executes the first instruction. The first reference data may be keywords for the first device, or keywords for other devices, in which case the first device can be understood as the master device in the current network. For example, if the first device has a voice interaction function, the first reference data may be the keywords that can wake up the first device. The target audio data may be audio data that exactly matches the first reference data, or audio data that can be recognized as expressing the first reference data. For example, if the first reference data is "please start the voice interaction function", the corresponding target audio data may take forms such as "start voice interaction" or "voice interaction start".
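As a rough illustration of this non-exact matching, the following sketch accepts any transcript that contains the essential keywords of the reference data. This is a hypothetical sketch, not the patent's method: the function name, the keyword reduction, and the substring heuristic are all assumptions made for illustration.

```python
# Hypothetical sketch: match recognized speech against first reference data,
# where the target audio data may merely paraphrase the reference.
def matches_reference(transcript: str, reference_keywords: list[str]) -> bool:
    """Return True if every reference keyword appears in the transcript."""
    text = transcript.lower()
    return all(kw.lower() in text for kw in reference_keywords)

# The reference "please start the voice interaction function" might be
# reduced to its essential keywords (an assumed preprocessing step):
REFERENCE_KEYWORDS = ["voice interaction"]

assert matches_reference("start voice interaction", REFERENCE_KEYWORDS)
assert matches_reference("Voice Interaction start", REFERENCE_KEYWORDS)
assert not matches_reference("turn up the volume", REFERENCE_KEYWORDS)
```

A real system would use a speech recognizer and semantic matching rather than substring tests, but the decision structure is the same.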
Correspondingly, the first instruction is an instruction generated corresponding to the target audio data, which may be an instruction directly responding to the function corresponding to the target audio data, or may be an instruction for optimizing or controlling the function corresponding to the target audio data. In particular, the present application will be specifically described in the following examples.
Referring to fig. 2, a flow chart of an information processing method according to an embodiment of the present application is shown, where the method may include:
S201, in the process of playing the first audio data obtained in the first mode, the first device obtains second audio data in the second mode.
The first device obtains the second audio data while playing the first audio data, but the first manner in which the first device obtains the first audio data is different from the second manner in which it obtains the second audio data. For example, when a multimedia application is installed and started on the first device, the first device may acquire the first audio data in the first manner. Specifically, the first device may acquire the first audio data according to the audio data acquisition path indicated in the instruction information received by the multimedia application. If the instruction information instructs the multimedia program to play live broadcast data, the live data of the live set-top box is obtained through a wired video interface, and that live data is the first audio data. If the instruction information instructs the multimedia program to play stored audio, the first device obtains the audio through the local storage path, and the obtained audio is the first audio data. After the first device obtains the first audio data, it plays the first audio data, and the specific playing format matches the data format of the obtained first audio data.
While the first device plays the first audio data, it is not only in a playing state but can also obtain the second audio data, in a manner different from that used for the first audio data currently being played. For example, the first device may collect the second audio data from the current environment, or may receive second audio data sent by other devices. The second audio data may be audio information associated with the first device; for example, it may include a wake word for a voice interaction function of the first device.
S202, the first device processes the second audio data.
S203, if the second audio data includes target audio data that matches the first reference data, the first device executes the first instruction.
The first device processes the second audio data to determine whether it includes the target audio data. The target audio data is data that matches the first reference data, which may be a wake-up keyword for an application or function of the first device, or a keyword for another device communicatively connected to the first device. When the first device, in processing the second audio data, recognizes that it includes the target audio data, the first device executes the first instruction. The first instruction is an instruction matching the target audio data: if the target audio data is a wake-up word for a voice function of the first device, the first instruction starts the application corresponding to that voice function. If the target audio data includes not only a wake-up word for a voice function of the first device but also request information corresponding to the wake-up word, the first instruction controls the first device, after the voice function is started, to output response information corresponding to the request information through the voice function.
For example, the wake word of the voice interactive application of the first device is "launch voice assistant". When the second audio data obtained by the first device is "start voice assistant, today's journey", the first device starts its voice interactive application and outputs the information "meeting at meeting center at 2:30 pm" matching the today's journey.
According to the information processing method provided by this embodiment of the application, the first device processes the obtained second audio data while playing the first audio data, and executes the first instruction if the second audio data includes target audio data matching the first reference data. Thus, even while the first device plays audio data, other audio data can be responded to and processed, reducing the influence of the device's own audio playback on the processing of other audio data and improving the user experience.
The first device executes the first instruction if the second audio data includes target audio data that matches the first reference data. It should be noted that, in the embodiment of the present application, the first instruction may represent an operation instruction to the first device, and may also represent an instruction for transmission generated by the first device. In one possible implementation, the first device executing the first instruction includes:
The first device sends a second instruction to a second device connected with the first device, so that the second device outputs a response result.
The second device is a device connected with the first device, has an information transmission function, and specifically, can also have an information processing function, for example, the second device can respond to related audio data in the second audio data instead of the first device; the second device may also output information obtained from the first device. I.e. the response result finally output by the second device matches the second audio data.
The following describes, in a specific embodiment, a process in which the first device sends the second instruction to the second device, so that the second device outputs the response result.
In order to enable a user in the environment of the first device to obtain a better experience effect, user information in the environment of the first device needs to be analyzed. Referring to fig. 3, a method for processing information based on an environment where a first device is located according to an embodiment of the present application is shown, where the method includes:
S301, in the process of playing the first audio data obtained in the first manner, the first device obtains second audio data in a second manner;
S302, the first device processes the second audio data;
S303, if the second audio data includes target audio data matching first reference data, the first device acquires environmental information about its surroundings;
S304, the first device generates a second instruction based on the environmental information;
S305, the first device sends the second instruction to a second device connected to the first device, so that the second device outputs a response result.
In this embodiment, the first instruction executed by the first device includes the second instruction. If the second audio data includes target audio data matching the first reference data, the first device needs to acquire environmental information about its surroundings; the environmental information may be image information of the scene where the first device is located, infrared sensing information of that scene, and so on. The first device may acquire an image of its environment with an image acquisition component, such as a camera, and analyze that image to obtain the environmental information. Alternatively, if the first device has no image acquisition component, it can send an environmental-information acquisition instruction to a connected device capable of capturing image information, such as a monitoring device; the monitoring device collects the environmental information of the first device and transmits it back to the first device, which then performs subsequent processing based on that information.
In one possible implementation, the generating, by the first device, a second instruction based on the environmental information includes:
the first device analyzes the environmental information to determine whether the associated information of the information receivers of the first device meets a first preset condition, and if so, generates the second instruction.
An information receiver of the first device is a living being in the environment of the first device that can receive the first audio data, or subsequent response information, output by the first device. The first preset condition may concern the number of information receivers, or the distance between the information receivers and the first device. If the first preset condition concerns the number of information receivers, the associated information obtained by the first device is the number of its information receivers; that is, when the number of information receivers of the first device is greater than a preset number threshold, the first device generates the second instruction. If the first preset condition concerns the distance between an information receiver and the first device, the associated information obtained is that distance, and analysis of the distance information yields the receivers whose distance is less than or equal to a set distance threshold. By analyzing the environmental information in this way, the first device obtains the real situation of the receivers actually attending to its output, and can therefore judge whether interrupting the current playback of the first audio data would degrade the information receivers' experience; it then determines whether to generate the second instruction based on that judgment.
If there are many information receivers, the second instruction is generated and sent to the second device so that the second device outputs the response result, reducing the impact on the receivers' experience.
For example, suppose the first device is a smart television; its information receivers are the television viewers, and analysis of the television's environmental information shows that there are many of them. If the second audio data is recognized as including target audio data matching the first reference data, and the first reference data is a wake-up word for the smart television's voice interaction function, then the target audio data indicates that the voice interaction function should be woken up and the television may subsequently need to interact with a user by voice, which would require interrupting playback of the first audio data in order to output voice information. Since there are many viewers, interrupting the first audio data would affect the experience of the other, non-interacting viewers. The smart television therefore generates a second instruction and sends it to a smart speaker, and the smart speaker outputs a response result matching the second audio data; for example, the smart speaker completes the voice interaction with the user after the television's voice interaction function is woken up. In this way, the smart speaker handles the voice interaction after the wake-up without interrupting the television's playback of the first audio data, so the interacting user obtains interaction information from the voice assistant while the viewers continue to receive the television's first audio data.
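The delegation decision in this example, i.e. generate the second instruction when many receivers are nearby, might be sketched as follows. The thresholds and function names are assumptions for illustration, not values from the patent.

```python
# Hypothetical sketch of the first preset condition: count the information
# receivers near the first device and only delegate to the second device
# (generate the second instruction) when interrupting playback would
# disturb more than one nearby receiver.
COUNT_THRESHOLD = 1          # more than one nearby viewer -> do not interrupt
DISTANCE_THRESHOLD_M = 5.0   # receivers farther away are not counted

def should_delegate(receiver_distances_m: list[float]) -> bool:
    """True if the second instruction should be sent to the second device."""
    nearby = [d for d in receiver_distances_m if d <= DISTANCE_THRESHOLD_M]
    return len(nearby) > COUNT_THRESHOLD

assert should_delegate([1.2, 3.0, 4.5])      # several nearby viewers
assert not should_delegate([2.0])            # a single viewer: respond locally
assert not should_delegate([6.0, 8.0])       # no one close enough to disturb
```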
In another possible implementation manner, before the first device sends the second instruction to the second device connected thereto, the information processing method further includes:
the first device determining a response mode of the second device;
the first device sends a second instruction matched with the response mode to a second device connected with the first device, so that the second device outputs a response result matched with the response mode.
The first device needs to determine the response mode of the second device, where the response mode characterizes how the second device is to respond: either the second device only outputs response information, or the second device both generates and outputs the response information. If the response mode of the second device is output-only, the processor of the first device, or of a processing device connected to it, generates the response information and sends it to the second device for output. If the response mode of the second device is to generate and output the response information, the first device sends the corresponding request information to the second device, and the second device processes the request information to obtain the response information and outputs it.
Specifically, after the function or application of the first device corresponding to the first reference data is woken up, the first device receives the user's request information and the corresponding information processing is carried out. The information may be processed by the first device, with the response information corresponding to the processing result then output by the second device. Alternatively, when the second device has a processing capability matching the first device's, the first device directly forwards the user's request information to the second device, whose processor completes the processing and outputs the corresponding response information.
For example, after the voice interaction function of the first device is woken up, a user who wants to interact with the first device by voice outputs corresponding interaction request information and expects feedback through the voice interaction function. After the first device receives the user's voice interaction request, its voice interaction processing module generates the response information corresponding to the request and sends it to the second device, which outputs it. Alternatively, the first device sends the received voice request information to the second device, and the second device generates and outputs the response information corresponding to the voice interaction request, so that the user obtains the response information.
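The two delegation variants just described can be sketched as follows; the classes, method names, and stand-in "processing" are hypothetical illustrations, not the patent's implementation.

```python
# Sketch of the two variants: either the first device generates the response
# and the second device merely outputs it, or the first device forwards the
# raw request and the second device both processes and outputs it.
class SecondDevice:
    def output(self, response: str) -> str:
        return f"speaker says: {response}"

    def process_and_output(self, request: str) -> str:
        response = f"answer to '{request}'"   # stand-in for real processing
        return self.output(response)

def delegate(second: SecondDevice, request: str, second_can_process: bool) -> str:
    if second_can_process:
        # second device has a matching processing function: forward the request
        return second.process_and_output(request)
    # otherwise the first device processes and sends only the response text
    response = f"answer to '{request}'"       # produced by the first device
    return second.output(response)

dev = SecondDevice()
assert delegate(dev, "today's schedule", True) == "speaker says: answer to 'today's schedule'"
```

Either path yields the same output for the user; the difference is only where the processing happens.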
Correspondingly, the first device determines a response mode of the second device, including:
the first device determines a response mode of the second device based on the second audio data.
After the first device recognizes that the second audio data includes the target audio data matched with the first reference data, it parses the second audio data to obtain its specific content, and determines the response mode of the second device according to that content.
If the second audio data only comprises the target audio data, determining that the second device is in a first response mode;
and if the second audio data comprises the target audio data and request information matched with the target audio data, determining that the second device is in a second response mode.
The first response mode characterizes a mode in which the second device responds independently, that is, the second device processes the information and outputs the response information; the second response mode characterizes a mode in which the second device outputs a response result obtained by the first device.
Taking the first reference data as the voice assistant wake-up word of the first device as an example: if the second audio data only includes the wake-up word, the second device can respond to the wake-up word by itself. For example, if the first device is a television, the second device is a speaker, and the wake-up word is "hello, television", the speaker outputs "hello"; that is, the second device responds to the wake-up word alone. If the second audio data includes not only the wake-up word but also request information, the first device may process the request information to obtain the corresponding response information and then send it to the second device, and the second device outputs the response information.
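The two response modes above can be sketched as a simple classification over the recognized content of the second audio data. This is an illustrative sketch only; the function name, the string matching, and the mode labels are assumptions, not the patent's actual implementation.

```python
def determine_response_mode(recognized_text, wake_word):
    """Classify the second device's response mode from the recognized
    content of the second audio data (illustrative sketch)."""
    content = recognized_text.strip()
    if content == wake_word:
        # Only the target audio data (wake word): first response mode,
        # i.e. the second device responds independently (e.g. "hello").
        return "first"
    if wake_word in content:
        # Wake word plus request information: second response mode,
        # i.e. the second device outputs a result produced by the first device.
        return "second"
    # No target audio data matching the first reference data.
    return None
```

In the television/speaker example, "hello, television" alone yields the first mode, while "hello, television" followed by a request yields the second mode.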
It should be noted that, if the second device receives the second instruction sent by the first device, that is, after the function or application corresponding to the first reference data of the first device is started, the second device outputs response information, which may be response information for the target audio data or response information for the second audio data according to the above embodiment. If the function corresponding to the first reference data is used as the voice interaction function, the second device can respond independently in the process of performing voice interaction with the user after the function is started. The second equipment receives the voice information of the user, processes the voice information to obtain response information, and outputs the response information. Correspondingly, the second device may also be an input and output interface only as information. The second device receives the voice information of the user, sends the voice information of the user to the first device, and the first device processes the voice information to obtain response information, and the first device can output the response information, or the first device can output the response information to the second device, and the second device outputs the response information until the interaction process with the user is completed.
In one possible implementation, if the second audio data includes the target audio data and the request information matched with the target audio data, determining that the second device is in the second response mode includes:
analyzing the request information, and determining that the second device is in the second response mode if the analysis result meets a second preset condition.
The second preset condition characterizes a condition under which the second response mode can be applied; it may be a judgment on the response result, or a condition determined according to the probability that the request information leads to associated follow-up requests. For example, if the request information can yield a unique output result, the first device can process the request information and send the response information to the second device for output. However, if the request information characterizes an associated request pattern, the request information may be processed and output by the second device, in order to better balance the processing resources of the first device.
By way of example, after the voice interaction function of the first device is activated, suppose the request information is "What time is it now?" This request is a closed request, that is, only one response needs to be output: the first device sends the current time to the second device, and the second device outputs the response result, for example "17:30".
If the request information is identified as an associated request, such as "how to download the update information", a unique response result cannot be obtained, because it cannot be confirmed which application's update information should be downloaded, nor which version. The interaction is therefore a relatively complex one, and the second device may process and output the request information. The second device will output "Which application do you want to update?", then "What is the model of your current mobile device?", and so on, until the interaction with the user is completed.
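The closed-versus-associated routing above can be sketched as follows. The keyword heuristic is only a stand-in for the second preset condition; the patent's actual criterion is a judgment on the response result or on the probability of associated follow-up requests, and all names here are illustrative.

```python
OPEN_ENDED_MARKERS = ("how to", "which", "recommend")

def is_associated_request(request_text):
    """Heuristic stand-in for the second preset condition: an associated
    request is one that cannot yield a unique response."""
    return any(marker in request_text.lower() for marker in OPEN_ENDED_MARKERS)

def route_request(request_text):
    # Closed request: first device computes the unique answer and sends it
    # to the second device for output. Associated request: the second
    # device handles the multi-turn interaction itself.
    if is_associated_request(request_text):
        return "second_device_processes_and_outputs"
    return "first_device_processes_second_device_outputs"
```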
The first device sends a second instruction to the second device connected with it, so that the second device outputs a response result. Typically, in an application scenario the first device is connected to a plurality of devices, and the second device that best matches needs to be determined from among them. Correspondingly, in order to implement the process of the first device sending an instruction to the second device, the embodiment of the present application further includes:
the first device determines a second device from among candidate electronic devices connected to the first device.
The first device determines a second device from candidate electronic devices connected with the first device, including:
The first device determines a second device among candidate electronic devices connected to the first device based on a processing pattern matching the first reference data.
The processing mode matched with the first reference data can be used for representing the processing procedure mode of the audio data, the output response mode of the audio data and the execution state mode of the audio data.
In a possible implementation manner, the processing mode characterizes the processing mode of the application or function corresponding to the first reference data, where matching devices share the same attribute. For example, if the first reference data corresponds to the voice assistant wake-up word of the first device, then when determining the second device, a device with the same voice assistant needs to be selected, for example a device of the same brand. Specifically, among the devices connected to the first device, one having the same voice assistant as the first device is selected, so that it can process the obtained voice information based on the same processing mode, and the user's interaction with the first device can be transferred seamlessly to an interaction with the second device. It may also be a voice assistant device with the same version information; the embodiments of the present application do not enumerate these one by one.
In another possible implementation manner, the processing mode characterizes an execution attribute of the response task, and the first device selects a device that does not execute the response task from candidate devices connected with the first device, and determines the device as the second device.
Taking the smart home scenario as an example, the first device may be a master control device in a network corresponding to the smart home, which may be connected to a plurality of devices, for example, devices such as a smart television, a smart speaker, and a projector.
The response task matches the target audio data. For example, if the target audio data is a wake-up word that launches the voice assistant of the first device, the response task corresponding to the target audio data is an audio output task. Thus, a device that is not performing an audio output task is selected from among the candidate devices connected to the first device and determined as the second device. It should be noted that the second device may be producing other forms of output when it is selected, as long as it is not performing a response task matching the target audio data. For example, suppose the devices connected to the first device include an electronic photo frame with an audio player, which is displaying its stored electronic photographs in real time; the device to be selected needs to play audio, and the photo frame is not currently playing audio, so the electronic photo frame can be selected as the second device: it is currently only outputting images, which does not affect its ability to perform the audio output task.
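The selection rule above can be sketched as a filter over candidate devices. The dictionary shape, task names, and sample devices are assumptions for illustration; the patent does not prescribe a data format.

```python
def select_idle_device(candidates, response_task):
    """Pick the first candidate not already executing the response task
    matched to the target audio data. Other output forms (e.g. displaying
    images) do not disqualify a device (illustrative sketch)."""
    for device in candidates:
        if response_task not in device.get("active_tasks", set()):
            return device["name"]
    return None

# Hypothetical candidates: a speaker already playing audio, and an
# electronic photo frame that is only displaying images.
CANDIDATES = [
    {"name": "smart_speaker", "active_tasks": {"audio_output"}},
    {"name": "photo_frame", "active_tasks": {"image_display"}},
]
```

With these candidates, the photo frame is chosen for an audio output task, mirroring the electronic photo frame example in the text.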
In yet another possible implementation, the processing pattern characterizes a response pattern determined for the request data. If the second audio data includes request data corresponding to the target audio data, determining the second device from candidate electronic devices connected to the first device includes:
determining a response mode based on the request data; a device matching the response pattern is determined from among candidate devices connected to the first device, and the device is determined as the second device.
In this embodiment, the second audio data includes, in addition to the target audio data, request data corresponding to the target audio data. The request data may be a request issued by the user after the function corresponding to the target audio data is executed. For example, when the voice interaction function of the first device is started and the response mode corresponding to the request data requires outputting the response data in audio form, a device with an audio playing function needs to be selected from the candidate devices. For another example, if the request data requires that target information be displayed, a device having a display screen, or capable of projecting one, needs to be selected from the candidate devices.
Of course, if the first device stores associated-device data for the applications that different target audio data can start or control, the second device can be determined directly from the candidate devices using that data. For example, if the starting audio of the first application of the first device is identified in the second audio data, the associated device recorded in the first application's configuration information is obtained; if the associated device is device A, device A is determined as the second device. The associated device can represent an alternative device for cases where a certain function of the first device is occupied or a certain output module fails.
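The two selection rules just described (configured associated device first, otherwise capability matching against the request data's response mode) can be combined in one sketch. Field names, capability labels, and the sample devices are illustrative assumptions.

```python
def select_second_device(candidates, required_capability, associated_device=None):
    """If configuration data names an associated (fallback) device for the
    triggered application, use it directly; otherwise pick the first
    candidate offering the capability the request data demands, e.g.
    "audio_playback" or "display" (illustrative sketch)."""
    names = {c["name"] for c in candidates}
    if associated_device in names:
        return associated_device
    for c in candidates:
        if required_capability in c.get("capabilities", set()):
            return c["name"]
    return None

# Hypothetical connected devices.
CANDIDATES = [
    {"name": "tv", "capabilities": {"display", "audio_playback"}},
    {"name": "speaker", "capabilities": {"audio_playback"}},
]
```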
The acquisition of the second audio data by the first device in the information processing method of the present application will be described below. Referring to fig. 4, a signaling interaction diagram of an information processing method according to an embodiment of the present application is shown. The application scenario corresponding to fig. 4 includes a first device and a second device, where the second device is communicatively connected with the first device. The first device and the second device may access the same WIFI (Wireless Fidelity) network, that is, the same local area network, to implement the communication connection and subsequent data interaction between them. Correspondingly, the first device and the second device may also be connected through other communication manners, which are not described one by one in the embodiments of the present application.
S401, in the process of playing the first audio data by the first equipment, acquiring second audio data of the environment where the first equipment is located;
S402, the first device processes the second audio data;
S403, if the second audio data comprises target audio data matched with the first reference data, the first device generates a second instruction;
S404, the first device sends the second instruction to the second device;
S405, the second device executes the second instruction and outputs a response result.
The first audio data is obtained in the first manner, and the specific form of the first manner may be referred to the description of the corresponding embodiment of fig. 2, which is not described herein.
In the embodiment of fig. 4, the second audio data of the environment of the first device is collected by the first device, but it may of course also be collected by the second device, in which case it must be ensured that the second device is in the same environment as the first device. Since the second audio data is the audio data of the environment where the first device is located, and the first device is playing the first audio data, the second audio data includes the first audio data as well as the environmental audio apart from it. The environmental audio may include audio data output by the user in response to the output of the first device, audio data output by other devices, or audio data output by the user for other devices.
In this embodiment the first device processes the second audio data to identify whether it includes target audio data matching the first reference data; for example, the first reference data characterizes the wake-up word of a target function of the first device. When processing the second audio data, the first device needs to identify whether it includes the wake-up word; if so, the first device generates a second instruction and sends it to the second device, so that the second device outputs a response result according to the second instruction, the response result matching the second audio data. Corresponding to the above example, where the second audio data includes a wake-up word, the second instruction may indicate that the second device needs to perform, in place of the first device, the target function corresponding to the wake-up word. In one possible implementation, the second device may send the response result to the first device for the first device to output, or the first device may designate a corresponding target device to output the response result to the recipient.
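The first device's side of steps S401–S405 can be sketched as a single processing function: recognized environment audio in, second instruction (or nothing) out. The instruction dictionary shape and field names are assumptions for illustration.

```python
def process_environment_audio(recognized_text, wake_word):
    """Sketch of S402-S403: while playing the first audio data, the first
    device processes the captured environment audio; if the wake word
    (target audio data matching the first reference data) is present, it
    generates a second instruction for the connected second device."""
    if wake_word in recognized_text:
        return {
            "instruction": "second",
            "task": "respond_in_place_of_first_device",
            "payload": recognized_text,
        }
    return None  # no target audio data: nothing to send (S404/S405 skipped)
```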
Referring to fig. 5, a schematic diagram of a voice interaction scenario provided by an embodiment of the present application is shown. In fig. 5, the first device is a smart tv 501, and the second device is a smart speaker 502. The smart tv 501 has an application program corresponding to a voice assistant, that is, it may implement a voice interaction function with a user through the voice assistant. And the wake-up word for the voice assistant, i.e., the first reference data, is "hello, television".
In the scenario shown in fig. 5, the smart tv 501 is playing a target video clip, that is, the smart tv 501 plays the picture information of the clip through its display screen and the audio information through its speaker. At the same time, the smart tv 501 collects audio data of its environment, including the audio information it is playing and the audio information output by the user 503. The smart tv 501 then needs to determine whether the collected environmental audio data includes the first reference data corresponding to its voice assistant. Suppose the audio information output by the user 503 is "Hello, television. Please tell me today's weather." The smart tv 501 recognizes that the user's audio includes "hello, television", so the application corresponding to the voice assistant of the smart tv 501 is started. Because the smart tv 501 is outputting the target video clip, in order to give the user a clearer response to the request, an instruction instructing the smart speaker 502 to output the corresponding response result may be generated, so that the smart speaker 502 outputs the response information "Sunny today, with a high of 15℃ and a low of 2℃."
It should be noted that, in the embodiment corresponding to fig. 5, the smart television may process the user's request information through its voice assistant function, generate a response result, and send the result to the smart speaker so that the smart speaker outputs it. In that case, the smart speaker acts as the output executor for the smart television's voice assistant function. In another possible implementation, the smart speaker also has its own voice interaction capability: after recognizing the voice assistant wake-up word, the smart television generates a control instruction to start the smart speaker's voice interaction function and sends it to the smart speaker, so that the smart speaker starts its own voice interaction function and fully replaces the smart television in interacting with the user by voice, that is, the smart speaker receives the user's voice information and outputs the response result matching it.
Referring to fig. 6, a signaling interaction diagram of another information processing method according to an embodiment of the present application is shown. The application scenario corresponding to fig. 6 includes a first device and a second device, where the second device is a device that is in communication connection with the first device. The interaction process of the first device and the second device is as follows:
S601, a first device plays first audio data;
S602, the second device collects second audio data of the environment where it is located;
S603, the second device sends the second audio data to the first device;
S604, the first device receives the second audio data and processes it;
S605, if the second audio data includes target audio data matched with the first reference data, the first device outputs a response result.
It can be seen that in this embodiment the second device connected to the first device performs the collection of the second audio data, and the first device serves as the receiving end. The second audio data is the audio data of the environment in which the second device is located; that environment may be the same as or different from the environment of the first device. After the second device collects the second audio data, it may be unable to recognize it because some of its functions are limited, so the second device sends the second audio data to the first device for processing. For example, the first device may identify whether the second audio data includes target audio data matching the first reference data. In this embodiment the first reference data may be for the first device or for the second device.
After the first device processes the second audio data and determines that it includes the target audio data, the first device executes the first instruction to output a response result. The response result is information corresponding to the second audio data; since the second audio data includes the target audio data, the response result may correspond to the target audio data. For example, if the target audio data corresponds to a certain function of the second device, the response result may be control information for controlling the second device to turn on that function. The second audio data may further include request information. The request information may represent a recognition request, that is, the second device requests the first device to identify whether the second audio data includes the target audio data; the output response result may then be the corresponding recognition result, which may be sent to the second device. For another example, the request information may be a request to interact with a specific application of the first device, in which case the first device may output response information matching the request.
For example, the first device is a smart television and the second device is a smart speaker. While the smart television is playing video, it already has a playback task to respond to, so the smart speaker can collect the audio data of the environment, balancing the tasks performed by the devices in the current environment. After collecting the audio data, the smart speaker sends it to the smart television, which processes it; if the audio data includes the target audio data, the smart television can output the corresponding response result, for example feedback based on the information collected by the smart speaker. Even if the target audio data is aimed at the smart speaker, having the smart television output the response avoids disturbing the video it is playing: since the smart television usually has only one audio player, it can pause the audio of the video while outputting the response result, avoiding a mix of the two audio streams and giving the user a better listening experience.
In some embodiments, receiving audio data from an environment in which a connected second device is located collected by the second device includes:
and receiving second audio data broadcast from a second device, wherein the second audio data is transmitted by the second device after judging that the second audio data does not comprise audio data matched with second reference data.
Wherein the second reference data characterizes data for the second device;
alternatively, the second audio data is transmitted by the second device after determining that the second audio data does not include audio data matched with the second reference data but does include audio data matched with third reference data.
In the first case, the second audio data is transmitted by the second device after it determines that the second audio data does not include audio data matched with the second reference data, where the second reference data characterizes data for the second device. After obtaining the second audio data, the second device identifies it; the second reference data may be data for starting the second device, or data for a specific function or application of the second device. If the second audio data does not include the second reference data, the second device may broadcast the second audio data to other electronic devices connected to it, so that those devices can process or respond to it, thereby avoiding the second audio data being ignored.
Suppose the application triggered by the second reference data is a voice interaction application. In the corresponding embodiment, the second device includes a voice interaction application whose wake-up word is the second reference data. After collecting the second audio data of its environment, the second device determines that the second audio data does not include audio data matched with the second reference data, and then broadcasts the second audio data to the devices connected to it, including the first device. The second audio data continues to be identified, for example by the first device, which may identify whether it contains target data corresponding to the first reference data. Because the second device did not recognize audio data corresponding to the second reference data after collection, its voice interaction application is not awakened; by broadcasting the second audio data to other devices, those devices can identify whether it contains the wake-up words corresponding to their own voice interaction functions. This avoids the problem of voice interaction requests not being responded to in time, does not occupy excessive processing resources of devices executing playback tasks, and avoids interference between the tasks of collecting data and playing data.
In the second case, the second audio data is transmitted by the second device after it determines that the second audio data does not include audio data matched with the second reference data but does include audio data matched with the third reference data. The second device analyzes the second audio data after obtaining it; upon finding that it does not include the second device's second reference data, it continues the analysis to judge whether the second audio data includes third reference data. The third reference data may characterize common reference data of a certain device group, or reference data, stored in the second device, of devices associated with the second device. Thus, when broadcasting the second audio data, the second device broadcasts it to the electronic devices matching the third reference data.
For example, the reference data characterizes the wake-up word of a device's voice assistant. When the second device analyzes the second audio data, it finds that the data does not include the second device's own wake-up word, but further analysis reveals that it includes the wake-up word of a specific voice assistant, such as the wake-up word corresponding to voice assistants of the same type. Assuming that electronic device A, electronic device B, and electronic device C connected to the second device each include that specific voice assistant, the second device will broadcast the second audio data to electronic devices A, B, and C.
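The third-reference-data variant of the broadcast decision can be sketched as follows. The mapping from a shared wake word to the device names supporting it is an illustrative assumption; the patent only says the broadcast targets are the devices matching the third reference data.

```python
def broadcast_targets(recognized_text, own_wake_word, group_wake_words):
    """Sketch of the second device's broadcast decision: handle the audio
    locally if it matches the second reference data (own wake word);
    otherwise broadcast only to devices matching a group-level wake word
    (third reference data)."""
    if own_wake_word in recognized_text:
        return []  # matches the second reference data: handle locally
    for wake_word, device_names in group_wake_words.items():
        if wake_word in recognized_text:
            return list(device_names)  # broadcast only to matching devices
    return []
```

With `group_wake_words = {"hi assistant": ["A", "B", "C"]}`, audio containing "hi assistant" is broadcast to devices A, B, and C, as in the example above.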
In another possible implementation, the second device may store the wake-up words of each device connected to it, or the wake-up words of the target functions corresponding to each device; the second device can then be understood as the master device in the current local area network. After identifying that the second audio data does not include audio data matched with the second reference data, it can make a preliminary judgment, that is, check which devices' wake-up words the second audio data may include, and then broadcast the second audio data to the corresponding devices, which in turn identify it accurately. For example, the first device receives the second audio data and then identifies whether it includes target audio data matching its own first reference data.
In another embodiment of the present application, the first instruction includes: a third instruction for adjusting the playing parameters of the first audio data played by the first device.
Wherein the first device executes the first instruction, comprising:
the first device executes the third instruction and controls the first audio data to be played according to the playing parameters corresponding to the third instruction.
In this embodiment, when the first device recognizes that the second audio data includes the target audio data matched with the first reference data, the first device or another device connected to it needs to output a response result for the function corresponding to the target audio data. To prevent the first audio data being played by the first device from interfering with that response, the first device may, while processing the second audio data, generate a third instruction for controlling the playing parameters of the first audio data, thereby changing how the first audio data is played. Alternatively, the first device receives a third instruction generated by a connected device when that device outputs a response result, and executes it so that the first audio data is played with the playing parameters corresponding to the third instruction. The third instruction may be an instruction to lower the audio playing volume, a pause instruction, or another instruction matching the current playing mode of the first audio data.
Specifically, the first device processes the second audio data to generate the third instruction. Here, the second audio data may be audio data of the second device's environment collected by the second device. The second device sends the second audio data to the first device; when identifying whether the second audio data includes the target audio data matched with the first reference data, the first device can also compare the second audio data with the voiceprint information of the first audio data being played. If the voiceprint information contained in the second audio data matches that of the first audio data, the third instruction is generated, instructing the first device to reduce the playing volume of the first audio data.
For example, while playing audio, the first device receives from the second device audio information that includes the wake-up word of the first device's voice interaction function; upon processing the audio information and recognizing the wake-up word, the first device wakes up its voice interaction function. Based on the comparison of the voiceprint features of the received audio information with those of the audio it is playing, the first device can first reduce the volume of its own playback, then receive the user's voice information through the voice interaction function and output a response result matching it. Further, after completing the voice interaction with the user, the first device generates a volume restoration instruction, that is, it restores the volume of the audio data it was originally playing.
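The volume-down/volume-restore lifecycle of the third instruction can be sketched as a small state holder on the first device. The quartering of the volume and the floor of 10 are arbitrary illustrative choices; the patent only specifies lowering and later restoring the volume.

```python
class FirstDevicePlayer:
    """Sketch of the third instruction's effect on playback: lower the
    volume when the wake word is recognized (and the captured audio's
    voiceprint matches the playing first audio data), then restore it
    once the voice interaction completes."""

    def __init__(self, volume=80):
        self.volume = volume
        self._saved_volume = None

    def on_wake_word(self):
        # Third instruction: reduce playback volume during the interaction.
        self._saved_volume = self.volume
        self.volume = max(10, self.volume // 4)

    def on_interaction_done(self):
        # Volume restoration instruction: return to the original level.
        if self._saved_volume is not None:
            self.volume = self._saved_volume
            self._saved_volume = None
```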
In an embodiment of the present application, referring to fig. 7, there is also provided an electronic device, including:
an output device 701 for playing first audio data, the first audio data being data obtained in a first manner;
obtaining means 702 for obtaining second audio data in a second manner, wherein the first manner and the second manner are different;
processing means 703 for processing the second audio data, and for executing a first instruction if the second audio data includes target audio data matching the first reference data.
In the electronic device, the component structure of the output device 701 is matched to the format of the first audio data to be played. If the first audio data to be played is video data, the corresponding output device 701 includes a display component and an audio playback component; if the first audio data to be played contains only audio, the corresponding output device 701 may include only an audio playback component.
The obtaining means 702 in the electronic device may include a communication component or an acquisition component. When the obtaining means 702 includes the communication component, the electronic device can establish a communication connection with other devices through it, receive second audio data sent by those devices, and also use it to transmit related instructions or other information. If the obtaining means 702 includes an acquisition component, the electronic device can acquire second audio data of the environment where it is located through that component.
The processing device 703 may include one or more processing cores, such as a 4-core or 8-core processor. It may be implemented in at least one hardware form among a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processing device may also comprise a main processor, also called a CPU (Central Processing Unit), for processing data in the awake state, and a coprocessor, a low-power processor for processing data in the standby state. In some embodiments, the processor may integrate an audio processor that processes the audio to be identified. In some embodiments, the processor may also include an AI (Artificial Intelligence) processor, which handles computation related to machine learning. For example, in embodiments of the application, an AI processor may be used to identify voiceprint features of audio data.
Corresponding to the specific implementation manner of the present application, the electronic device in the embodiment of the present application may further include structures such as an audio component, a display component, a sound pickup component, a connection component, various auxiliary circuits, and a data bus.
In some embodiments, when the processing device 703 in the electronic device executes the first instruction, it may send a second instruction to a second device connected to the electronic device, so that the second device outputs a response result, where the response result matches the second audio data.
Based on the above embodiments, the obtaining means 702 comprises an acquisition component;
the acquisition component acquires second audio data of the environment where the electronic device is located.
Correspondingly, the processing means 703 is further configured to determine the second device from the candidate electronic devices connected to the electronic device.
Specifically, determining the second device from the candidate electronic devices connected to the electronic device includes:
selecting, from the candidate devices connected to the electronic device, a device that is not executing a response task, and determining that device as the second device, wherein the response task matches the target audio data.
Or when the second audio data includes the request data corresponding to the target audio data, determining the second device from the candidate electronic devices connected to the electronic device, including:
determining a response mode based on the request data; and selecting, from the candidate devices connected to the electronic device, a device matching the response mode, and determining that device as the second device.
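The two selection paths above — prefer an idle device, or match a response mode derived from the request data — can be sketched as follows. The candidate fields (`name`, `busy`, `modes`) and the visual/audio heuristic are illustrative assumptions, not part of the source.

```python
def choose_second_device(candidates, request_data=None):
    """Pick a second device from the candidates connected to the first device.

    Each candidate is a dict with hypothetical fields:
    {"name": str, "busy": bool, "modes": set of supported response modes}.
    """
    if request_data is not None:
        # Response-mode path: derive the required mode from the request data
        # (assumed heuristic: visual requests need a display-capable device).
        mode = "display" if request_data.get("wants_visual") else "audio"
        return next((d for d in candidates if mode in d["modes"]), None)
    # Idle-device path: prefer a device not currently executing a response task.
    return next((d for d in candidates if not d["busy"]), None)
```

With a busy smart speaker and an idle TV among the candidates, the idle-device path picks the TV, while a purely audio request is routed to the first audio-capable candidate.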
In some embodiments, the obtaining means 702 comprises a communication component through which the electronic device connects to a second device and receives second audio data from an environment in which the connected second device is located, the second audio data being collected by the connected second device;
correspondingly, the processing device 703 is specifically configured to:
and outputting a response result, wherein the response result is information corresponding to the second audio data.
On the basis of the above embodiment, the communication component is further configured to receive second audio data broadcast by a second device, where the second audio data is transmitted by the second device after it determines that the second audio data does not include audio data matching the second reference data.
Optionally, the first instruction executed by the processing device 703 includes: a third instruction for adjusting parameters of the first audio data played by the electronic device;
correspondingly, the processing device 703 is specifically configured to:
executing the third instruction, and controlling the first audio data to play according to the playing parameters corresponding to the third instruction.
For the functions and execution processes of the constituent devices of the electronic apparatus, refer to any of the information processing methods described above and the steps associated with them; detailed descriptions are omitted here.
There is also provided in an embodiment of the present application an information processing apparatus applied to a first device, see fig. 8, including:
a data acquisition unit 801 for obtaining second audio data in a second manner while playing first audio data obtained in a first manner, wherein the first manner and the second manner are different;
a processing unit 802, configured to process the second audio data;
an execution unit 803 for executing the first instruction if the second audio data includes target audio data that matches the first reference data.
Further, the execution unit is specifically configured to:
and sending a second instruction to a second device connected with the first device, so that the second device outputs a response result, wherein the response result is matched with the second audio data.
Optionally, the data acquisition unit includes an acquisition subunit, where the acquisition subunit is configured to acquire second audio data of an environment where the first device is located.
Optionally, the apparatus further comprises:
a device determining unit configured to determine the second device from among candidate electronic devices connected to the first device;
Wherein the device determining unit is specifically configured to: select, from the candidate devices connected to the first device, a device that is not executing a response task, and determine that device as the second device, wherein the response task matches the target audio data.
When the second audio data includes request data corresponding to the target audio data, the device determining unit is specifically configured to:
determining a response mode based on the request data; and determining a device matched with the response mode from candidate devices connected with the first device, and determining the device as the second device.
Optionally, the data acquisition unit includes a receiving subunit, configured to receive second audio data from the environment where the second device is located, where the second audio data is collected by the connected second device;
the execution unit is specifically configured to:
and outputting a response result, wherein the response result is information corresponding to the second audio data.
On the basis of the above embodiment, the receiving subunit is specifically configured to:
and receiving second audio data broadcast by a second device, wherein the second audio data is transmitted by the second device after it determines that the second audio data does not include audio data matching second reference data.
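The broadcast condition above — the second device responds locally when the captured audio matches its own (second) reference data, and only broadcasts it to connected devices otherwise — can be sketched as follows. Keyword matching on a transcript stands in for real audio matching, and all names here are illustrative.

```python
def handle_on_second_device(transcript, local_reference_words, broadcast):
    """Second-device side of the broadcast rule: if the captured audio
    matches the device's own reference data, it is handled locally and
    never broadcast; otherwise it is forwarded (e.g. to the first device)."""
    if any(word in transcript for word in local_reference_words):
        return "handled locally"   # matches second reference data: no broadcast
    broadcast(transcript)          # no local match: forward to connected devices
    return "broadcast"
```

For example, a second device whose reference data covers lighting commands handles "turn on the light" itself, while "play some music" is broadcast for another device to answer.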
Optionally, the first instruction includes: a third instruction for adjusting parameters of the first audio data played by the first device;
the execution unit is specifically configured to:
executing the third instruction, and controlling the first audio data to be played according to the playing parameters corresponding to the third instruction.
An embodiment of the present application provides a storage medium having a program stored thereon which, when executed by a processor, implements the information processing method.
The embodiment of the application provides a processor for running a program, wherein the information processing method is executed when the program runs.
The present application also provides a computer program product which, when executed on a data processing device, is adapted to execute a program carrying out the steps of any of the information processing methods described above.
Those of ordinary skill in the art will appreciate that all or part of the steps for implementing the above method embodiments may be carried out by hardware under the control of program instructions. The foregoing program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The aforementioned storage medium includes: a removable storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
Alternatively, the above-described integrated units of the present application, if implemented in the form of software functional modules and sold or used as independent products, may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, which includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic disk, an optical disk, or other media capable of storing program code.
In this specification, each embodiment is described with emphasis on its differences from the other embodiments; for identical or similar parts, the embodiments may be referred to one another. Since the device disclosed in an embodiment corresponds to the method disclosed in that embodiment, its description is relatively brief; for relevant details, refer to the description of the method.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. An information processing method, comprising:
obtaining second audio data in a second mode in the process of playing the first audio data obtained in the first mode by the first device, wherein the first mode and the second mode are different;
the first device processing the second audio data;
if the second audio data includes target audio data that matches first reference data, the first device executes a first instruction;
wherein the first device executing the first instruction comprises:
the first device sends a second instruction to a second device connected to the first device, so that the second device, instead of the first device, responds to related audio data in the second audio data, thereby preventing the output of the first audio data from being interrupted, and the second device outputs a response result.
2. The method of claim 1, the obtaining second audio data in a second manner, comprising: and collecting second audio data of the environment where the first equipment is located.
3. The method of claim 1, the method further comprising:
the first device determines the second device from candidate electronic devices connected with the first device;
Wherein the determining the second device from candidate electronic devices connected to the first device includes: selecting, from the candidate devices connected to the first device, a device that is not executing a response task, and determining that device as the second device, wherein the response task matches the target audio data.
4. The method of claim 1, the method further comprising:
the first device determines the second device from candidate electronic devices connected with the first device;
wherein the second audio data includes request data corresponding to the target audio data, and the determining the second device from candidate electronic devices connected to the first device includes:
determining a response mode based on the request data; and determining a device matched with the response mode from candidate devices connected with the first device, and determining the device as the second device.
5. The method of claim 1, the obtaining second audio data in a second manner, comprising: receiving second audio data from an environment where a connected second device is located, wherein the second audio data is acquired by the connected second device;
The first device executes a first instruction comprising:
the first device outputs a response result, wherein the response result is information corresponding to the second audio data.
6. The method of claim 5, wherein the receiving the second audio data from the environment in which the second device is located, the second audio data collected by the connected second device, comprises:
and receiving second audio data broadcast from a second device, wherein the second audio data is transmitted by the second device after judging that the second audio data does not comprise audio data matched with second reference data.
7. The method of claim 1, the first instruction comprising: a third instruction for adjusting parameters of the first audio data played by the first device;
the first device executes a first instruction comprising:
and the first device executes the third instruction and controls the first audio data to be played according to the playing parameters corresponding to the third instruction.
8. An information processing apparatus comprising:
a data acquisition unit configured to acquire second audio data in a second manner in playing first audio data acquired in a first manner, wherein the first manner and the second manner are different;
A processing unit for processing the second audio data;
an execution unit configured to execute a first instruction if the second audio data includes target audio data that matches first reference data;
wherein the executing the first instruction includes:
and sending a second instruction to other equipment connected with the information processing device, so that the other equipment responds to related audio data in the second audio data instead of the information processing device, the output of the first audio data is prevented from being interrupted, and the other equipment outputs a response result.
9. An electronic device, comprising:
the output device is used for playing first audio data, wherein the first audio data are obtained in a first mode;
obtaining means for obtaining second audio data in a second manner, wherein the first manner and the second manner are different;
processing means for processing the second audio data; executing a first instruction if the second audio data includes target audio data that matches the first reference data;
wherein the executing the first instruction includes:
and the electronic equipment sends a second instruction to other equipment connected with the electronic equipment, so that the other equipment responds to related audio data in the second audio data instead of the electronic equipment, the output of the first audio data is prevented from being interrupted, and the other equipment outputs a response result.
CN201911423707.2A 2019-12-31 2019-12-31 Information processing method and device and electronic equipment Active CN111210819B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911423707.2A CN111210819B (en) 2019-12-31 2019-12-31 Information processing method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN111210819A CN111210819A (en) 2020-05-29
CN111210819B true CN111210819B (en) 2023-11-21

Family

ID=70784986

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911423707.2A Active CN111210819B (en) 2019-12-31 2019-12-31 Information processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111210819B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107734370A (en) * 2017-10-18 2018-02-23 北京地平线机器人技术研发有限公司 Information interacting method, information interactive device and electronic equipment
CN108156497A (en) * 2018-01-02 2018-06-12 联想(北京)有限公司 A kind of control method, control device and control system
CN108899024A (en) * 2018-06-29 2018-11-27 联想(北京)有限公司 A kind of audio-frequency processing method, electronic equipment and server
CN110459221A (en) * 2019-08-27 2019-11-15 苏州思必驰信息科技有限公司 The method and apparatus of more equipment collaboration interactive voices
CN110534108A (en) * 2019-09-25 2019-12-03 北京猎户星空科技有限公司 A kind of voice interactive method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI602437B (en) * 2015-01-12 2017-10-11 仁寶電腦工業股份有限公司 Video and audio processing devices and video conference system




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant